CN110414611A - Image classification method and device, feature extraction network training method and device - Google Patents
Image classification method and device, feature extraction network training method and device
- Publication number: CN110414611A
- Application number: CN201910702994.4A
- Authority: CN (China)
- Prior art keywords: feature, network, training, image, extraction
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
          - G06F18/24—Classification techniques
Abstract
This disclosure relates to an image classification method and device, and a feature extraction network training method and device. The image classification method includes: inputting an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed; inputting the first feature into M second feature extraction networks respectively to obtain M second features; and obtaining a classification result of the image to be processed according to the M second features. The image classification method according to embodiments of the present disclosure can extract different features of an image through the multiple second feature extraction networks respectively, making the feature information richer and improving the accuracy of classification.
Description
Technical field
The present disclosure relates to the field of computer technology, and in particular to an image classification method and device, and a feature extraction network training method and device.
Background
In the related art, when a neural network extracts features of a target object in an image, it treats the target object as a whole. However, factors such as the appearance and posture of the target object, its proportion in the image, the complexity of the background, and the completeness of the target object in the image may affect the extraction result. If multiple features of the target object are extracted instead, the neural network has more parameters to train and training becomes more difficult.
Summary of the invention
The present disclosure provides an image classification method and device, and a feature extraction network training method and device.
According to an aspect of the present disclosure, an image classification method is provided, including:
inputting an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed;
inputting the first feature into M second feature extraction networks respectively to obtain M second features;
obtaining a classification result of the image to be processed according to the M second features.
The image classification method according to embodiments of the present disclosure can extract different features of an image through the multiple second feature extraction networks respectively, making the feature information richer and improving the accuracy of classification.
In a possible implementation, the first feature is the output feature of a preset network level in the first feature extraction network, and obtaining the classification result of the image to be processed according to the M second features includes:
performing feature extraction on the first feature through the levels of the first feature extraction network subsequent to the preset network level, to obtain a third feature;
obtaining the classification result of the image to be processed according to the third feature and the M second features.
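As a concrete illustration of this pipeline (not part of the original disclosure), the following sketch models the first feature extraction network and the M second feature extraction networks as toy linear-plus-ReLU maps in NumPy; all names, dimensions, and the concatenation-based fusion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear(in_dim, out_dim):
    # A toy "network": one random linear map followed by ReLU.
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# Shared first feature extraction network (stands in for the early layers
# of a CNN) and M independent second feature extraction branches.
M = 3
first_net = make_linear(16, 8)
second_nets = [make_linear(8, 4) for _ in range(M)]

def extract_features(image_vec):
    first = first_net(image_vec)               # first feature
    seconds = [g(first) for g in second_nets]  # M second features
    return np.concatenate(seconds)             # fused descriptor for classification

desc = extract_features(rng.standard_normal(16))
print(desc.shape)  # (12,)
```

Each branch sees the same shared first feature but applies its own parameters, so the branches can specialize to different categories of features.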
According to an aspect of the present disclosure, a feature extraction network training method is provided, including:
inputting a sample image into a first feature extraction network to obtain a first training feature of the sample image;
inputting the first training feature into M second feature extraction networks respectively to obtain M second training features;
training the M second feature extraction networks according to the M second training features.
The neural network training method according to embodiments of the present disclosure can extract different features of a sample image through the multiple second feature extraction networks respectively, improving the performance of the neural network, and can train the multiple second feature extraction networks in batches, reducing training difficulty and improving training efficiency.
In a possible implementation, training the M second feature extraction networks according to the M second training features includes:
training N groups of second feature extraction networks in batches according to the M second training features, where each group of second feature extraction networks includes at least one of the M second feature extraction networks, and M and N are integers with 1 < N ≤ M.
In a possible implementation, training the N groups of second feature extraction networks in batches according to the M second training features includes:
determining a first network loss of the i-th group of second feature extraction networks according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image, where i is an integer and 1 ≤ i ≤ N;
training the i-th group of second feature extraction networks according to the first network loss of the i-th group of second feature extraction networks.
In a possible implementation, training the N groups of feature extraction networks in batches according to the M second training features includes:
training the N groups of second feature extraction networks and the first feature extraction network in batches according to the first training feature and the M second training features.
In a possible implementation, during the training of the i-th group of second feature extraction networks, the feature distance between the second training features in the k-th training cycle is greater than the feature distance between the second training features in the (k-1)-th training cycle, where k is an integer greater than 1.
In this way, each second feature extraction network can be made to extract a different feature of the sample image, reducing the possibility that multiple second feature extraction networks extract identical or similar second training features. This reduces the information redundancy between the multiple features, improves processing efficiency, and makes the features extracted by the multiple second feature extraction networks richer, which helps improve the accuracy of the classification result.
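One way to encourage the growing feature distance described above is to penalize pairwise similarity between the branch outputs during training. The patent does not specify a concrete loss form, so the following is a hypothetical NumPy sketch of such a diversity penalty:

```python
import numpy as np

def diversity_loss(features):
    # features: list of feature vectors, one per second feature extraction
    # network. Penalize squared pairwise cosine similarity so that the
    # branches are pushed toward extracting distinct features.
    loss, count = 0.0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            a, b = features[i], features[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
            loss += cos ** 2
            count += 1
    return loss / count
```

Minimizing this term (added to the classification loss) drives the branch features apart, so the distance between second training features tends to increase from one training cycle to the next.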
In a possible implementation, after training the N groups of second feature extraction networks in batches according to the M second training features, the method further includes:
determining third network losses of the first feature extraction network and the M second feature extraction networks respectively, according to the first training feature, the M second training features, and the annotation information of the sample image;
adjusting the network parameters of the first feature extraction network and the M second feature extraction networks according to the third network losses of the first feature extraction network and the M second feature extraction networks.
In a possible implementation, the method further includes:
performing feature extraction on an image to be processed at least through the trained first feature extraction network, to obtain a fourth feature of the image to be processed;
obtaining a classification result for a target object in the image to be processed according to the fourth feature.
In a possible implementation, performing feature extraction on the image to be processed at least through the trained first feature extraction network to obtain the fourth feature of the image to be processed includes:
performing feature extraction on the image to be processed through the trained first feature extraction network and at least one trained second feature extraction network, to obtain at least one fourth feature of the image to be processed.
According to an aspect of the present disclosure, an image classification device is provided, including:
a first extraction module, configured to input an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed;
a second extraction module, configured to input the first feature into M second feature extraction networks respectively to obtain M second features;
an obtaining module, configured to obtain a classification result of the image to be processed according to the M second features.
In a possible implementation, the first feature is the output feature of a preset network level in the first feature extraction network, and the second extraction module is further configured to:
perform feature extraction on the first feature through the levels of the first feature extraction network subsequent to the preset network level, to obtain a third feature;
obtain the classification result of the image to be processed according to the third feature and the M second features.
According to an aspect of the present disclosure, a feature extraction network training device is provided, including:
a third extraction module, configured to input a sample image into a first feature extraction network to obtain a first training feature of the sample image;
a fourth extraction module, configured to input the first training feature into M second feature extraction networks respectively to obtain M second training features;
a training module, configured to train the M second feature extraction networks according to the M second training features.
In a possible implementation, the training module is further configured to:
train N groups of second feature extraction networks in batches according to the M second training features, where each group of second feature extraction networks includes at least one of the M second feature extraction networks, and M and N are integers with 1 < N ≤ M.
In a possible implementation, the training module is further configured to:
determine a first network loss of the i-th group of second feature extraction networks according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image, where i is an integer and 1 ≤ i ≤ N;
train the i-th group of second feature extraction networks according to the first network loss of the i-th group of second feature extraction networks.
In a possible implementation, the training module is further configured to:
train the N groups of second feature extraction networks and the first feature extraction network in batches according to the first training feature and the M second training features.
In a possible implementation, during the training of the i-th group of second feature extraction networks, the feature distance between the second training features in the k-th training cycle is greater than the feature distance between the second training features in the (k-1)-th training cycle, where k is an integer greater than 1.
In a possible implementation, the device further includes:
a determining module, configured to determine third network losses of the first feature extraction network and the M second feature extraction networks respectively, according to the first training feature, the M second training features, and the annotation information of the sample image;
an adjusting module, configured to adjust the network parameters of the first feature extraction network and the M second feature extraction networks according to the third network losses of the first feature extraction network and the M second feature extraction networks.
In a possible implementation, the device further includes:
a fifth extraction module, configured to perform feature extraction on an image to be processed at least through the trained first feature extraction network, to obtain a fourth feature of the image to be processed;
a classification module, configured to obtain a classification result for a target object in the image to be processed according to the fourth feature.
In a possible implementation, the fifth extraction module is further configured to:
perform feature extraction on the image to be processed through the trained first feature extraction network and at least one trained second feature extraction network, to obtain at least one fourth feature of the image to be processed.
According to an aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the above method.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement the above method.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the disclosure.
Fig. 1 shows a flow chart of an image classification method according to an embodiment of the present disclosure;
Fig. 2 shows a flow chart of a feature extraction network training method according to an embodiment of the present disclosure;
Fig. 3 shows an application schematic diagram of a neural network training method according to an embodiment of the present disclosure;
Fig. 4 shows a block diagram of an image classification device according to an embodiment of the present disclosure;
Fig. 5 shows a block diagram of a feature extraction network training device according to an embodiment of the present disclosure;
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description of embodiments
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" should not be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, "including at least one of A, B, and C" may indicate including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description to better illustrate the present disclosure. Those skilled in the art will appreciate that the present disclosure may be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 shows a flow chart of an image classification method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
In step S11, inputting an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed;
In step S12, inputting the first feature into M second feature extraction networks respectively to obtain M second features;
In step S13, obtaining a classification result of the image to be processed according to the M second features.
The image classification method according to embodiments of the present disclosure can extract different features of an image through the multiple second feature extraction networks respectively, making the feature information richer and improving the accuracy of classification.
In a possible implementation, the image classification method may be executed by a terminal device or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The other processing device may be a server, a cloud server, or the like. In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory.
In a possible implementation, in step S11, the first feature extraction network may be a deep learning neural network with multiple network levels, such as a convolutional neural network, or may be another network capable of feature extraction, which is not limited here. The image to be processed may include one or more target objects, and a target object may be a person, a vehicle, an article, or the like.
In a possible implementation, the image to be processed may be input into the first feature extraction network, and the multiple network levels of the first feature extraction network may perform feature extraction successively. For example, the first network level may perform feature extraction on the input image, the second network level may perform feature extraction on the feature obtained by the first network level, and so on. In an example, as the network levels deepen, the size of the feature obtained by each level decreases while its receptive field increases.
In a possible implementation, the first feature is the output feature of a preset network level in the first feature extraction network; that is, the feature obtained by the preset level of the first feature extraction network may be determined as the first feature. For example, the preset level may be an intermediate level of the first feature extraction network: if the first feature extraction network includes ten network levels in total, the fifth network level may serve as the preset level, and the feature obtained by the fifth network level may be determined as the first feature. The present disclosure places no restriction on the number of network levels included in the first feature extraction network or on the preset level.
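The idea of tapping an intermediate level can be sketched as follows (an illustration, not the disclosed implementation; the ten-level structure and dimensions are the example values from the text above, modeled as toy linear-plus-ReLU levels):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten toy "network levels"; dims[k] -> dims[k+1] is the k-th level's mapping.
# Feature size shrinks as levels deepen, as described in the text.
dims = [16, 16, 16, 16, 8, 8, 8, 8, 4, 4, 4]
levels = [rng.standard_normal((dims[k], dims[k + 1])) * 0.1 for k in range(10)]

def run_levels(x, upto):
    # Run the input through the first `upto` levels.
    for W in levels[:upto]:
        x = np.maximum(x @ W, 0.0)
    return x

x = rng.standard_normal(16)
first_feature = run_levels(x, upto=5)   # output of the preset (fifth) level
third_feature = run_levels(x, upto=10)  # output after the subsequent levels
print(first_feature.shape, third_feature.shape)  # (8,) (4,)
```

The fifth level's output serves as the first feature fed to the second feature extraction branches, while the remaining levels continue processing it into the smaller third feature.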
In a possible implementation, the first feature may continue to be input into the network levels of the first feature extraction network subsequent to the preset level for processing, and the feature output by the subsequent levels may be obtained. In this example, a third feature that is smaller than the first feature but has a larger receptive field may be obtained; for example, the third feature may have more channels than the first feature, while the feature of each channel is smaller than that of the first feature.
In a possible implementation, in step S12, the first feature extraction network can extract a certain category of feature of the image, for example, features such as key points of the target object. However, in cases where the target object's appearance is blurred, its posture is complex, its proportion in the image is small, the background is complex, or the target object is incomplete in the image, a single category of feature can hardly achieve high classification accuracy.
In a possible implementation, by performing feature extraction on the image separately through multiple second feature extraction networks, multiple categories of features can be extracted. For example, M (M is an integer greater than 1) second feature extraction networks G1, G2, …, GM may each perform feature extraction on the first feature of the image, obtaining M second features.
In a possible implementation, the first feature may be deep-copied; for example, M deep copies of the first feature may be made, obtaining M copies of the first feature. The M copies of the first feature are then input into the M second feature extraction networks respectively for feature extraction, obtaining M second features.
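The deep-copy step can be illustrated with Python's standard library (a minimal sketch; the values are arbitrary):

```python
import copy
import numpy as np

# The first feature is deep-copied M times so that each second feature
# extraction network receives its own independent copy.
first_feature = np.zeros(4)
M = 3
copies = [copy.deepcopy(first_feature) for _ in range(M)]

copies[0][0] = 99.0          # one branch may modify its copy in place...
print(first_feature[0])      # 0.0 — ...without affecting the original
```

Independent copies ensure that in-place operations inside one branch cannot leak into the inputs of the other branches.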
In a possible implementation, in step S13, the classification result of the image to be processed may be determined according to the M second features. In an example, the classification result may be used to indicate the category of the image to be processed; the present disclosure places no restriction on the classification result.
In an example, the image to be processed and a reference image may each be input into the first feature extraction network, and each second feature extraction network may obtain the second features of the image to be processed and the reference features of the reference image respectively. The feature similarity (for example, cosine similarity) between the second features of the image to be processed and the reference features of the reference image may then be determined. For example, if the feature similarity between the second features of the image to be processed and the reference features of the reference image is greater than or equal to a similarity threshold, the target object in the image to be processed and the target object in the reference image belong to the same category; that is, the image to be processed and the reference image may be classified into the same category. Otherwise, the image to be processed and the reference image may be classified into different categories. Alternatively, the image to be processed may be compared with multiple reference images using the second features: the feature similarity between the second features of the image to be processed and the reference features of each reference image is determined, the target reference features with the highest feature similarity are identified, and the image to be processed is classified into the same category as the reference image corresponding to the target reference features. Or, the second features of multiple images to be processed may each be compared with the reference features of one or more reference images to classify the multiple images to be processed; for example, the category of each video frame in a video may be determined separately (for example, video frames that include the target object may be classified into one category, and video frames that do not include the target object into another).
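The reference-matching logic above can be sketched as follows (an illustration only; the per-branch averaging, class names, and threshold value are assumptions, since the text does not fix how the M per-branch similarities are combined):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def match_reference(query_feats, ref_feats_by_class, threshold=0.5):
    # query_feats: one feature vector per second branch for the query image.
    # ref_feats_by_class: class name -> list of per-branch reference features.
    # Average the per-branch similarities; return the best class above threshold.
    best_cls, best_sim = None, -1.0
    for cls, refs in ref_feats_by_class.items():
        sim = np.mean([cosine_sim(q, r) for q, r in zip(query_feats, refs)])
        if sim > best_sim:
            best_cls, best_sim = cls, sim
    return best_cls if best_sim >= threshold else None
```

A query whose branch features are close to one class's reference features is assigned to that class; if no class clears the threshold, the query is left unmatched.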
In a possible implementation, the first feature extraction network may also process the first feature through the levels subsequent to the preset level to obtain the classification result. Step S13 may include: performing feature extraction on the first feature through the levels of the first feature extraction network subsequent to the preset network level, to obtain a third feature; and obtaining the classification result of the image to be processed according to the third feature and the M second features.
In an example, M second features may be obtained through the M second feature extraction networks, a third feature may be obtained through the levels of the first feature extraction network subsequent to the preset network level, and the classification result of the image to be processed may be determined from the second features and the third feature. For example, the image to be processed and a reference image may each be input into the first feature extraction network: the third feature of the image to be processed is obtained through the first feature extraction network, the second features of the image to be processed are obtained through each second feature extraction network, and multiple reference features of the reference image are obtained through the first feature extraction network and the second feature extraction networks respectively. Further, the third feature and the M second features may each be compared with the corresponding reference features to determine feature similarities, and whether the image to be processed and the reference image belong to the same category may be determined according to the feature similarities.
The image classification according to embodiments of the present disclosure can extract different features of an image through the multiple second feature extraction networks respectively, and extract a third feature through the first feature extraction network, making the feature information richer and improving the accuracy of classification.
Fig. 2 shows a flow chart of a feature extraction network training method according to an embodiment of the present disclosure. As shown in Fig. 2, the method includes:
In step S21, inputting a sample image into a first feature extraction network to obtain a first training feature of the sample image;
In step S22, inputting the first training feature into M second feature extraction networks respectively to obtain M second training features;
In step S23, training the M second feature extraction networks according to the M second training features.
The neural network training method according to embodiments of the present disclosure can extract different features of a sample image through the multiple second feature extraction networks respectively, improving the performance of the neural network, and can train the multiple second feature extraction networks in batches, reducing training difficulty and improving training efficiency.
In a possible implementation, in step S21, the sample image may be input into the first feature extraction network, and the multiple network levels of the first feature extraction network may perform feature extraction successively. A network level may include a convolutional layer, an activation layer, and the like. In this example, the first feature extraction network may include multiple network levels, where a preset level (for example, an intermediate level) of the first feature extraction network may output the first training feature of the sample image.
In a possible implementation, in step S22, the first training feature may be input into the M second feature extraction networks respectively to obtain M second training features. For example, M deep copies of the first training feature may be made in the manner described above, obtaining M copies of the first training feature, and the M copies may be input into the M second feature extraction networks respectively for feature extraction, obtaining M second training features.
In a possible implementation, in step S23, the M second feature extraction networks may be trained according to the M second training features.
In a possible implementation, the M second feature extraction networks have many network parameters, and training them all simultaneously is difficult and computationally expensive. The M second feature extraction networks may therefore be grouped and trained in batches, where step S23 may include: training N groups of second feature extraction networks in batches according to the M second training features, where each group of second feature extraction networks includes at least one of the M second feature extraction networks, and M and N are integers with 1 < N ≤ M.
In an example, the M second feature extraction networks may be divided into N (N is an integer and 1 < N ≤ M) groups, and each group of second feature extraction networks and the first feature extraction network may be trained in turn. For example, the second feature extraction networks G1, G2, …, GM may be divided into N groups: the first group is G11, G12, …, G1Q (where Q is a positive integer), the second group is G21, G22, …, G2W (where W is a positive integer), …, the i-th group is Gi1, Gi2, …, GiE (where E is a positive integer), …, and the N-th group is GN1, GN2, …, GNR (where R is a positive integer). In this example, while the i-th (i is an integer and 1 ≤ i ≤ N) group of second feature extraction networks is being trained, the network parameters of the other groups of second feature extraction networks remain unchanged.
In one possible implementation, according to the M the second training characteristics, station work N group second feature is mentioned
Take network can include: the markup information of corresponding second training characteristics of network and sample image is extracted according to i-th group of second feature,
Determine that i-th group of second feature extracts the first network loss of network, i is integer and 1≤i≤N;It is mentioned according to i-th group of second feature
The first network of network is taken to lose, training i-th group of second feature extracts network.
In one possible implementation, the fisrt feature that can be completed by training extracts network and extracts sample image
First training characteristics, and each group second feature can be trained to extract network in batches.For example, the 1st group of second feature extraction network can
It is trained in the 1st trained batch, the 2nd group of second feature is extracted network and can be trained in the 2nd trained batch ...
N group second feature is extracted network and can be trained in n-th training batch.
In one possible implementation, taking the i-th training batch as an example, the i-th training batch can train the i-th group of second feature extraction networks. In this example, when the i-th group of second feature extraction networks is being trained, the network parameters of the other groups of second feature extraction networks remain unchanged. In this example, the i-th training batch can train the i-th group of second feature extraction networks multiple times (for example, the i-th training batch may include multiple training cycles), that is, the network parameters of the i-th group of second feature extraction networks and the first feature extraction network are adjusted multiple times. Each training batch can adjust the network parameters of the second feature extraction networks to be trained in that batch multiple times.
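The batch schedule above (batch i runs several parameter-adjustment cycles on group i only) can be sketched as follows; the `training_schedule` helper and its parameters are hypothetical, written only to make the iteration order concrete.

```python
def training_schedule(n_groups, cycles_per_batch):
    """Enumerate (batch, cycle) pairs: training batch i performs
    `cycles_per_batch` parameter adjustments on group i alone."""
    schedule = []
    for batch in range(n_groups):
        for cycle in range(cycles_per_batch):
            schedule.append((batch, cycle))
    return schedule

# N = 3 groups, each trained for 2 cycles within its own batch.
schedule = training_schedule(3, 2)
```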
In one possible implementation, the network loss of the i-th training batch of the i-th group of second feature extraction networks can be determined according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image. In this example, the feature of the l-th layer (l is a positive integer) of the first feature extraction network can be taken as the first training feature h_l, and the M second feature extraction networks can each process the first training feature h_l to obtain the second training features f1, f2, …, fM respectively, where the i-th group of second feature extraction networks is Gi1, Gi2, …, GiE, and the second training features obtained by the i-th group of second feature extraction networks are fi1, fi2, …, fiE. The network loss L of the i-th group of second feature extraction networks can be determined according to the second training features fi1, fi2, …, fiE obtained by the i-th group of second feature extraction networks and the annotation information of the sample image.
In this example, the annotation information of the sample image can be the feature similarity between the sample image and a reference image. In this example, if the target object in the sample image and the reference image is the same person, the feature similarity between the sample image and the reference image is annotated as 1; if the target object in the sample image and the reference image is not the same person, the feature similarity between the sample image and the reference image is annotated as 0.
In this example, the features of the reference image can be extracted by the first feature extraction network and the M second feature extraction networks, and the feature similarity between the second training feature of the sample image extracted by each second feature extraction network in the i-th group and the corresponding second training feature of the reference image can be determined. The network loss of the i-th group of second feature extraction networks is then determined according to the difference between this feature similarity and the annotation information. For example, the difference between the feature similarity corresponding to each second feature extraction network in the i-th group and the annotation of the sample image can be determined, and the first network loss L of the i-th group of second feature extraction networks can be determined from these differences. In this example, the feature similarity can be expressed as a cosine similarity, for example, the cosine similarity between the output feature and the output feature of the reference image. The present disclosure places no restriction on the method of determining the network loss.
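A minimal sketch of such a loss, assuming the squared difference between each network's cosine similarity and the 0/1 annotation; the patent leaves the exact loss form open, so the squared-error combination here is an assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def group_loss(second_features, reference_features, label):
    """Sum, over the group's networks, of the squared difference between
    each network's cosine similarity and the annotated label
    (1 = same person, 0 = different person)."""
    loss = 0.0
    for f, r in zip(second_features, reference_features):
        loss += (cosine_similarity(f, r) - label) ** 2
    return loss
```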
In this example, the i-th group of second feature extraction networks can be trained according to the first network loss L of the i-th group of second feature extraction networks. For example, the gradient ∂L/∂p of the first network loss L with respect to each parameter of the i-th group of second feature extraction networks can be determined (p denotes any network parameter of the i-th group of second feature extraction networks), and each network parameter can be adjusted by gradient descent in the direction that minimizes the network loss.
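A toy illustration of the gradient-descent adjustment. For self-containedness the gradient is estimated by finite differences on a one-parameter quadratic loss, an assumption made only for the sketch; a real network would obtain ∂L/∂p by backpropagation.

```python
def numerical_gradient(loss_fn, params, eps=1e-6):
    """Finite-difference estimate of dL/dp for each parameter p."""
    grads = []
    for j in range(len(params)):
        bumped = list(params)
        bumped[j] += eps
        grads.append((loss_fn(bumped) - loss_fn(params)) / eps)
    return grads

def gradient_descent_step(params, grads, lr=0.1):
    """Move each parameter against its gradient to reduce the loss."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimize L(p) = (p - 3)^2: repeated steps drive p toward 3.
loss = lambda p: (p[0] - 3.0) ** 2
params = [0.0]
for _ in range(50):
    params = gradient_descent_step(params, numerical_gradient(loss, params))
```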
In one possible implementation, in addition to adjusting the network parameters in the direction that minimizes the network loss, a distance constraint can also be added: the feature distance between the second training features of the k-th training cycle is greater than the feature distance between the second training features of the (k-1)-th training cycle, where k is an integer greater than 1.
In this example, taking the i-th group of second feature extraction networks as an example, the network parameters can be adjusted in the direction that increases the feature distance between the second training features obtained by the i-th group of second feature extraction networks. For example, the i-th training batch trains the i-th group of second feature extraction networks multiple times (that is, the network parameters are adjusted multiple times), where after the k-th adjustment of the network parameters (the k-th training cycle), the feature distance between the second training features obtained by the i-th group of second feature extraction networks is greater than the feature distance between the second training features after the previous adjustment (the (k-1)-th training cycle). The feature distance may include a Euclidean distance or a cosine distance, etc.; the present disclosure places no restriction on the definition of the feature distance. By adding the distance constraint, training makes the parameter differences between the second feature extraction networks grow larger and larger, and the differences between the second training features they extract grow larger and larger, that is, each second feature extraction network is made to extract different features of the sample image.
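The distance constraint can be checked as sketched below. Summarizing the group's spread as the mean pairwise Euclidean distance is an assumption for illustration; the patent permits Euclidean or cosine distance and does not fix how per-pair distances are aggregated.

```python
import math

def mean_pairwise_distance(features):
    """Average Euclidean distance over all pairs of second training features."""
    total, pairs = 0.0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            d = math.sqrt(sum((a - b) ** 2
                              for a, b in zip(features[i], features[j])))
            total += d
            pairs += 1
    return total / pairs if pairs else 0.0

def constraint_holds(features_k, features_k_minus_1):
    """Cycle k's features must be farther apart than cycle k-1's."""
    return mean_pairwise_distance(features_k) > mean_pairwise_distance(features_k_minus_1)
```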
In this way, each second feature extraction network can be made to extract different features of the sample image, reducing the possibility that multiple second feature extraction networks extract identical or similar second training features, reducing the information redundancy between the multiple feature items, improving processing efficiency, and making the features extracted by the multiple second feature extraction networks richer, which is conducive to improving the accuracy of the classification results.
In one possible implementation, the network loss of each training cycle of the i-th training batch can be determined in the manner above, and in each training cycle the network parameters of the first feature extraction network and the i-th group of second feature extraction networks are adjusted according to the network loss. When the training condition of the i-th training batch is satisfied, the training of the i-th training batch is completed, followed by the training of the (i+1)-th training batch; that is, when the i-th group of second feature extraction networks satisfies the training condition, the training of the i-th group of second feature extraction networks is completed, and the training of the (i+1)-th group of second feature extraction networks is then carried out. The training condition of the i-th training batch may include a number of training iterations (that is, a number of training cycles); for example, when the number of training iterations reaches a preset number, the training condition of the i-th training batch is satisfied. Alternatively, the training condition of the i-th training batch may include the magnitude or convergence of the network loss; for example, when the network loss is less than or equal to a loss threshold or converges within a preset interval, the training condition of the i-th training batch is satisfied. The present disclosure places no restriction on the training condition.
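Both stopping rules can be combined as in the sketch below; the preset cycle budget and the loss threshold are illustrative values, and the convergence-interval variant is omitted for brevity.

```python
def batch_done(cycle, loss, max_cycles=100, loss_threshold=0.01):
    """A training batch ends when either the preset number of training
    cycles is reached or the network loss falls to the threshold."""
    return cycle >= max_cycles or loss <= loss_threshold
```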
In one possible implementation, the other groups of second feature extraction networks can be trained by the same method as the i-th group of second feature extraction networks, until the training of every group of second feature extraction networks is completed. In the training of each group of second feature extraction networks, only the network parameters of the second feature extraction networks corresponding to that batch are adjusted; the network parameters of the other second feature extraction networks remain unchanged.
In one possible implementation, the first feature extraction network and the M second feature extraction networks can be trained jointly, where training the N groups of feature extraction networks according to the M second training features includes: training, in batches, the N groups of second feature extraction networks and the first feature extraction network according to the first training feature and the M second training features. In this example, the N groups of second feature extraction networks and the first feature extraction network can be trained batch by batch. For example, the 1st group of second feature extraction networks and the first feature extraction network can be trained in the 1st training batch, the 2nd group of second feature extraction networks and the first feature extraction network can be trained in the 2nd training batch, …, and the N-th group of second feature extraction networks and the first feature extraction network can be trained in the N-th training batch.
In one possible implementation, taking the i-th training batch as an example, the network loss of the i-th training batch of the i-th group of second feature extraction networks and the first feature extraction network can be determined according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image. For example, the network levels of the first feature extraction network after the preset level can continue to process the first training feature h_l, obtaining the output feature h_{l+n} of the first feature extraction network (n is the number of network levels after the preset level of the first feature extraction network). The third network loss of the first feature extraction network and the i-th group of second feature extraction networks can be determined according to h_{l+n}, the second training features fi1, fi2, …, fiE obtained by the i-th group of second feature extraction networks, and the annotation information of the sample image. The network parameters of the first feature extraction network and the i-th group of second feature extraction networks are then adjusted according to the third network loss of the first feature extraction network and the i-th group of second feature extraction networks; for example, each network parameter can be adjusted by gradient descent. When the i-th group of second feature extraction networks and the first feature extraction network satisfy the training condition, the training of the i-th group of second feature extraction networks is completed, and the training of the (i+1)-th group of second feature extraction networks is then carried out. Further, when the network parameters of the first feature extraction network and the i-th group of second feature extraction networks are being adjusted, the network parameters of the other second feature extraction networks remain unchanged.
In one possible implementation, after the training of each group of second feature extraction networks and the first feature extraction network is completed, all the second feature extraction networks and the first feature extraction network can also undergo a joint parameter adjustment. After training the N groups of second feature extraction networks according to the M second training features, the method further includes: determining the third network loss of the first feature extraction network and the M second feature extraction networks according to the first training feature, the M second training features, and the annotation information of the sample image; and adjusting the network parameters of the first feature extraction network and the M second feature extraction networks according to the third network loss of the first feature extraction network and the M second feature extraction networks.
In one possible implementation, the groups of second feature extraction networks need not be repartitioned; the third network loss can be determined jointly according to the outputs of the first feature extraction network and all the second feature extraction networks (that is, the M second feature extraction networks) and the annotation information of the sample image, and the network parameters of the first feature extraction network and all the second feature extraction networks are adjusted according to the third network loss. When the adjustment has been performed a preset number of times, or the network loss is less than a preset threshold or converges within a preset interval, the trained first feature extraction network and M second feature extraction networks are obtained. The adjustment amplitude of this joint adjustment can be smaller than the adjustment amplitude of the network parameters of each group of second feature extraction networks and the first feature extraction network during batch training, that is, an overall fine-tuning of all the feature extraction networks can be performed. This adjustment can make the feature extraction processes of the first feature extraction network and each group of second feature extraction networks better coordinated. The trained first feature extraction network and second feature extraction networks can be used in classification processing of images.
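The smaller adjustment amplitude of the joint fine-tuning stage can be expressed as a scaled-down learning rate, as in this hypothetical sketch; the `scale` factor is an assumption, since the patent only states that the amplitude may be smaller than during batch training.

```python
def finetune_step(params, grads, batch_lr=0.01, scale=0.1):
    """Joint fine-tuning update over all feature extraction networks,
    with a smaller amplitude (batch_lr * scale) than batch training."""
    lr = batch_lr * scale
    return [p - lr * g for p, g in zip(params, grads)]
```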
According to the neural network training method of the embodiments of the present disclosure, multiple second feature extraction networks can each extract different features of the sample image, improving the accuracy of the neural network, and the multiple second feature extraction networks can be trained in batches, reducing the training difficulty and improving the training efficiency of the neural network. Each second feature extraction network can be made to extract different features of the sample image, reducing the possibility that multiple second feature extraction networks extract identical or similar second training features, so that the features extracted by the multiple second feature extraction networks are richer. This is advantageous for obtaining diverse features and improving the accuracy of the classification results when the appearance of the target object is captured unclearly, its posture is complex, its proportion in the image is small, the background is complex, or the target object appears incompletely in the image. Further, compared with training only the first feature extraction network, jointly training the multiple second feature extraction networks with the first feature extraction network can improve the performance of the first feature extraction network; after training, only the first feature extraction network may be used to classify images, improving the flexibility of use of the first feature extraction network and improving its classification accuracy.
In one possible implementation, the trained first feature extraction network and second feature extraction networks can be used to perform classification processing on the image to be processed. In this example, classification processing can be performed without using the second feature extraction networks, using only the first feature extraction network; or without using all the second feature extraction networks, using only the first feature extraction network and some of the second feature extraction networks. The method further includes: performing feature extraction processing on the image to be processed at least through the trained first feature extraction network to obtain the fourth feature of the image to be processed; and obtaining the classification result of the target object in the image to be processed according to the fourth feature.
In one possible implementation, the image to be processed may include one or more target objects, and the target object may be a person, a vehicle, an article, etc. The trained first feature extraction network can perform feature extraction on the image to be processed.
In this example, feature extraction may be performed on the image to be processed only through the first feature extraction network, and the output of the first feature extraction network can be taken as the fourth feature. Feature extraction may also be performed on the image to be processed through the first feature extraction network and the second feature extraction networks. For example, feature extraction may be performed on the image to be processed through some or all of the M second feature extraction networks together with the first feature extraction network, obtaining multiple fourth features. Performing feature extraction processing on the image to be processed at least through the trained first feature extraction network to obtain the fourth feature of the image to be processed includes: performing feature extraction processing on the image to be processed through the trained first feature extraction network and at least one trained second feature extraction network to obtain at least one fourth feature of the image to be processed.
In one possible implementation, feature extraction processing can be performed on the image to be processed only through the first feature extraction network to obtain the fourth feature.
In one possible implementation, the image to be processed can be input into the first feature extraction network; each network level of the first feature extraction network can process the image to be processed, the first training feature of the preset level of the first feature extraction network can be deep-copied, and the copies of the first training feature obtained by copying can be input into the second feature extraction networks. Feature extraction can thus be performed on the image to be processed through the M second feature extraction networks and the first feature extraction network. The first feature of the preset level of the first feature extraction network can be deep-copied to obtain M copies of the first feature, which are input into the M second feature extraction networks respectively. The first feature extraction network and the M second feature extraction networks can each perform feature extraction processing on the first feature or a copy of the first feature, obtaining fourth features. For example, M+1 fourth features can be obtained.
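The deep-copy fan-out producing M+1 fourth features can be sketched as below. The callables standing in for the second networks and for the remaining levels of the first network are placeholders, assumed only so the sketch runs end to end.

```python
import copy

def extract_fourth_features(h_l, second_nets, first_tail):
    """Deep-copy the preset-level feature h_l once per second network,
    run each copy through its network, and let the first network's
    remaining levels process the original, yielding M + 1 fourth features."""
    copies = [copy.deepcopy(h_l) for _ in second_nets]
    fourths = [net(c) for net, c in zip(second_nets, copies)]
    fourths.append(first_tail(h_l))  # the first network's own output
    return fourths

h = [1.0, 2.0]                                       # preset-level feature
nets = [lambda x: [v * 2 for v in x],                # placeholder G1
        lambda x: [v + 1 for v in x]]                # placeholder G2 (M = 2)
tail = lambda x: [v - 1 for v in x]                  # placeholder first-net tail
features = extract_fourth_features(h, nets, tail)    # 3 fourth features
```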
In one possible implementation, feature extraction can be performed on the image to be processed through a part of the M second feature extraction networks together with the first feature extraction network; for example, one or more second feature extraction networks and the first feature extraction network may be used. The first feature extraction network and the one or more second feature extraction networks can each perform feature extraction processing on the first feature or a copy of the first feature, obtaining multiple fourth features.
In one possible implementation, the classification result of the target object in the image to be processed is determined using the fourth feature. For example, the feature similarity (for example, a cosine similarity) between the fourth feature and the feature of a reference image can be determined; if the feature similarity is greater than or equal to a similarity threshold, the image to be processed can be classified into the same class as the reference image. Alternatively, the image to be processed is compared with multiple reference images using the fourth feature; for example, the feature similarity between the fourth feature and the reference feature of each reference image is determined, the target reference feature with the highest feature similarity to the fourth feature is determined, and the image to be processed is classified into the same class as the reference image corresponding to the target reference feature. Or the fourth features of multiple images to be processed are each compared with one or more reference images to classify the multiple images to be processed; for example, the class of each video frame in a segment of video can be determined (for example, video frames that include the target object can be classified into one class, and video frames that do not include the target object into another class).
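The reference-matching classification can be sketched as follows, using cosine similarity and a similarity threshold as in the paragraph above; the threshold value and the "return an index or None" interface are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify_by_reference(fourth, references, threshold=0.5):
    """Compare the fourth feature against each reference feature; return the
    index of the best match at or above the threshold, or None if no
    reference is similar enough."""
    best_i, best_s = None, threshold
    for i, ref in enumerate(references):
        s = cosine_similarity(fourth, ref)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i
```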
Fig. 3 shows an application schematic diagram of the neural network training method according to an embodiment of the present disclosure. As shown in Fig. 3, during the training of the neural network, a sample image including one or more target objects can be input into the first feature extraction network; the multiple network levels of the first feature extraction network perform feature extraction processing in sequence, and the first training feature h_l is obtained at the l-th network level.
In one possible implementation, the first training feature h_l can be deep-copied, and the M obtained copies of the first training feature are input into the M second feature extraction networks G1, G2, …, GM respectively, obtaining the M second training features f1, f2, …, fM. Moreover, the network levels of the first feature extraction network after the l-th network level can continue to process the first training feature, obtaining the output feature h_{l+n} of the first feature extraction network. Further, the second feature extraction networks G1, G2, …, GM and the first feature extraction network are trained in batches using the second training features f1, f2, …, fM and the output feature h_{l+n} of the first feature extraction network.
In one possible implementation, the second feature extraction networks G1, G2, …, GM can be divided into N groups, for example, the first group is G11, G12, …, G1Q, the second group is G21, G22, …, G2W, the i-th group is Gi1, Gi2, …, GiE, and the N-th group is GN1, GN2, …, GNR. Each group of second feature extraction networks can be trained together with the first feature extraction network.
In one possible implementation, taking the i-th training batch as an example, the i-th training batch can train the i-th group of second feature extraction networks; when the i-th group of second feature extraction networks is being trained, the network parameters of the other groups of second feature extraction networks remain unchanged. When the i-th group of second feature extraction networks is being trained, the network loss of the i-th group of second feature extraction networks and the first feature extraction network can be determined according to the output feature h_{l+n} of the first feature extraction network, the second training features fi1, fi2, …, fiE output by the i-th group of second feature extraction networks, and the annotation information of the sample image, and the network parameters of the first feature extraction network and the i-th group of second feature extraction networks are adjusted according to the network loss, for example, in the direction that minimizes the network loss. When the network parameters are being adjusted, a distance constraint can be added, that is, the network parameters can be adjusted in the direction that increases the feature distance between the second training features obtained by the i-th group of second feature extraction networks, so that the parameter differences between the second feature extraction networks grow larger and larger and the differences between the second training features extracted by the second feature extraction networks grow larger and larger; that is, each second feature extraction network is made to extract different features of the sample image, which is conducive to the diversification of the extracted features. When the training condition of the i-th training batch is satisfied, the training of the i-th batch is completed.
In one possible implementation, each group of second feature extraction networks and the first feature extraction network can be trained by the method above. After the training of each group of second feature extraction networks and the first feature extraction network is completed, all the second feature extraction networks and the first feature extraction network undergo a joint parameter adjustment, obtaining the trained first feature extraction network and second feature extraction networks.
In one possible implementation, when classification processing is performed on an image to be processed, the fourth feature of the image to be processed can be extracted at least through the trained first feature extraction network. The fourth features of the image to be processed can also be extracted through the first feature extraction network and one or more second feature extraction networks respectively. Further, the classification result of the target object in the image to be processed can be determined from the fourth feature, for example, according to the feature similarity between the fourth feature and the feature of a reference image.
In one possible implementation, the neural network training method and the image classification method can be used in the classification processing of video frames, for example, to search for a certain pedestrian in a surveillance video: the class of each video frame in the surveillance video can be determined through the first feature extraction network and one or more second feature extraction networks, that is, video frames that include the pedestrian are classified into one class, and video frames that do not include the pedestrian are classified into another class. The present disclosure places no restriction on the application fields of the neural network training method and the image classification method.
It can be understood that the method embodiments mentioned above in the present disclosure can, without violating their principles and logic, be combined with one another to form combined embodiments, which, due to space limitations, are not described again in the present disclosure.
In addition, the present disclosure also provides an image classification device, a feature extraction network training device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 4 shows a block diagram of the image classification device according to an embodiment of the present disclosure. As shown in Fig. 4, the device includes:
a first extraction module 11, configured to input the image to be processed into the first feature extraction network to obtain the first feature of the image to be processed;
a second extraction module 12, configured to input the first feature into the M second feature extraction networks respectively to obtain M second features;
an obtaining module 13, configured to obtain the classification result of the image to be processed according to the M second features.
In one possible implementation, the first feature is the output feature of a preset network level of the first feature extraction network, and the second extraction module is further configured to:
perform feature extraction processing on the first feature through the levels of the first feature extraction network subsequent to the preset network level to obtain a third feature;
obtain the classification result of the image to be processed according to the third feature and the M second features.
Fig. 5 shows a block diagram of the feature extraction network training device according to an embodiment of the present disclosure. As shown in Fig. 5, the device includes:
a third extraction module 21, configured to input the sample image into the first feature extraction network to obtain the first training feature of the sample image;
a fourth extraction module 22, configured to input the first training feature into the M second feature extraction networks respectively to obtain M second training features;
a training module 23, configured to train the M second feature extraction networks according to the M second training features.
In one possible implementation, the training module is further configured to:
train N groups of second feature extraction networks in batches according to the M second training features, where each group of second feature extraction networks includes at least one of the M second feature extraction networks, and M and N are integers with 1 < N ≤ M.
In one possible implementation, the training module is further configured to:
determine the first network loss of the i-th group of second feature extraction networks according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image, where i is an integer and 1 ≤ i ≤ N;
train the i-th group of second feature extraction networks according to the first network loss of the i-th group of second feature extraction networks.
In one possible implementation, the training module is further configured to:
train the N groups of second feature extraction networks and the first feature extraction network in batches according to the first training feature and the M second training features.
In one possible implementation, in the training process of the i-th group of second feature extraction networks, the feature distance between the second training features of the k-th training cycle is greater than the feature distance between the second training features of the (k-1)-th training cycle, where k is an integer greater than 1.
In one possible implementation, the device further includes:
a determining module, configured to determine the third network loss of the first feature extraction network and the M second feature extraction networks according to the first training feature, the M second training features, and the annotation information of the sample image;
an adjusting module, configured to adjust the network parameters of the first feature extraction network and the M second feature extraction networks according to the third network loss of the first feature extraction network and the M second feature extraction networks.
In one possible implementation, the device further includes:
a fifth extraction module, configured to perform feature extraction processing on the image to be processed at least through the trained first feature extraction network to obtain the fourth feature of the image to be processed;
a classification module, configured to obtain the classification result of the target object in the image to be processed according to the fourth feature.
In one possible implementation, the fifth extraction module is further configured to:
perform feature extraction processing on the image to be processed through the trained first feature extraction network and at least one trained second feature extraction network to obtain at least one fourth feature of the image to be processed.
In some embodiments, the functions or modules of the device provided by the embodiments of the present disclosure can be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the descriptions of the method embodiments above, which, for brevity, are not repeated here.
The embodiments of the present disclosure also propose a computer-readable storage medium on which computer program instructions are stored, the computer program instructions implementing the above method when executed by a processor. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to perform the above method.
The electronic device may be provided as a terminal, a server, or a device in another form.
Fig. 6 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to Fig. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the electronic device 800. Examples of such data include instructions for any application or method operated on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 supplies power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect an open/closed status of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
Fig. 7 is a block diagram of an electronic device 1900 according to an exemplary embodiment. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as an application program. The application program stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is further provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the scenario involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagram. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagram.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowchart and/or block diagram.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing descriptions are exemplary rather than exhaustive, and are not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. An image classification method, characterized by comprising:
inputting an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed;
inputting the first feature into M second feature extraction networks respectively to obtain M second features; and
obtaining a classification result of the image to be processed according to the M second features.
2. The method according to claim 1, characterized in that the first feature is a feature output by a preset network level of the first feature extraction network, and obtaining the classification result of the image to be processed according to the M second features comprises:
performing feature extraction processing on the first feature through levels of the first feature extraction network subsequent to the preset network level, to obtain a third feature; and
obtaining the classification result of the image to be processed according to the third feature and the M second features.
3. A feature extraction network training method, characterized by comprising:
inputting a sample image into a first feature extraction network to obtain a first training feature of the sample image;
inputting the first training feature into M second feature extraction networks respectively to obtain M second training features; and
training the M second feature extraction networks according to the M second training features.
4. The method according to claim 3, characterized in that training the M second feature extraction networks according to the M second training features comprises:
training, in groups, N groups of second feature extraction networks according to the M second training features, where each group of second feature extraction networks includes at least one of the M second feature extraction networks, and M and N are integers with 1 < N ≤ M.
5. The method according to claim 4, characterized in that training, in groups, the N groups of second feature extraction networks according to the M second training features comprises:
determining a first network loss of an i-th group of second feature extraction networks according to the second training features corresponding to the i-th group of second feature extraction networks and the annotation information of the sample image, where i is an integer and 1 ≤ i ≤ N; and
training the i-th group of second feature extraction networks according to the first network loss of the i-th group of second feature extraction networks.
6. The method according to claim 4, characterized in that training, in groups, the N groups of second feature extraction networks according to the M second training features comprises:
training, in groups, the N groups of second feature extraction networks and the first feature extraction network according to the first training feature and the M second training features.
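The grouped training of claims 4 to 6 can be illustrated with a small sketch. Everything below is hypothetical: the partition of the M networks into N groups, the stand-in features, and the per-group loss (mean of the member losses) are all assumptions for illustration, since the claims do not fix these details.

```python
import numpy as np

rng = np.random.default_rng(2)

M, N = 4, 2  # M second feature extraction networks, N groups, with 1 < N <= M

# A hypothetical partition: each group holds at least one of the M networks.
groups = [[0, 1], [2, 3]]
assert len(groups) == N and sorted(i for g in groups for i in g) == list(range(M))

# Stand-in second training features (one per network) and an annotation target.
second_training_features = rng.normal(size=(M, 8))
annotation_target = rng.normal(size=8)  # stand-in for the annotation information

def feature_loss(feat):
    # illustrative squared-error loss against the annotation target
    return float(np.mean((feat - annotation_target) ** 2))

# First network loss of the i-th group: mean of its members' losses (assumed),
# used to train that group of second feature extraction networks.
group_losses = [np.mean([feature_loss(second_training_features[i]) for i in g])
                for g in groups]
print(len(group_losses) == N)
```

Training group by group in this way lets each subset of branches be updated against its own loss, which is one way the branches can be encouraged to extract diverse features from the shared first training feature.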
7. An image classification device, characterized by comprising:
a first extraction module, configured to input an image to be processed into a first feature extraction network to obtain a first feature of the image to be processed;
a second extraction module, configured to input the first feature into M second feature extraction networks respectively to obtain M second features; and
an obtaining module, configured to obtain a classification result of the image to be processed according to the M second features.
8. A feature extraction network training device, characterized by comprising:
a third extraction module, configured to input a sample image into a first feature extraction network to obtain a first training feature of the sample image;
a fourth extraction module, configured to input the first training feature into M second feature extraction networks respectively to obtain M second training features; and
a training module, configured to train the M second feature extraction networks according to the M second training features.
9. An electronic device, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 6.
10. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910702994.4A CN110414611A (en) | 2019-07-31 | 2019-07-31 | Image classification method and device, feature extraction network training method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110414611A true CN110414611A (en) | 2019-11-05 |
Family
ID=68364765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910702994.4A Pending CN110414611A (en) | 2019-07-31 | 2019-07-31 | Image classification method and device, feature extraction network training method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110414611A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106951923A (en) * | 2017-03-21 | 2017-07-14 | 西北工业大学 | A kind of robot three-dimensional shape recognition process based on multi-camera Vision Fusion |
CN108537743A (en) * | 2018-03-13 | 2018-09-14 | 杭州电子科技大学 | A kind of face-image Enhancement Method based on generation confrontation network |
CN108764207A (en) * | 2018-06-07 | 2018-11-06 | 厦门大学 | A kind of facial expression recognizing method based on multitask convolutional neural networks |
CN108898145A (en) * | 2018-06-15 | 2018-11-27 | 西南交通大学 | A kind of image well-marked target detection method of combination deep learning |
EP3451293A1 (en) * | 2017-08-28 | 2019-03-06 | Thomson Licensing | Method and apparatus for filtering with multi-branch deep learning |
CN109558781A (en) * | 2018-08-02 | 2019-04-02 | 北京市商汤科技开发有限公司 | A kind of multi-angle video recognition methods and device, equipment and storage medium |
CN109871883A (en) * | 2019-01-24 | 2019-06-11 | 北京市商汤科技开发有限公司 | Neural network training method and device, electronic equipment and storage medium |
CN109886114A (en) * | 2019-01-18 | 2019-06-14 | 杭州电子科技大学 | A kind of Ship Target Detection method based on cluster translation feature extraction strategy |
CN109982092A (en) * | 2019-04-28 | 2019-07-05 | 华侨大学 | HEVC interframe fast method based on branch intensive loop convolutional neural networks |
Non-Patent Citations (2)
Title |
---|
Hongmin Gao et al., "Multi-branch fusion network for hyperspectral image classification", Knowledge-Based Systems |
高旻健, "Cross-age face verification based on parallel convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology Series |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106339680B (en) | Face key independent positioning method and device | |
CN109816764A (en) | Image generating method and device, electronic equipment and storage medium | |
CN109522910A (en) | Critical point detection method and device, electronic equipment and storage medium | |
CN109800737A (en) | Face recognition method and device, electronic equipment and storage medium | |
CN110348537A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109241835A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109697734A (en) | Position and orientation estimation method and device, electronic equipment and storage medium | |
CN109618184A (en) | Method for processing video frequency and device, electronic equipment and storage medium | |
CN109800744A (en) | Image clustering method and device, electronic equipment and storage medium | |
CN110458218A (en) | Image classification method and device, sorter network training method and device | |
CN109871883A (en) | Neural network training method and device, electronic equipment and storage medium | |
CN110390394A (en) | Criticize processing method and processing device, electronic equipment and the storage medium of normalization data | |
CN110287874A (en) | Target tracking method and device, electronic equipment and storage medium | |
CN109658401A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110532956A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109801270A (en) | Anchor point determines method and device, electronic equipment and storage medium | |
CN110503023A (en) | Biopsy method and device, electronic equipment and storage medium | |
CN110298310A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109087238A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN109934275A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109615593A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109919300A (en) | Neural network training method and device and image processing method and device | |
CN109829863A (en) | Image processing method and device, electronic equipment and storage medium | |
CN110060215A (en) | Image processing method and device, electronic equipment and storage medium | |
CN108985176A (en) | image generating method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191105 |