CN108229298A - Neural network training and face recognition method and apparatus, device, storage medium - Google Patents
Neural network training and face recognition method and apparatus, device, storage medium
- Publication number: CN108229298A
- Application number: CN201710929741.1A
- Authority: CN (China)
- Prior art keywords: neural network, value, angle, training, loss function
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present invention disclose a neural network training method, a face recognition method, and corresponding apparatus, device, and storage medium. The training method includes: processing a sample image with a neural network to obtain a classification result for the sample image, where the sample image is annotated with a class label; performing optimization processing on the classification result, the optimization processing including normalization and/or angle processing; computing a loss function value based on the optimized classification result and the known class label; and training the neural network based on the loss function value. The embodiments achieve fast loss computation, allow the loss function value to fall to a reasonable level without running into slow convergence, and enable the neural network trained with this loss function value to extract more compact features.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a neural network training method, a face recognition method, and a corresponding apparatus, electronic device, and computer storage medium.
Background
In recent years, with the development of deep neural networks, deep neural networks have been widely applied to all kinds of recognition tasks. For general object, scene, or action recognition tasks, the classes of the test samples are all covered by the training set; therefore, the features of different classes extracted by the neural network only need to be separable in the feature space. Face recognition, however, has a particularity: it is objectively impossible to collect face pictures of everyone in the world for network training, so in practice a large number of test samples belong to classes absent from the training set, and face recognition remains one of the most difficult recognition tasks in computer vision.
Summary of the invention
The embodiments of the present invention provide a neural network training technique and a face recognition technique.
A neural network training method provided by an embodiment of the present invention includes:
processing a sample image with a neural network to obtain a classification result for the sample image, where the sample image is annotated with a class label and the neural network includes at least one convolutional layer and two fully connected layers;
performing optimization processing on the classification result, and computing a loss function value based on the optimized classification result and the known class label, where the optimization processing includes normalization and/or angle processing; and
training the neural network based on the loss function value.
In another embodiment of the above method, the neural network includes at least one convolutional layer and at least two fully connected layers;
processing the sample image with the neural network to obtain the classification result includes:
inputting the sample image into the neural network, performing convolution computation on the sample image through each convolutional layer, outputting a feature vector through the first fully connected layer of the neural network, and outputting the classification result through the last fully connected layer of the neural network.
In another embodiment of the above method, the method further includes:
obtaining the weight vector of the last fully connected layer based on the weight values in the last fully connected layer; and
performing a dot-product operation on the feature vector and the weight vector, and outputting the dot product of the feature vector and the weight vector.
In another embodiment of the above method, the optimization processing includes normalization;
performing optimization processing on the obtained classification result includes:
normalizing the modulus of the feature vector to a feature constant and normalizing the modulus of the weight vector to a weight constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the angle, where the angle is the angle between the feature vector and the weight vector.
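As a small sketch of this normalization step, with assumed values for the feature constant and weight constant: after rescaling the feature vector's modulus to s_f and the weight vector's modulus to s_w, the dot product of the rescaled vectors equals s_f · s_w · cos θ.

```python
import numpy as np

f = np.array([3.0, 4.0])           # feature vector, modulus 5
w = np.array([0.6, 0.8])           # weight vector, modulus 1

s_f, s_w = 4.0, 1.0                # feature constant and weight constant (assumed values)

f_hat = f / np.linalg.norm(f) * s_f   # modulus normalized to the feature constant
w_hat = w / np.linalg.norm(w) * s_w   # modulus normalized to the weight constant

cos_theta = np.dot(f, w) / (np.linalg.norm(f) * np.linalg.norm(w))
dot = np.dot(f_hat, w_hat)
print(dot, s_f * s_w * cos_theta)  # equal (both 4.0 up to rounding)
```

After normalization, the dot product no longer depends on the original moduli, only on the angle between the two vectors and the two chosen constants.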
In another embodiment of the above method, the optimization processing includes angle processing;
performing optimization processing on the obtained classification result includes:
multiplying the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the modulus of the feature vector, the modulus of the weight vector, and the cosine of the new angle.
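A minimal sketch of the angle processing, with an assumed angle allowance m = 2: multiplying the angle by m before taking the cosine shrinks the score unless the feature already aligns closely with the weight vector.

```python
import numpy as np

f = np.array([1.0, 1.0])                  # feature vector
w = np.array([1.0, 0.0])                  # weight vector
m = 2.0                                   # angle allowance (a constant, assumed value)

cos_theta = np.dot(f, w) / (np.linalg.norm(f) * np.linalg.norm(w))
theta = np.arccos(cos_theta)              # 45 degrees here
new_dot = np.linalg.norm(f) * np.linalg.norm(w) * np.cos(m * theta)
print(theta, new_dot)                     # cos(90 degrees) = 0, so the score collapses to ~0
```

The plain dot product here is 1.0, but with the angle doubled the score drops to about 0, which is exactly the stricter judgment this embodiment relies on.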
In another embodiment of the above method, the optimization processing includes both normalization and angle processing;
performing optimization processing on the obtained classification result includes:
normalizing the modulus of the feature vector to a feature constant, normalizing the modulus of the weight vector to a weight constant, and multiplying the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the new angle.
In another embodiment of the above method, computing the loss function value based on the optimized classification result and the known class label includes:
computing the loss function value through the loss function formula, based on the known class label and the product of the feature constant, the weight constant, and the cosine of the new angle.
In another embodiment of the above method, the product of the feature constant and the weight constant is a constant greater than 1.
In another embodiment of the above method, the feature constant and the weight constant are both 1.
In another embodiment of the above method, the angle allowance is a constant greater than 1.
In another embodiment of the above method, training the neural network based on the loss function value includes:
in response to the convergence rate of the loss function being greater than or equal to a preset value, adjusting the parameters of the neural network through a backward gradient algorithm according to the obtained loss function value; and, while the convergence rate of the loss function corresponding to the neural network remains greater than or equal to the preset value, returning to the operation of processing the sample image with the neural network whose parameters have been adjusted.
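The control flow of this embodiment — keep applying backward-gradient updates while the loss still improves by at least a preset amount — can be sketched with a toy quadratic loss standing in for the network's loss; the learning rate and preset threshold are assumed values.

```python
import numpy as np

def loss_and_grad(w):
    # toy stand-in for the network's loss function and its gradient
    return float(np.sum(w ** 2)), 2 * w

w = np.array([4.0, -3.0])
lr, preset = 0.1, 1e-6            # learning rate and convergence-rate threshold (assumed)
loss, grad = loss_and_grad(w)
steps = 0
while True:
    w = w - lr * grad             # backward-gradient parameter adjustment
    new_loss, grad = loss_and_grad(w)
    steps += 1
    if loss - new_loss < preset:  # convergence rate fell below the preset value: stop
        break
    loss = new_loss
print(steps, new_loss)
```

Training continues only while each round still lowers the loss by at least the preset value, which is the stopping rule this embodiment describes.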
In another embodiment of the above method, training the neural network based on the loss function value includes:
in response to the number of times the loss function value has been computed being less than a preset value, adjusting the parameters of the neural network through the backward gradient algorithm according to the obtained loss function value, and incrementing the count of loss function computations by 1; and
while the count of loss function computations remains less than the preset value, returning to the operation of processing the sample image with the neural network whose parameters have been adjusted.
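The alternative stopping rule of this embodiment — a preset count of loss-function computations — can be sketched the same way, again with a toy loss and an assumed preset count of 50.

```python
import numpy as np

def loss_and_grad(w):
    # toy stand-in for the network's loss function and its gradient
    return float(np.sum(w ** 2)), 2 * w

w = np.array([4.0, -3.0])
lr, max_count = 0.1, 50                   # preset number of loss computations (assumed)
count = 0
while count < max_count:                  # count of loss-function computations
    loss, grad = loss_and_grad(w)
    w = w - lr * grad                     # backward-gradient parameter adjustment
    count += 1                            # increment the computation count by 1
print(count, loss)
```

Here the loop is bounded by the computation count rather than by the convergence rate, matching the second training variant.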
Another aspect of the embodiments of the present invention further provides a face recognition method, including:
processing an image with a neural network, and outputting the features and/or recognition result of the face in the image, where the neural network is trained with the method described above.
Another aspect of the embodiments of the present invention further provides a neural network training apparatus, including:
a classification unit, configured to process a sample image with a neural network to obtain a classification result for the sample image, where the sample image is annotated with a class label;
an optimization unit, configured to perform optimization processing on the classification result, where the optimization processing includes normalization and/or angle processing;
a loss computation unit, configured to compute a loss function value based on the optimized classification result and the known class label; and
a training unit, configured to train the neural network based on the loss function value.
In another embodiment of the above apparatus, the neural network includes at least one convolutional layer and at least two fully connected layers;
the classification unit is specifically configured to input the sample image into the neural network, perform convolution computation on the sample image through each convolutional layer, output a feature vector through the first fully connected layer of the neural network, and output the classification result through the last fully connected layer of the neural network.
In another embodiment of the above apparatus, the apparatus further includes:
a weight acquisition unit, configured to obtain the weight vector of the last fully connected layer based on the weight values in the last fully connected layer; and
a dot-product computation unit, configured to perform a dot-product operation on the feature vector and the weight vector, and output the dot product of the feature vector and the weight vector.
In another embodiment of the above apparatus, the optimization processing includes normalization;
the optimization unit is specifically configured to normalize the modulus of the feature vector to a feature constant and normalize the modulus of the weight vector to a weight constant; and
the dot-product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the angle, where the angle is the angle between the feature vector and the weight vector.
In another embodiment of the above apparatus, the optimization processing includes angle processing;
the optimization unit is specifically configured to multiply the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
the dot-product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the modulus of the feature vector, the modulus of the weight vector, and the cosine of the new angle.
In another embodiment of the above apparatus, the optimization processing includes both normalization and angle processing;
the optimization unit is specifically configured to normalize the modulus of the feature vector to a feature constant, normalize the modulus of the weight vector to a weight constant, and multiply the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
the dot-product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the new angle.
In another embodiment of the above apparatus, the loss computation unit is specifically configured to compute the loss function value through the loss function formula, based on the known class label and the product of the feature constant, the weight constant, and the cosine of the new angle.
In another embodiment of the above apparatus, the product of the feature constant and the weight constant is a constant greater than 1.
In another embodiment of the above apparatus, the feature constant and the weight constant are both 1.
In another embodiment of the above apparatus, the angle allowance is a constant greater than 1.
In another embodiment of the above apparatus, the training unit is specifically configured to: in response to the convergence rate of the loss function being greater than or equal to a preset value, adjust the parameters of the neural network through the backward gradient algorithm according to the obtained loss function value; and, while the convergence rate of the loss function corresponding to the neural network remains greater than or equal to the preset value, return to the operation of processing the sample image with the neural network whose parameters have been adjusted.
In another embodiment of the above apparatus, the training unit is specifically configured to: in response to the number of times the loss function value has been computed being less than a preset value, adjust the parameters of the neural network through the backward gradient algorithm according to the obtained loss function value, and increment the count of loss function computations by 1; and, while the count of loss function computations remains less than the preset value, return to the operation of processing the sample image with the neural network whose parameters have been adjusted.
According to one aspect of the embodiments of the present invention, a face recognition apparatus is provided, including:
a processing unit, configured to process an image with a neural network, where the neural network is trained with any of the neural network training methods described above; and
a result output unit, configured to output the features and/or recognition result of the face in the image.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including a processor, where the processor includes the neural network training apparatus or the face recognition apparatus described above.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including: a memory for storing executable instructions; and a processor for communicating with the memory to execute the executable instructions so as to complete the operations of the neural network training method or the face recognition method described above.
According to one aspect of the embodiments of the present invention, a computer storage medium is provided, for storing computer-readable instructions, where the instructions, when executed, perform the operations of the neural network training method or the face recognition method described above.
Based on the neural network training method, apparatus, electronic device, and computer storage medium provided by the above embodiments of the present invention, a sample image is processed with a neural network to obtain a classification result for the sample image, optimization processing is performed on the classification result, and a loss function value is computed based on the optimized classification result and the known class label. Because optimization processing is performed, on one hand the influence of the raw classification result on the loss function value is reduced, achieving fast loss computation; on the other hand the loss function value is able to fall to a reasonable level, avoiding the problem of slow convergence; and the neural network trained with this loss function value can extract more compact features.
The technical solution of the present invention is described in further detail below through the drawings and embodiments.
Description of the drawings
The drawings, which constitute a part of the specification, describe embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
The present invention can be understood more clearly from the following detailed description with reference to the drawings, in which:
Fig. 1 is a flow chart of one embodiment of the neural network training method of the present invention.
Fig. 2 is a structural diagram of one embodiment of the neural network training apparatus of the present invention.
Fig. 3 is a structural diagram of an electronic device suitable for implementing the terminal device or server of the embodiments of the present application.
Specific embodiment
Various exemplary embodiments of the present invention will now be described in detail with reference to the drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the invention.
At the same time, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative, and is in no way intended as a limitation of the present invention or its application or use.
Techniques, methods, and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be considered part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present invention can be applied to a computer system/server, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations suitable for use with a computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and the like.
A computer system/server can be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules can include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types. The computer system/server can be implemented in a distributed cloud computing environment, in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules can be located in local or remote computing system storage media including storage devices.
For face recognition, where not all test samples can be collected, it is desirable that the features the network extracts for the training set are not only separable in the feature space but also as compact as possible, occupying as little of the feature space as possible, so as to leave room for test samples outside the training set and avoid misjudgments caused by overlapping features.
Therefore, the key to face recognition algorithms based on deep neural networks is to learn more compact face features: making the features the network extracts for different pictures of the same person as close as possible in the feature space, i.e., small intra-class variance; and making the features of different people as far apart as possible in the feature space, i.e., large inter-class variance. Whether for classification-based algorithms or metric-learning methods that train features directly, the essential purpose is always to obtain more compact features.
Fig. 1 is a flow chart of one embodiment of the neural network training method of the present invention. As shown in Fig. 1, the method of this embodiment includes:
Step 101: processing a sample image with a neural network to obtain a classification result for the sample image, where the sample image is annotated with a class label.
Step 102: performing optimization processing on the classification result, and computing a loss function value based on the optimized classification result and the known class label, where the optimization processing includes normalization and/or angle processing.
To obtain more compact features, the classification result needs to be optimized: a loss function value more conducive to training can be obtained from the optimized classification result, and the neural network trained with this loss function value can produce more compact face features.
Step 103: training the neural network based on the loss function value.
With the neural network training method provided by the above embodiment of the present invention, a sample image is processed with a neural network to obtain a classification result for the sample image, optimization processing is performed on the classification result, and a loss function value is computed based on the optimized classification result and the known class label. Because optimization processing is performed, on one hand the influence of the raw classification result on the loss function value is reduced, achieving fast loss computation; on the other hand the loss function value is able to fall to a reasonable level, avoiding slow convergence; and the neural network trained with this loss function value can extract more compact features.
In a specific example of the above training method embodiments, the neural network includes at least one convolutional layer and two fully connected layers, and operation 101 includes:
inputting the sample image into the neural network, performing convolution computation on the sample image through each convolutional layer, outputting a feature vector through the first fully connected layer of the neural network, and outputting the classification result through the last fully connected layer of the neural network.
In this embodiment, the neural network is a neural network for face recognition, generally including at least one convolutional layer and at least two fully connected layers. Convolution computation is performed by the convolutional layers, each convolutional layer extracting face features at a different level; the first fully connected layer outputs a feature vector for the whole face, and the last fully connected layer produces the classification result for the face image.
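The forward pass this example describes can be sketched in a few lines of NumPy: one convolutional layer extracts a feature map, the first fully connected layer produces the face feature vector, and the last fully connected layer produces per-class scores. The layer sizes (8×8 input, 16-dimensional feature, 10 classes) and random weights are illustrative assumptions, not the patent's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    # valid cross-correlation of a single-channel image with one kernel
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
fmap = np.maximum(conv2d(image, kernel), 0.0)   # convolutional layer + ReLU

W1 = rng.standard_normal((16, fmap.size))       # first fully connected layer
feature = W1 @ fmap.ravel()                     # 16-d face feature vector

W2 = rng.standard_normal((10, 16))              # last fully connected layer
logits = W2 @ feature                           # one score per identity class
pred = int(np.argmax(logits))
print(feature.shape, logits.shape, pred)
```

The feature vector comes out of the first fully connected layer and the classification result out of the last one, mirroring the two outputs used in the rest of the method.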
In a specific example of the above training method embodiments, the method further includes:
obtaining the weight vector of the last fully connected layer based on the weight values in the last fully connected layer; and
performing a dot-product operation on the feature vector and the weight vector, and outputting the dot product of the feature vector and the weight vector.
In this embodiment, by obtaining the weight values in the last fully connected layer, the weight vector corresponding to that fully connected layer is obtained; this weight vector is used for the subsequent loss computation. When computing the loss function value, the dot product of the feature vector and the weight vector is needed; in practical applications, an accordingly modified last fully connected layer can directly output the dot product of the feature vector and the weight vector.
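A small sketch of the point made here: each row of the last fully connected layer's weight matrix is a per-class weight vector, so the layer's output is exactly the dot products of the feature vector with those weight vectors (sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
feature = rng.standard_normal(16)          # output of the first fully connected layer
W_last = rng.standard_normal((10, 16))     # rows are per-class weight vectors

logits = W_last @ feature                  # the layer's output in one matrix product
manual = np.array([np.dot(w, feature) for w in W_last])  # per-class dot products
print(np.allclose(logits, manual))         # True: the layer already outputs the dot products
```

This is why the last fully connected layer can "directly output" the dot products needed for the loss: no extra computation is required beyond its normal forward pass.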
In another embodiment of the neural network training method of the present invention, on the basis of the above embodiments, the optimization processing includes normalization;
performing optimization processing on the obtained classification result includes:
normalizing the modulus of the feature vector to a feature constant and normalizing the modulus of the weight vector to a weight constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the angle, where the angle is the angle between the feature vector and the weight vector.
In this embodiment, the modulus of the feature vector and the modulus of the weight vector can each take an arbitrary value. To reduce the influence of these moduli on the computation of the loss function value, the modulus of the feature vector is normalized to a feature constant, whose value can be 1 or greater than 1, and the modulus of the weight vector is normalized to a weight constant, whose value can be 1 or less than 1; when the feature constant and the weight constant are not both 1, it must be ensured that their product is greater than 1.
The dot product of the feature vector and the weight vector is computed as the product of the moduli and the cosine of the angle, as shown in formula (1):

w_j · f_i = ||w_j|| ||f_i|| cos(θ_{i,j})    (1)

where w_j is the weight vector, f_i is the feature vector the network extracts for the i-th picture, θ_{i,j} is the angle between the weight vector and the feature vector, and w_j · f_i is the dot product of the weight vector and the feature vector.
The loss function formula of a common classification network is shown in formula (2):

L_softmax = -(1/N) Σ_i log( exp(w_{y_i} · f_i + b_{y_i}) / Σ_j exp(w_j · f_i + b_j) )    (2)

where L_softmax is the loss function value, and b_{y_i} and b_j are offset terms whose values are not explored in the present invention and can be set to 0; θ_{i,j} is the angle between the weight vector and the feature vector, and w_j · f_i is their dot product. Substituting formula (1) into formula (2), with the offsets set to 0, yields formula (3):

L = -(1/N) Σ_i log( exp(||w_{y_i}|| ||f_i|| cos(θ_{i,y_i})) / Σ_j exp(||w_j|| ||f_i|| cos(θ_{i,j})) )    (3)
Formula (3) is the loss function formula commonly used in the prior art. When the modulus of the weight vector and the modulus of the feature vector therein are normalized to a weight constant s_w and a feature constant s_f respectively, the loss function formula is transformed into formula (4):

L = -(1/N) Σ_i log( exp(s_w s_f cos(θ_{i,y_i})) / Σ_j exp(s_w s_f cos(θ_{i,j})) )    (4)

At this point, the loss function value depends only on the angle (or cosine similarity) between the feature vector and the weight vector. With formula (4), the loss function value is computed faster, but another problem can arise: when the weight constant and the feature constant are both 1, a network optimization problem occurs, because the normalization gives the loss function value a very high lower bound below which it cannot descend, so a well-performing neural network cannot be obtained. Therefore, this embodiment proposes that the product of the weight constant and the feature constant be a constant greater than 1, for example with the feature constant greater than 1 and the weight constant less than 1. This improved normalization operation solves the optimization problem, and the loss function value can fall to a reasonable level.
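The lower-bound problem described above can be checked numerically. With cosine values bounded in [-1, 1], formula (4) with s = s_w · s_f = 1 cannot fall below a high floor even in the best possible case (true-class cosine 1, all other cosines -1), while s > 1 lets the loss approach 0. The class count of 100 and the value s = 16 are illustrative assumptions.

```python
import numpy as np

def norm_softmax_loss(s, n_classes=100):
    # best case for formula (4): cos = 1 for the true class, cos = -1 elsewhere
    logits = np.full(n_classes, -s)
    logits[0] = s                         # class 0 plays the true class
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-np.log(p[0]))

print(round(norm_softmax_loss(1.0), 4))   # s = 1: loss stuck at a high lower bound (~2.67)
print(round(norm_softmax_loss(16.0), 8))  # s > 1: loss can fall essentially to 0
```

Even a perfectly separated sample cannot push the s = 1 loss below roughly 2.67 with 100 classes, which is exactly the optimization problem the improved normalization avoids.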
In another embodiment of the neural network training method of the present invention, on the basis of the above embodiments, the optimization processing includes angle processing;
performing optimization processing on the obtained classification result includes:
multiplying the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the modulus of the feature vector, the modulus of the weight vector, and the cosine of the new angle.
In this embodiment, by multiplying the angle by the angle allowance, the judgment based on the cosine value becomes stricter, which increases the difficulty of training and makes feature vectors of the same class more compact in the feature space.
In still another embodiment of the neural network training method of the present invention, on the basis of the above embodiments, the optimization processing includes both normalization and angle processing;
performing optimization processing on the obtained classification result includes:
normalizing the modulus of the feature vector to a feature constant, normalizing the modulus of the weight vector to a weight constant, and multiplying the angle by an angle allowance to obtain a new angle, where the angle is the angle between the feature vector and the weight vector and the angle allowance is a constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the new angle.
In the present embodiment, the loss function is optimized by combining normalization with the introduction of an angle margin. As the above embodiments show, formula (4), obtained through normalization, makes the loss function value depend only on the angle, or cosine similarity, between the feature vector and the weight vector. At this point the classifier judges the case cos θ₁ > cos θ₂ as class 1, and otherwise as class 2; cos θ₁ > cos θ₂ reflects the cosine similarity between the feature and the weight vector of each class, so a sample is assigned to whichever class it most resembles. The angle margin m is then introduced into the classification loss function, giving the loss function shown in formula (5):
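Formula (5) itself does not survive in this text. A reconstruction consistent with the surrounding description, namely a softmax cross-entropy loss over N samples in which s denotes the product of the feature constant and the weight constant and the labeled-class angle is multiplied by the angle margin m, would be (an assumed form, not the patent's verbatim formula):

```latex
L = -\frac{1}{N}\sum_{i=1}^{N}\log
    \frac{e^{\,s\cos(m\theta_{y_i,i})}}
         {e^{\,s\cos(m\theta_{y_i,i})} + \sum_{j\neq y_i} e^{\,s\cos\theta_{j,i}}}
```

Here θ_{j,i} is the angle between the feature vector of sample i and the weight vector of class j, and y_i is the labeled class of sample i.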
Now cos(mθ₁) > cos θ₂ must be satisfied before a sample can be judged as class 1, which increases the difficulty of training, makes feature vectors of the same class more compact in the feature space, and brings their angles closer to the direction of the weight vector corresponding to that class. Moreover, when the product of the weight constant and the feature constant is set to a constant greater than 1, the best training effect is reached, resolving both the optimization problem and the feature compactness problem.
In a specific example of the above embodiments of the training method of the neural network of the present invention, operation 102 includes:
computing the loss function value through the loss function formula, based on the product of the feature constant, the weight constant, and the cosine of the new angle, together with the known class label.
In the present embodiment, the loss function value may be computed by substituting directly into formula (5), or by computing with a version of formula (5) into which the feature constant and the weight constant have been incorporated.
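As an illustrative sketch (hypothetical helper names; a toy stand-in for substituting into a formula (5)-style loss, with the feature constant and the weight constant incorporated as a single scale s), the loss value for one sample could be computed as:

```python
import math

def margin_softmax_loss(cos_thetas, label, s=2.0, m=2.0):
    """Cross-entropy over scaled cosine logits, with the angle of the
    labeled class multiplied by the angle margin m.

    cos_thetas: cosine of the angle between the feature vector and each
    class weight vector; s: product of the feature constant and the
    weight constant (greater than 1)."""
    logits = [s * c for c in cos_thetas]
    theta_y = math.acos(max(-1.0, min(1.0, cos_thetas[label])))
    logits[label] = s * math.cos(m * theta_y)  # stricter target logit
    max_l = max(logits)                        # stabilize the softmax
    denom = sum(math.exp(l - max_l) for l in logits)
    return math.log(denom) - (logits[label] - max_l)

# Two classes; the sample is closer to class 0 (cosine 0.9 vs 0.3).
loss = margin_softmax_loss([0.9, 0.3], label=0)
```

The margin shrinks the target logit, so this loss is larger than the plain scaled-softmax loss for the same sample, which is precisely the added training difficulty described above.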
In a specific example of the above embodiments of the training method of the neural network of the present invention, the angle margin is a constant greater than 1.
In the present embodiment, this is determined by the properties of the cosine computation: for satisfying cos(mθ₁) > cos θ₂ to be harder than satisfying cos θ₁ > cos θ₂, the angle margin must satisfy m > 1; the angle margin in the present embodiment is therefore limited to a constant greater than 1.
In a specific example of the above embodiments of the training method of the neural network of the present invention, operation 103 includes:
in response to the convergence rate of the loss function being greater than or equal to a preset value, adjusting the parameters in the neural network through a reverse gradient algorithm according to the obtained loss function value; and
in response to the convergence rate of the loss function corresponding to the neural network being greater than or equal to the preset value, returning to perform the operation of processing the sample image through the neural network with the adjusted parameters.
The present embodiment judges the training progress of the neural network by the convergence rate of the loss function; when the convergence rate of the loss function corresponding to the neural network falls below the preset value, the trained neural network is obtained.
In a specific example of the above embodiments of the training method of the neural network of the present invention, operation 103 includes:
in response to the number of times the loss function value has been computed being less than a preset value, adjusting the parameters in the neural network through the reverse gradient algorithm according to the obtained loss function value, and adding 1 to the count of loss function computations; and
in response to the count of loss function computations being less than the preset value, returning to perform the operation of processing the sample image through the neural network with the adjusted parameters.
The present embodiment judges the training progress of the neural network by the number of loss function computations; when the count reaches the preset value, the trained neural network is obtained.
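The two stopping criteria described above (the convergence rate falling below a preset value, or the loss-computation count reaching a preset value) can be sketched together in a toy training loop; the scalar parameter and quadratic loss are illustrative stand-ins for a real network:

```python
def train(param, lr=0.1, rate_preset=1e-6, count_preset=1000):
    """Adjust `param` by the reverse gradient until either the loss
    convergence rate drops below rate_preset or the number of loss
    computations reaches count_preset."""
    def loss(w):                # toy loss standing in for the real one
        return (w - 3.0) ** 2
    def grad(w):
        return 2.0 * (w - 3.0)

    prev, count = loss(param), 1
    while count < count_preset:
        param -= lr * grad(param)           # reverse gradient update
        cur = loss(param)
        count += 1                          # one more loss computation
        if abs(prev - cur) < rate_preset:   # convergence rate below preset
            break
        prev = cur
    return param, count

w, n = train(0.0)
```

Here convergence triggers first, long before the count limit; raising `rate_preset` or lowering `count_preset` swaps which criterion ends training.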
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and the program, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Fig. 2 is a structural diagram of an embodiment of the training device of the neural network of the present invention. The device of this embodiment can be used to implement the above method embodiments of the present invention. As shown in Fig. 2, the device of this embodiment includes:
a classification unit 21, configured to process a sample image through a neural network to obtain a classification result of the sample image, the sample image being labeled with a class label; and
an optimization unit 22, configured to perform optimization processing on the classification result, the optimization processing including normalization processing and/or angle processing.
To obtain more compact features, the classification result needs to undergo optimization processing: based on the classification result after optimization processing, a loss function value more conducive to training can be obtained, and the neural network trained on that loss function value can extract more compact face features.
The device further includes:
a loss computation unit 23, configured to compute the loss function value based on the classification result after optimization processing and the known class label; and
a training unit 24, configured to train the neural network based on the loss function value.
With the training device for a neural network provided by the above embodiment of the present invention, a sample image is processed through the neural network to obtain a classification result of the sample image, optimization processing is performed on the classification result, and the loss function value is computed based on the classification result after optimization processing and the known class label. Because optimization processing is performed, on the one hand the influence of the classification result on the loss function value is reduced and fast loss computation is achieved; on the other hand the loss function value can drop to a reasonable level, avoiding the problem of slow loss convergence. The neural network trained with this loss function value can extract features faster.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the neural network includes at least one convolutional layer and at least two fully connected layers; and
the classification unit is specifically configured to input the sample image into the neural network, perform convolution calculations on the sample image through each convolutional layer, output a feature vector through the first fully connected layer of the neural network, and output the classification result through the last fully connected layer of the neural network.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the device further includes:
a weight acquisition unit, configured to obtain the weight vector of the last fully connected layer based on the weight values in the last fully connected layer; and
a dot product computation unit, configured to perform a dot product operation on the feature vector and the weight vector and output the dot product of the feature vector and the weight vector.
In another embodiment of the training device of the neural network of the present invention, on the basis of the above embodiments, the optimization processing includes normalization processing;
the optimization unit is specifically configured to normalize the modulus of the feature vector to a feature constant and normalize the modulus of the weight vector to a weight constant; and
the dot product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the angle, the angle being the angle between the feature vector and the weight vector.
In the present embodiment, the modulus of the feature vector and the modulus of the weight vector may each take an arbitrary value. To reduce their influence on the computation of the loss function value, the modulus of the feature vector is normalized to a feature constant, whose value may be 1 or greater than 1, and the modulus of the weight vector is normalized to a weight constant, whose value may be 1 or less than 1; when the feature constant and the weight constant do not both take the value 1, it must be ensured that the product of the feature constant and the weight constant is greater than 1.
In another embodiment of the training device of the neural network of the present invention, on the basis of the above embodiments, the optimization processing includes angle processing;
the optimization unit is specifically configured to multiply the angle by an angle margin to obtain a new angle, the angle being the angle between the feature vector and the weight vector, and the angle margin being a constant; and
the dot product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the modulus of the feature vector, the modulus of the weight vector, and the cosine of the new angle.
In the present embodiment, multiplying the angle by the angle margin makes the decision based on the cosine value stricter, which increases the difficulty of training and makes feature vectors of the same class more compact in the feature space.
In still another embodiment of the training device of the neural network of the present invention, on the basis of the above embodiments, the optimization processing includes normalization processing and angle processing;
the optimization unit is specifically configured to normalize the modulus of the feature vector to a feature constant, normalize the modulus of the weight vector to a weight constant, and multiply the angle by an angle margin to obtain a new angle, the angle being the angle between the feature vector and the weight vector, and the angle margin being a constant; and
the dot product computation unit is specifically configured to obtain the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the new angle.
In the present embodiment, the loss function is optimized by combining normalization with the introduction of an angle margin. As the above embodiments show, formula (4), obtained through normalization, makes the loss function value depend only on the angle, or cosine similarity, between the feature vector and the weight vector. At this point the classifier judges the case cos θ₁ > cos θ₂ as class 1, and otherwise as class 2; cos θ₁ > cos θ₂ reflects the cosine similarity between the feature and the weight vector of each class, so a sample is assigned to whichever class it most resembles. The angle margin m is then introduced into the classification loss function, giving the loss function shown in formula (5):
Now cos(mθ₁) > cos θ₂ must be satisfied before a sample can be judged as class 1, which increases the difficulty of training, makes feature vectors of the same class more compact in the feature space, and brings their angles closer to the direction of the weight vector corresponding to that class. Moreover, when the product of the weight constant and the feature constant is set to a constant greater than 1, the best training effect is reached, resolving both the optimization problem and the feature compactness problem.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the loss computation unit is specifically configured to compute the loss function value through the loss function formula, based on the product of the feature constant, the weight constant, and the cosine of the new angle, together with the known class label.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the product of the feature constant and the weight constant is a constant greater than 1.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the feature constant and the weight constant are both 1.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the angle margin is a constant greater than 1.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the training unit is specifically configured to, in response to the convergence rate of the loss function being greater than or equal to a preset value, adjust the parameters in the neural network through the reverse gradient algorithm according to the obtained loss function value, and, when the convergence rate of the loss function corresponding to the neural network is greater than or equal to the preset value, return to perform the operation of processing the sample image through the neural network with the adjusted parameters.
In a specific example of the above embodiments of the training device of the neural network of the present invention, the training unit is specifically configured to, in response to the number of times the loss function value has been computed being less than a preset value, adjust the parameters in the neural network through the reverse gradient algorithm according to the obtained loss function value, and add 1 to the count of loss function computations; and, in response to the count of loss function computations being less than the preset value, return to perform the operation of processing the sample image through the neural network with the adjusted parameters.
Another aspect of the embodiments of the present invention further provides a face recognition method, including:
processing an image through a neural network, and outputting the features and/or recognition result of the face in the image; the neural network is trained by the training method of the neural network of any of the above embodiments of the present invention.
When the normalized-feature face algorithm is used in a deep neural network, the framework of a conventional classification network does not need to be modified. It is only necessary to normalize the face feature output by the first fully connected layer of the network to a specified constant, normalize the weight vector in the second fully connected layer to a specified constant, add the angle margin according to the aforementioned formula when the two vectors are dot-multiplied, and finally train the classifier.
At test time, it is only necessary to extract the face feature output by the network and compare it with other given face features, that is, to compute the Euclidean distance or the cosine similarity. Tests show that the face recognition method provided in this embodiment achieves the best results on existing public data sets, demonstrating the validity and superiority of the method.
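The test-time comparison can be sketched as follows (a minimal illustration; the feature values are hypothetical):

```python
import math

def euclidean_distance(f1, f2):
    """Euclidean distance between two face feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def cosine_similarity(f1, f2):
    """Cosine similarity between two face feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

# Compare an extracted face feature against a gallery feature; high
# cosine similarity (or low distance) suggests the same identity.
probe = [0.6, 0.8]
gallery = [0.8, 0.6]
sim = cosine_similarity(probe, gallery)
dist = euclidean_distance(probe, gallery)
```

Because training compacts same-class features in angle, cosine similarity is the natural comparison; for unit-normalized features the two measures are monotonically related.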
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium, and the program, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
An embodiment of the face recognition device of the present invention. The device of this embodiment includes:
a processing unit, configured to process an image through a neural network, the neural network being trained by the training method of the neural network of any of the above embodiments of the present invention; and
a result output unit, configured to output the features and/or recognition result of the face in the image.
When the normalized-feature face algorithm is used in a deep convolutional neural network framework, the framework of a conventional classification network does not need to be modified. It is only necessary to normalize the face feature output by the first fully connected layer of the network to a specified constant, normalize the weight vector in the second fully connected layer to a specified constant, add the angle margin according to the aforementioned formula when the two vectors are dot-multiplied, and finally train the classifier.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including a processor, where the processor includes the training device of the neural network or the face recognition device of any of the above embodiments of the present invention.
According to one aspect of the embodiments of the present invention, an electronic device is provided, including: a memory, configured to store executable instructions; and a processor, configured to communicate with the memory to execute the executable instructions so as to complete the operations of the training method of the neural network or the face recognition method of any of the above embodiments of the present invention.
According to one aspect of the embodiments of the present invention, a computer storage medium is provided, configured to store computer-readable instructions, where the instructions, when executed, perform the operations of the training method of the neural network or the face recognition method of any of the above embodiments of the present invention.
An embodiment of the present invention further provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring now to Fig. 3, which shows a structural diagram of an electronic device 300 suitable for implementing a terminal device or server of an embodiment of the present application: as shown in Fig. 3, the computer system 300 includes one or more processors, a communication unit, and the like. The one or more processors are, for example, one or more central processing units (CPU) 301 and/or one or more graphics processors (GPU) 313, and the processor can perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 302 or executable instructions loaded from a storage section 308 into a random access memory (RAM) 303. The communication unit 312 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (Infiniband) network card. The processor can communicate with the read-only memory 302 and/or the random access memory 303 to execute executable instructions, is connected to the communication unit 312 through a bus 304, and communicates with other target devices through the communication unit 312, so as to complete the operations corresponding to any method provided by the embodiments of the present application, for example: processing a sample image through a neural network to obtain a classification result of the sample image; performing optimization processing on the classification result, and computing a loss function value based on the classification result after optimization processing and the known class label, the optimization processing including normalization processing and/or angle processing; and training the neural network based on the loss function value.
In addition, various programs and data required for the operation of the device can also be stored in the RAM 303. The CPU 301, the ROM 302, and the RAM 303 are connected to each other through the bus 304. When the RAM 303 is present, the ROM 302 is an optional module. The RAM 303 stores executable instructions, or writes executable instructions into the ROM 302 at runtime, and the executable instructions cause the processor 301 to perform the operations corresponding to the above communication method. An input/output (I/O) interface 305 is also connected to the bus 304. The communication unit 312 may be integrated, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked on the bus.
The I/O interface 305 is connected to the following components: an input section 306 including a keyboard, a mouse, and the like; an output section 307 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 308 including a hard disk and the like; and a communication section 309 including a network interface card such as a LAN card or a modem. The communication section 309 performs communication processing via a network such as the Internet. A drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
It should be noted that the architecture shown in Fig. 3 is only an optional implementation; in specific practice, the number and types of the components in Fig. 3 may be selected, deleted, added, or replaced according to actual needs. Different functional components may be provided separately or integrated: for example, the GPU and the CPU may be provided separately, or the GPU may be integrated on the CPU; the communication unit may be provided separately, or may be integrated on the CPU or the GPU; and so on. These interchangeable embodiments all fall within the protection scope disclosed by the present invention.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by the embodiments of the present application, for example: processing a sample image through a neural network to obtain a classification result of the sample image; performing optimization processing on the classification result, and computing a loss function value based on the classification result after optimization processing and the known class label, the optimization processing including normalization processing and/or angle processing; and training the neural network based on the loss function value. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 309, and/or installed from the removable medium 311. When the computer program is executed by the central processing unit (CPU) 301, the above functions defined in the method of the present application are performed.
The methods, apparatuses, and devices of the present invention may be implemented in many ways, for example through software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is merely for illustration, and the steps of the method of the present invention are not limited to the order specifically described above, unless otherwise specified. In addition, in some embodiments, the present invention may also be embodied as programs recorded in a recording medium, these programs including machine-readable instructions for implementing the method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention is provided for the sake of example and description, and is not exhaustive or intended to limit the present invention to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were selected and described in order to better illustrate the principles and practical application of the present invention, and to enable those of ordinary skill in the art to understand the present invention so as to design various embodiments with various modifications suited to particular uses.
Claims (10)
1. A training method of a neural network, characterized by comprising:
processing a sample image through a neural network to obtain a classification result of the sample image, the sample image being labeled with a class label;
performing optimization processing on the classification result, and computing a loss function value based on the classification result after optimization processing and the known class label, the optimization processing including normalization processing and/or angle processing; and
training the neural network based on the loss function value.
2. The method according to claim 1, characterized in that the neural network includes at least one convolutional layer and at least two fully connected layers; and
the processing the sample image through the neural network to obtain the classification result of the sample image includes:
inputting the sample image into the neural network, performing convolution calculations on the sample image through each convolutional layer, outputting a feature vector through the first fully connected layer of the neural network, and outputting the classification result through the last fully connected layer of the neural network.
3. The method according to claim 2, characterized by further comprising:
obtaining the weight vector of the last fully connected layer based on the weight values in the last fully connected layer; and
performing a dot product operation on the feature vector and the weight vector, and outputting the dot product of the feature vector and the weight vector.
4. The method according to claim 3, characterized in that the optimization processing includes normalization processing; and
the performing the optimization processing on the obtained classification result includes:
normalizing the modulus of the feature vector to a feature constant, and normalizing the modulus of the weight vector to a weight constant; and
obtaining the dot product of the feature vector and the weight vector by computing the product of the feature constant, the weight constant, and the cosine of the angle, the angle being the angle between the feature vector and the weight vector.
5. A face recognition method, characterized by comprising:
processing an image through a neural network, and outputting the features and/or recognition result of the face in the image, the neural network being trained by the method according to any one of claims 1 to 4.
6. A training device of a neural network, characterized by comprising:
a classification unit, configured to process a sample image through a neural network to obtain a classification result of the sample image, the sample image being labeled with a class label;
an optimization unit, configured to perform optimization processing on the classification result, the optimization processing including normalization processing and/or angle processing;
a loss computation unit, configured to compute a loss function value based on the classification result after optimization processing and the known class label; and
a training unit, configured to train the neural network based on the loss function value.
7. A face recognition device, characterized by comprising:
a processing unit, configured to process an image through a neural network, the neural network being trained by the method according to any one of claims 1 to 4; and
a result output unit, configured to output the features and/or recognition result of the face in the image.
8. An electronic device, characterized by comprising a processor, the processor including the training device of the neural network according to claim 6 or the face recognition device according to claim 7.
9. An electronic device, characterized by comprising: a memory, configured to store executable instructions; and a processor, configured to communicate with the memory to execute the executable instructions so as to complete the operations of the training method of the neural network according to any one of claims 1 to 4 or the face recognition method according to claim 5.
10. A computer storage medium for storing computer-readable instructions, characterized in that the instructions, when executed, perform the operations of the training method of the neural network according to any one of claims 1 to 4 or the face recognition method according to claim 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710929741.1A CN108229298A (en) | 2017-09-30 | 2017-09-30 | The training of neural network and face identification method and device, equipment, storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710929741.1A CN108229298A (en) | 2017-09-30 | 2017-09-30 | The training of neural network and face identification method and device, equipment, storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108229298A true CN108229298A (en) | 2018-06-29 |
Family
ID=62654533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710929741.1A Pending CN108229298A (en) | 2017-09-30 | 2017-09-30 | The training of neural network and face identification method and device, equipment, storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229298A (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109002562A (en) * | 2018-08-30 | 2018-12-14 | 北京信立方科技发展股份有限公司 | Instrument recognition model training method and device, and instrument recognition method and device |
CN109063742A (en) * | 2018-07-06 | 2018-12-21 | 平安科技(深圳)有限公司 | Butterfly identification network construction method and device, computer equipment and storage medium |
CN109190654A (en) * | 2018-07-09 | 2019-01-11 | 上海斐讯数据通信技术有限公司 | Training method and device for face recognition model |
CN109190120A (en) * | 2018-08-31 | 2019-01-11 | 第四范式(北京)技术有限公司 | Neural network training method and device, and named entity recognition method and device |
CN109214175A (en) * | 2018-07-23 | 2019-01-15 | 中国科学院计算机网络信息中心 | Method, device and storage medium for training classifier based on sample features |
CN109257622A (en) * | 2018-11-01 | 2019-01-22 | 广州市百果园信息技术有限公司 | Audio/video processing method, device, equipment and medium |
CN109543524A (en) * | 2018-10-18 | 2019-03-29 | 同盾控股有限公司 | Image recognition method and device |
CN109685106A (en) * | 2018-11-19 | 2019-04-26 | 深圳博为教育科技有限公司 | Image recognition method, face attendance method, device and system |
CN109711358A (en) * | 2018-12-28 | 2019-05-03 | 四川远鉴科技有限公司 | Neural network training method, face recognition method and system, and storage medium |
CN110349124A (en) * | 2019-06-13 | 2019-10-18 | 平安科技(深圳)有限公司 | Intelligent detection method and device for vehicle appearance damage, and computer-readable storage medium |
CN110490239A (en) * | 2019-08-06 | 2019-11-22 | 腾讯医疗健康(深圳)有限公司 | Training method, quality classification method, device and equipment for image quality control network |
CN110569911A (en) * | 2019-09-11 | 2019-12-13 | 深圳绿米联创科技有限公司 | Image recognition method, device, system, electronic device and storage medium |
CN110880018A (en) * | 2019-10-29 | 2020-03-13 | 北京邮电大学 | Convolutional neural network target classification method based on a novel loss function |
WO2020082595A1 (en) * | 2018-10-26 | 2020-04-30 | 平安科技(深圳)有限公司 | Image classification method, terminal device and non-volatile computer readable storage medium |
CN111144566A (en) * | 2019-12-30 | 2020-05-12 | 深圳云天励飞技术有限公司 | Training method for neural network weight parameters, feature classification method and corresponding device |
CN111340213A (en) * | 2020-02-19 | 2020-06-26 | 浙江大华技术股份有限公司 | Neural network training method, electronic device, and storage medium |
CN111401112A (en) * | 2019-01-03 | 2020-07-10 | 北京京东尚科信息技术有限公司 | Face recognition method and device |
CN111415333A (en) * | 2020-03-05 | 2020-07-14 | 北京深睿博联科技有限责任公司 | Training method and device for breast X-ray image antisymmetric generation analysis model |
CN111507469A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and device for optimizing hyper-parameters of automatic labeling device |
CN111582376A (en) * | 2020-05-09 | 2020-08-25 | 北京字节跳动网络技术有限公司 | Neural network visualization method and device, electronic equipment and medium |
CN111757172A (en) * | 2019-03-29 | 2020-10-09 | Tcl集团股份有限公司 | HDR video acquisition method, HDR video acquisition device and terminal equipment |
CN111832342A (en) * | 2019-04-16 | 2020-10-27 | 阿里巴巴集团控股有限公司 | Neural network, training and using method, device, electronic equipment and medium |
CN112348045A (en) * | 2019-08-09 | 2021-02-09 | 北京地平线机器人技术研发有限公司 | Training method and training device for neural network and electronic equipment |
CN112784953A (en) * | 2019-11-07 | 2021-05-11 | 佳能株式会社 | Training method and device for object recognition model |
CN113688933A (en) * | 2019-01-18 | 2021-11-23 | 北京市商汤科技开发有限公司 | Classification network training method, classification method and device, and electronic device |
WO2021244521A1 (en) * | 2020-06-04 | 2021-12-09 | 广州虎牙科技有限公司 | Object classification model training method and apparatus, electronic device, and storage medium |
CN113989519A (en) * | 2021-12-28 | 2022-01-28 | 中科视语(北京)科技有限公司 | Long-tail target detection method and system |
CN114636995A (en) * | 2022-03-16 | 2022-06-17 | 中国水产科学研究院珠江水产研究所 | Underwater sound signal detection method and system based on deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060034495A1 (en) * | 2004-04-21 | 2006-02-16 | Miller Matthew L | Synergistic face detection and pose estimation with energy-based models |
CN106202329A (en) * | 2016-07-01 | 2016-12-07 | 北京市商汤科技开发有限公司 | Sample data processing and data recognition method and device, computer equipment |
CN106803069A (en) * | 2016-12-29 | 2017-06-06 | 南京邮电大学 | Crowd happiness level recognition method based on deep learning |
CN106934408A (en) * | 2015-12-29 | 2017-07-07 | 北京大唐高鸿数据网络技术有限公司 | ID card image classification method based on convolutional neural networks |
2017-09-30: Application CN201710929741.1A filed in China (CN); published as CN108229298A; status: Pending
Non-Patent Citations (1)
Title |
---|
WEIYANG LIU et al.: "Large-Margin Softmax Loss for Convolutional Neural Networks", Proceedings of the 33rd International Conference on Machine Learning * |
Cited By (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063742A (en) * | 2018-07-06 | 2018-12-21 | 平安科技(深圳)有限公司 | Butterfly identification network construction method and device, computer equipment and storage medium |
CN109063742B (en) * | 2018-07-06 | 2023-04-18 | 平安科技(深圳)有限公司 | Butterfly identification network construction method and device, computer equipment and storage medium |
CN109190654A (en) * | 2018-07-09 | 2019-01-11 | 上海斐讯数据通信技术有限公司 | Training method and device for face recognition model |
CN109214175A (en) * | 2018-07-23 | 2019-01-15 | 中国科学院计算机网络信息中心 | Method, device and storage medium for training classifier based on sample features |
CN109214175B (en) * | 2018-07-23 | 2021-11-16 | 中国科学院计算机网络信息中心 | Method, device and storage medium for training classifier based on sample characteristics |
CN109002562A (en) * | 2018-08-30 | 2018-12-14 | 北京信立方科技发展股份有限公司 | Instrument recognition model training method and device, and instrument recognition method and device |
CN109002562B (en) * | 2018-08-30 | 2021-04-13 | 北京信立方科技发展股份有限公司 | Instrument recognition model training method and device and instrument recognition method and device |
CN109190120B (en) * | 2018-08-31 | 2020-01-21 | 第四范式(北京)技术有限公司 | Neural network training method and device and named entity identification method and device |
CN109190120A (en) * | 2018-08-31 | 2019-01-11 | 第四范式(北京)技术有限公司 | Neural network training method and device, and named entity recognition method and device |
CN109543524A (en) * | 2018-10-18 | 2019-03-29 | 同盾控股有限公司 | Image recognition method and device |
WO2020082595A1 (en) * | 2018-10-26 | 2020-04-30 | 平安科技(深圳)有限公司 | Image classification method, terminal device and non-volatile computer readable storage medium |
CN109257622A (en) * | 2018-11-01 | 2019-01-22 | 广州市百果园信息技术有限公司 | Audio/video processing method, device, equipment and medium |
CN109685106A (en) * | 2018-11-19 | 2019-04-26 | 深圳博为教育科技有限公司 | Image recognition method, face attendance method, device and system |
CN109711358A (en) * | 2018-12-28 | 2019-05-03 | 四川远鉴科技有限公司 | Neural network training method, face recognition method and system, and storage medium |
CN111401112B (en) * | 2019-01-03 | 2024-06-18 | 北京京东尚科信息技术有限公司 | Face recognition method and device |
CN111401112A (en) * | 2019-01-03 | 2020-07-10 | 北京京东尚科信息技术有限公司 | Face recognition method and device |
CN113688933B (en) * | 2019-01-18 | 2024-05-24 | 北京市商汤科技开发有限公司 | Classification network training method, classification method and device and electronic equipment |
CN113688933A (en) * | 2019-01-18 | 2021-11-23 | 北京市商汤科技开发有限公司 | Classification network training method, classification method and device, and electronic device |
CN111507469B (en) * | 2019-01-31 | 2023-10-13 | 斯特拉德视觉公司 | Method and device for optimizing super parameters of automatic labeling device |
CN111507469A (en) * | 2019-01-31 | 2020-08-07 | 斯特拉德视觉公司 | Method and device for optimizing hyper-parameters of automatic labeling device |
CN111757172A (en) * | 2019-03-29 | 2020-10-09 | Tcl集团股份有限公司 | HDR video acquisition method, HDR video acquisition device and terminal equipment |
CN111832342A (en) * | 2019-04-16 | 2020-10-27 | 阿里巴巴集团控股有限公司 | Neural network, training and using method, device, electronic equipment and medium |
CN110349124A (en) * | 2019-06-13 | 2019-10-18 | 平安科技(深圳)有限公司 | Intelligent detection method and device for vehicle appearance damage, and computer-readable storage medium |
CN110490239A (en) * | 2019-08-06 | 2019-11-22 | 腾讯医疗健康(深圳)有限公司 | Training method, quality classification method, device and equipment for image quality control network |
CN110490239B (en) * | 2019-08-06 | 2024-02-27 | 腾讯医疗健康(深圳)有限公司 | Training method, quality classification method, device and equipment of image quality control network |
CN112348045A (en) * | 2019-08-09 | 2021-02-09 | 北京地平线机器人技术研发有限公司 | Training method and training device for neural network and electronic equipment |
CN110569911A (en) * | 2019-09-11 | 2019-12-13 | 深圳绿米联创科技有限公司 | Image recognition method, device, system, electronic equipment and storage medium |
CN110569911B (en) * | 2019-09-11 | 2022-06-07 | 深圳绿米联创科技有限公司 | Image recognition method, device, system, electronic equipment and storage medium |
CN110880018A (en) * | 2019-10-29 | 2020-03-13 | 北京邮电大学 | Convolutional neural network target classification method based on a novel loss function |
CN110880018B (en) * | 2019-10-29 | 2023-03-14 | 北京邮电大学 | Convolutional neural network target classification method |
CN112784953A (en) * | 2019-11-07 | 2021-05-11 | 佳能株式会社 | Training method and device for object recognition model |
CN111144566A (en) * | 2019-12-30 | 2020-05-12 | 深圳云天励飞技术有限公司 | Training method for neural network weight parameters, feature classification method and corresponding device |
CN111144566B (en) * | 2019-12-30 | 2024-03-22 | 深圳云天励飞技术有限公司 | Training method for neural network weight parameters, feature classification method and corresponding device |
CN111340213A (en) * | 2020-02-19 | 2020-06-26 | 浙江大华技术股份有限公司 | Neural network training method, electronic device, and storage medium |
CN111340213B (en) * | 2020-02-19 | 2023-01-17 | 浙江大华技术股份有限公司 | Neural network training method, electronic device, and storage medium |
CN111415333A (en) * | 2020-03-05 | 2020-07-14 | 北京深睿博联科技有限责任公司 | Training method and device for breast X-ray image antisymmetric generation analysis model |
CN111415333B (en) * | 2020-03-05 | 2023-12-01 | 北京深睿博联科技有限责任公司 | Mammary gland X-ray image antisymmetric generation analysis model training method and device |
CN111582376A (en) * | 2020-05-09 | 2020-08-25 | 北京字节跳动网络技术有限公司 | Neural network visualization method and device, electronic equipment and medium |
CN111582376B (en) * | 2020-05-09 | 2023-08-15 | 抖音视界有限公司 | Visualization method and device for neural network, electronic equipment and medium |
WO2021244521A1 (en) * | 2020-06-04 | 2021-12-09 | 广州虎牙科技有限公司 | Object classification model training method and apparatus, electronic device, and storage medium |
CN113989519B (en) * | 2021-12-28 | 2022-03-22 | 中科视语(北京)科技有限公司 | Long-tail target detection method and system |
CN113989519A (en) * | 2021-12-28 | 2022-01-28 | 中科视语(北京)科技有限公司 | Long-tail target detection method and system |
CN114636995A (en) * | 2022-03-16 | 2022-06-17 | 中国水产科学研究院珠江水产研究所 | Underwater sound signal detection method and system based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229298A (en) | The training of neural network and face identification method and device, equipment, storage medium | |
Lu et al. | Learning optimal seeds for diffusion-based salient object detection | |
CN103632168B (en) | Classifier integration method for machine learning | |
CN108229296A (en) | Face skin attribute recognition method and device, electronic device, storage medium | |
CN111027378B (en) | Pedestrian re-identification method, device, terminal and storage medium | |
CN108229330A (en) | Face fusion recognition method and device, electronic device and storage medium | |
CN108229479A (en) | Training method and device for semantic segmentation model, electronic device, storage medium | |
US20050251347A1 (en) | Automatic visual recognition of biological particles | |
CN110472494A (en) | Face feature extraction model training method, face feature extraction method, device, equipment and storage medium | |
CN110020592A (en) | Object detection model training method, device, computer equipment and storage medium | |
CN107430678A (en) | Low-cost face recognition using Gaussian receptive field features | |
CN109800821A (en) | Neural network training method, image processing method, device, equipment and medium | |
CN104899579A (en) | Face recognition method and face recognition device | |
CN105303150B (en) | Method and system for implementing image processing | |
CN110717554B (en) | Image recognition method, electronic device, and storage medium | |
CN104239902B (en) | Hyperspectral image classification method based on non-local similarity and sparse coding | |
CN108231190A (en) | Image processing method and neural network system, equipment, medium, program | |
CN106845358A (en) | Method and system for handwritten character image feature recognition | |
CN108280451A (en) | Semantic segmentation and network training method and device, equipment, medium, program | |
CN110084609B (en) | Transaction fraud behavior deep detection method based on characterization learning | |
CN110348494A (en) | Human action recognition method based on dual-channel residual neural network | |
CN108228684A (en) | Training method and device for clustering model, electronic device and computer storage medium | |
CN110457677A (en) | Entity-relationship recognition method and device, storage medium, computer equipment | |
CN109726918A (en) | Personal credit determination method based on generative adversarial networks and semi-supervised learning | |
CN104809471B (en) | Hyperspectral image residual ensemble classification method based on spatial-spectral information | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180629 |