CN110009052A - Image recognition method, and image recognition model training method and apparatus - Google Patents

Image recognition method, and image recognition model training method and apparatus

Info

Publication number
CN110009052A
Authority
CN
China
Prior art keywords
image
training
feature
scale
model
Prior art date
Legal status
Granted
Application number
CN201910289986.1A
Other languages
Chinese (zh)
Other versions
CN110009052B (en)
Inventor
王一同
黄佳博
季兴
周正
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910289986.1A
Publication of CN110009052A
Application granted
Publication of CN110009052B
Legal status: Active


Classifications

    • G06F 18/214 — Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns, e.g. bagging or boosting
    • G06F 18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F 18/24 — Pattern recognition; analysing; classification techniques
    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks


Abstract

This application discloses an image recognition method, comprising: obtaining an image to be recognized; obtaining a first image feature of the image to be recognized with a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device; determining, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature, wherein each second image feature is an image feature extracted from an image to be matched by a large-scale image recognition model; and determining an image recognition result for the image to be recognized according to the image similarity. A method and an apparatus for training an image recognition model are also disclosed. The application uses the large-scale image recognition model to extract high-quality image features and the small-scale image recognition model to perform efficient computation, so that the recognition accuracy of the small-scale image recognition model is improved while runtime efficiency is preserved.

Description

Image recognition method, and image recognition model training method and apparatus
Technical field
This application relates to the field of artificial intelligence, and in particular to an image recognition method and to a method and apparatus for training an image recognition model.
Background
Face recognition is an important research topic in computer vision and is widely applied in industry. With the development and popularization of mobile devices, the demand for running face recognition algorithms on terminal devices keeps growing. However, the limited computing power and storage space of terminal systems, together with strict real-time requirements, make it impractical to run large-scale neural network models on them directly.
Currently, face recognition methods designed for terminal devices improve the structure and arithmetic operations of large-scale face recognition convolutional neural network (CNN) models, reducing the number of parameters in the model while maintaining model performance as far as possible.
Most face recognition methods that run on terminal devices increase inference speed by reducing the number of model parameters. However, because a small model has only a limited number of parameters, the complexity of the solutions it can fit is much lower than that of a large model, which causes recognition accuracy to drop. If a large model is deployed directly on a terminal system, high recognition performance can be guaranteed, but the demands on the computing capability of the terminal device are high and recognition efficiency cannot be guaranteed.
Summary of the invention
Embodiments of the present application provide an image recognition method and a method and apparatus for training an image recognition model, which use a large-scale image recognition model to extract high-quality image features and a small-scale image recognition model to perform efficient computation, so that the recognition accuracy of the small-scale image recognition model is improved while runtime efficiency is guaranteed.
In view of this, a first aspect of the present application provides an image recognition method, comprising:
obtaining an image to be recognized;
obtaining a first image feature of the image to be recognized with a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device;
determining, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model, and N is an integer greater than or equal to 1;
determining an image recognition result for the image to be recognized according to the image similarity.
A second aspect of the present application provides a method for training an image recognition model, comprising:
obtaining a set of training images, wherein the set contains at least one training image and each training image corresponds to an identity label;
obtaining, with a large-scale image recognition model, a first training image feature corresponding to each training image;
obtaining, with a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
training the small-scale image recognition model to be trained according to the first training image feature, the second training image feature and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device and the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model.
A third aspect of the present application provides an image recognition apparatus, comprising:
an obtaining module, configured to obtain an image to be recognized;
the obtaining module being further configured to obtain a first image feature of the image to be recognized with a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device;
a determining module, configured to determine, according to the first image feature obtained by the obtaining module and N second image features, the image similarity between the first image feature and each second image feature, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model, and N is an integer greater than or equal to 1;
the determining module being further configured to determine an image recognition result for the image to be recognized according to the image similarity.
In a possible design, in a first implementation of the third aspect of the embodiments of the present application,
the determining module is specifically configured to: if N is equal to 1, calculate the image similarity from the first image feature and the second image feature;
and if the image similarity reaches a similarity threshold, determine that the image to be recognized and the image to be matched have the same identity label.
In a possible design, in a second implementation of the third aspect of the embodiments of the present application,
the determining module is specifically configured to: if N is greater than 1, calculate N image similarities from the first image feature and each second image feature;
determine, from the N image similarities, the image to be matched corresponding to a target image similarity, wherein the target image similarity is the maximum of the N image similarities;
and determine that the image to be recognized and the image to be matched corresponding to the target image similarity have the same identity label.
In a possible design, in a third implementation of the third aspect of the embodiments of the present application,
the determining module is specifically configured to calculate the image similarity as follows:
S(Ip, Ig) = (FB(Ig) · FS(Ip)) / (||FB(Ig)|| · ||FS(Ip)||);
wherein S(Ip, Ig) denotes the image similarity between the image to be recognized and the image to be matched, Ip denotes the image to be recognized, Ig denotes the image to be matched, FS(Ip) denotes the first image feature, FB(Ig) denotes the second image feature, and ||·|| denotes the norm (length) of a feature vector.
A fourth aspect of the present application provides an image recognition model training apparatus, comprising:
an obtaining module, configured to obtain a set of training images, wherein the set contains at least one training image and each training image corresponds to an identity label;
the obtaining module being further configured to obtain, with a large-scale image recognition model, a first training image feature corresponding to each training image;
the obtaining module being further configured to obtain, with a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
a training module, configured to train the small-scale image recognition model to be trained according to the first training image feature, the second training image feature and the class weight vector corresponding to each training image obtained by the obtaining module, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device and the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model.
In a possible design, in a first implementation of the fourth aspect of the embodiments of the present application,
the obtaining module is further configured to, before the first training image feature corresponding to each training image is obtained with the large-scale image recognition model, obtain, with a large-scale image recognition model to be trained, a third training image feature corresponding to each training image, wherein each third training image feature corresponds to a class weight vector;
the training module is further configured to train the large-scale image recognition model to be trained with a classification loss function, according to the third training image feature and the class weight vector corresponding to each training image obtained by the obtaining module, to obtain the large-scale image recognition model.
In a possible design, in a second implementation of the fourth aspect of the embodiments of the present application,
the training module is specifically configured to determine a first loss function according to the second training image feature corresponding to each training image and the class weight vector corresponding to each training image;
determine a second loss function according to the first training image feature and the second training image feature corresponding to each training image;
determine a target loss function according to the first loss function and the second loss function;
and train the small-scale image recognition model to be trained with the target loss function, to obtain the small-scale image recognition model.
In a possible design, in a third implementation of the fourth aspect of the embodiments of the present application,
the training module is specifically configured to determine the first loss function as follows:
LLCML = -(1/N) Σ(i=1..N) log( e^(s·(cos(Wi, FS(Ii)) - m)) / ( e^(s·(cos(Wi, FS(Ii)) - m)) + Σ(j≠i) e^(s·cos(Wj, FS(Ii))) ) ),
s.t. ||FS(I)|| = 1, ||W|| = 1;
wherein LLCML denotes the first loss function, N denotes the total number of training images in the set of training images, i denotes the i-th training image in the set, j denotes the j-th training image in the set, e denotes the natural base, cos(·,·) denotes the cosine of the angle between two vectors, s and m denote hyperparameters of the first loss function, Ii denotes the i-th training image, FS(Ii) denotes the second training image feature corresponding to the i-th training image, Wi denotes the class weight vector corresponding to the i-th training image, Wj denotes the class weight vector corresponding to the j-th training image, W denotes a class weight vector, FS(I) denotes a second training image feature, s.t. denotes "subject to", ||·|| denotes the norm of a feature vector, and FS(·) denotes feature extraction by the small-scale image recognition model to be trained.
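A loss of this form, i.e. a large-margin cosine loss over unit-normalized features and class weight vectors, might be implemented in PyTorch roughly as sketched below. The values of s and m, the weight initialization and the class layout are illustrative assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeMarginCosineLoss(nn.Module):
    """Cosine-margin classification loss over normalized features and class weights."""
    def __init__(self, feature_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feature_dim))  # class weight vectors
        self.s, self.m = s, m

    def forward(self, features, labels):
        # cos(W_j, F_S(I_i)) for every class weight vector and every feature in the batch
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        margin = torch.zeros_like(cosine).scatter_(1, labels.view(-1, 1), self.m)
        logits = self.s * (cosine - margin)  # margin m is subtracted only for the true class
        return F.cross_entropy(logits, labels)
```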
In a possible design, in a fourth implementation of the fourth aspect of the embodiments of the present application,
the training module is specifically configured to determine the second loss function as follows:
LL2 = (1/N) Σ(i=1..N) ||FS(Ii) - FB(Ii)||2^2;
wherein LL2 denotes the second loss function, N denotes the total number of training images in the set of training images, i denotes the i-th training image in the set, Ii denotes the i-th training image, FS(Ii) denotes the second training image feature corresponding to the i-th training image, FB(Ii) denotes the first training image feature corresponding to the i-th training image, ||·||2 denotes the L2 norm of a vector, FS(·) denotes feature extraction by the small-scale image recognition model to be trained, and FB(·) denotes feature extraction by the large-scale image recognition model.
In a possible design, in a fifth implementation of the fourth aspect of the embodiments of the present application,
the training module is specifically configured to determine the target loss function as follows:
L = λ1·LLCML + λ2·LL2;
wherein L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, LLCML denotes the first loss function, and LL2 denotes the second loss function.
A fifth aspect of the present application provides a terminal device, comprising a memory, a transceiver, a processor and a bus system;
wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory, performing the following steps:
obtaining an image to be recognized;
obtaining a first image feature of the image to be recognized with a small-scale image recognition model, wherein the small-scale image recognition model is deployed on the terminal device;
determining, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model, and N is an integer greater than or equal to 1;
determining an image recognition result for the image to be recognized according to the image similarity;
and the bus system is configured to connect the memory and the processor so that the memory and the processor can communicate.
A sixth aspect of the present application provides a server, comprising a memory, a transceiver, a processor and a bus system;
wherein the memory is configured to store a program;
the processor is configured to execute the program in the memory, performing the following steps:
obtaining a set of training images, wherein the set contains at least one training image and each training image corresponds to an identity label;
obtaining, with a large-scale image recognition model, a first training image feature corresponding to each training image;
obtaining, with a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
training the small-scale image recognition model to be trained according to the first training image feature, the second training image feature and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device and the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model;
and the bus system is configured to connect the memory and the processor so that the memory and the processor can communicate.
A seventh aspect of the present application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to perform the methods described in the above aspects.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
In the embodiments of the present application, an image recognition method is provided. An image to be recognized is first obtained; a first image feature of the image to be recognized is then obtained with a small-scale image recognition model deployed on a terminal device; next, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature is determined, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model and N is an integer greater than or equal to 1; finally, an image recognition result for the image to be recognized is determined according to the image similarity. In this way, all image features in the database are extracted in advance on the server with the large-scale image recognition model, while the image feature of the image to be recognized is extracted on the terminal device with the small-scale image recognition model. The large-scale image recognition model supplies high-quality image features and the small-scale image recognition model performs efficient computation, so the recognition accuracy of the small-scale image recognition model is improved without sacrificing runtime efficiency.
Description of the drawings
Fig. 1 is an architecture diagram of an image recognition system in an embodiment of the present application;
Fig. 2 is a schematic diagram of an embodiment of the image recognition method in an embodiment of the present application;
Fig. 3 is a schematic diagram of an application framework for image comparison in an embodiment of the present application;
Fig. 4 is a schematic diagram of an application framework for image retrieval in an embodiment of the present application;
Fig. 5 is a schematic diagram of an embodiment of the image recognition model training method in an embodiment of the present application;
Fig. 6 is a schematic diagram of a training framework for the large-scale image recognition model in an embodiment of the present application;
Fig. 7 is a schematic diagram of a training framework for the small-scale image recognition model in an embodiment of the present application;
Fig. 8 is a schematic diagram of an embodiment of the image recognition apparatus in an embodiment of the present application;
Fig. 9 is a schematic diagram of an embodiment of the image recognition model training apparatus in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a terminal device in an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a server in an embodiment of the present application.
Detailed description of the embodiments
Embodiments of the present application provide an image recognition method and a method and apparatus for training an image recognition model, which use a large-scale image recognition model to extract high-quality image features and a small-scale image recognition model to perform efficient computation, so that the recognition accuracy of the small-scale image recognition model is improved while runtime efficiency is guaranteed.
The terms "first", "second", "third", "fourth" and the like (if any) in the description, the claims and the drawings of the present application are used to distinguish similar objects and are not intended to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than the one illustrated or described here. In addition, the terms "comprise" and "correspond to" and any variants of them are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
It should be understood that the image recognition method provided by the present application belongs to the field of artificial intelligence. It can specifically be applied to face recognition and face verification, and more specifically to scenarios such as identity authentication, face-scan payment, residential access control and face-based surveillance. The image recognition method provided herein can recognize face images quickly and accurately on a terminal device, and in particular can quickly and accurately determine whether the face photos to be compared belong to the same identity, with strong robustness to variations in expression, age and pose.
The present application proposes an image recognition method based on asymmetric features. In the training stage, the image features extracted by a large-scale image recognition model are used as labels to guide the training of a small-scale image recognition model, so that the small-scale image recognition model learns how to map an input image into the feature space occupied by the large-scale image recognition model. In the application stage, the image features of the database images (the gallery) are first extracted with the large-scale image recognition model. Extracting these features is an offline process: the large-scale model only needs to extract the gallery features once and they can then be reused many times, so the large model does not have to run for every recognition request and this step has little effect on the online efficiency of the image recognition model. Online, after a face image to be recognized is obtained, the trained small-scale image recognition model extracts its face feature in real time, and this feature is matched against the extracted gallery features to obtain the recognition result. Because the small-scale image recognition model has learned, during training, to map images into the feature space of the large-scale image recognition model, the features of the large-scale model and the small-scale model can be matched in this asymmetric way in the application stage.
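As an illustration of this asymmetric pipeline, the following Python sketch separates the offline gallery step from the online query step. It is only a schematic under assumed interfaces: the `extract` method on the two models, the gallery structure and the threshold value are hypothetical placeholders, not details given by the patent.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_gallery(large_model, gallery_images):
    """Offline, on the server: extract every gallery feature once with the large model."""
    return {image_id: large_model.extract(image) for image_id, image in gallery_images.items()}

def recognize(small_model, query_image, gallery_features, threshold=0.5):
    """Online, on the terminal: extract the query feature with the small model and
    match it asymmetrically against the precomputed large-model gallery features."""
    query_feature = small_model.extract(query_image)
    scores = {image_id: cosine_similarity(query_feature, feature)
              for image_id, feature in gallery_features.items()}
    best_id, best_score = max(scores.items(), key=lambda item: item[1])
    return best_id if best_score >= threshold else None
```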
For ease of understanding, the present application applies this method to the image recognition system shown in Fig. 1. Referring to Fig. 1, which is an architecture diagram of the image recognition system in an embodiment of the present application, the client obtains a face image and feeds it into a pre-trained small-scale image recognition model, which is deployed locally on the client and has fewer model parameters. The small-scale image recognition model outputs image feature A for the face image. The client then directly obtains image feature B, which was extracted in advance by the large-scale image recognition model; the large-scale image recognition model is deployed on the server and has more model parameters. The client compares image feature A with image feature B to obtain the recognition result.
It should be noted that the client is deployed on a terminal device, where terminal devices include, but are not limited to, tablet computers, laptops, palmtop computers, mobile phones, voice interaction devices and personal computers (PCs); no limitation is imposed here. Voice interaction devices include, but are not limited to, smart speakers and smart home appliances.
Compared with a method that performs image recognition symmetrically with only a small-scale image recognition model, the method proposed in this application improves recognition accuracy while keeping computation fast. Compared with a method that performs image recognition symmetrically with only a large-scale image recognition model, it greatly increases the computation speed of the model with as little loss of recognition performance as possible, and at the same time reduces the model size and therefore the memory required on the terminal device.
With the above introduction in mind, the image recognition method in the present application is described below. Referring to Fig. 2, an embodiment of the image recognition method in the embodiments of the present application includes the following steps.
101. Obtain an image to be recognized.
In this embodiment, the image recognition apparatus obtains an image to be recognized. It should be understood that the image recognition apparatus is deployed on a terminal device, and the image to be recognized may be an image captured in real time by the camera of the terminal device or an image stored locally on the terminal device. The image to be recognized may specifically be a face image, but may also be an animal image, a plant image, a building image or even a dynamic image. The present application uses a face image as an example, which should not be construed as a limitation of the present application.
102. Obtain a first image feature of the image to be recognized with a small-scale image recognition model, wherein the small-scale image recognition model is deployed on the terminal device.
In this embodiment, the image recognition apparatus inputs the image to be recognized into the small-scale image recognition model, which outputs the corresponding first image feature. The small-scale image recognition model is deployed on the terminal device and can extract image features while offline.
It should be understood that the small-scale image recognition model may be one of MobileNet, ShuffleNet or SqueezeNet, or another lightweight network. The MobileNet family includes MobileNet V1, MobileNet V2 and MobileFaceNet, a face network for mobile devices. MobileNet V1 introduces depthwise separable convolutions to reduce the large number of parameters in conventional convolutions, and also introduces two hyperparameters that control the input image resolution and the model width. MobileNet V2 builds on V1 by introducing inverted residuals and linear bottlenecks to alleviate the severe feature degradation present in V1. MobileFaceNet modifies the V2 network structure appropriately so that the model is better suited to face recognition.
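For reference, a depthwise separable convolution of the kind MobileNet V1 relies on can be sketched in PyTorch roughly as follows; the batch-norm placement, activation choice and channel counts are illustrative, not specifics from the patent.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution (one filter per channel) followed by a
    1x1 pointwise convolution, replacing a standard 3x3 convolution."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=stride,
                                   padding=1, groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))
```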
ShuffleNet is a modified form of the residual neural network (ResNet) that greatly reduces the computation of ResNet by means of group convolution and channel shuffle. Group convolution effectively reduces the cost of the convolution operation, while the channel shuffle operation guarantees the exchange of information between different groups.
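The channel shuffle operation mentioned above can be written compactly; this is a generic sketch of the idea rather than a particular ShuffleNet implementation.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so that information can flow
    between the groups of consecutive group convolutions."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)  # split the channel dimension into groups
    x = x.transpose(1, 2).contiguous()        # swap the group and per-group channel axes
    return x.view(n, c, h, w)
```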
SqueezeNet is a lightweight convolutional neural network (CNN) model built mainly by stacking Fire modules as its basic element. A Fire module consists of a squeeze layer and an expand layer. In the model design, a large number of 1x1 convolution kernels are used in place of 3x3 kernels, and the number of input channels of the 3x3 kernels is kept as small as possible, which greatly reduces the number of convolution parameters. In addition, down-sampling is delayed so that the lower layers of the network keep larger feature maps, which helps improve model accuracy.
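A Fire module of the kind described can be sketched as follows; the specific channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Fire module: a 1x1 squeeze layer feeding parallel 1x1 and 3x3 expand layers."""
    def __init__(self, in_channels, squeeze_channels, expand1x1_channels, expand3x3_channels):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze_channels, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_channels, expand1x1_channels, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_channels, expand3x3_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)
```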
103. Determine, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model, and N is an integer greater than or equal to 1.
In this embodiment, after extracting the first image feature of the image to be recognized, the image recognition apparatus can match the first image feature against the N second image features, where N is an integer greater than or equal to 1, and thus obtain N image similarities between the first image feature and the second image features.
It should be noted that the N second image features are all the image features extracted from the database images (the gallery) by the large-scale image recognition model. Using the large-scale image recognition model is normally an offline process: the large-scale model only needs to extract the gallery features once and the extracted features can be used many times, so the large model does not need to run for every recognition request. The large-scale image recognition model is normally deployed on a server, although in practice it may also be deployed on a terminal device. Therefore, extracting features with the large-scale image recognition model does not have a significant impact on online recognition efficiency. When the small-scale image recognition model is used online, after the image to be recognized is obtained, the trained small-scale image recognition model extracts its image feature in real time, and this feature is matched against the features of the gallery images to obtain the pairwise image similarities.
The number of model parameters of the large-scale image recognition model is greater than that of the small-scale image recognition model; in other words, the complexity of the large-scale image recognition model is higher than that of the small-scale image recognition model, so running the large-scale image recognition model requires more resources, while the small-scale image recognition model is much lighter to run.
104. Determine an image recognition result for the image to be recognized according to the image similarity.
In this embodiment, the image recognition apparatus determines the image recognition result for the image to be recognized according to the pairwise image similarities.
Because the small-scale image recognition model learned, during training, to map images into the feature space of the large-scale image recognition model, the features of the large-scale image recognition model and the small-scale image recognition model can be matched in this asymmetric way in the application stage.
In the embodiments of the present application, an image recognition method is provided: an image to be recognized is first obtained; a first image feature of the image to be recognized is then obtained with a small-scale image recognition model deployed on a terminal device; next, according to the first image feature and N second image features, the image similarity between the first image feature and each second image feature is determined, wherein each second image feature is an image feature obtained from an image to be matched by a large-scale image recognition model and N is an integer greater than or equal to 1; finally, an image recognition result for the image to be recognized is determined according to the image similarity. In this way, all image features in the database are extracted in advance on the server with the large-scale image recognition model, while the image feature of the image to be recognized is extracted on the terminal device with the small-scale image recognition model; the large-scale model supplies high-quality image features and the small-scale model performs efficient computation, so the recognition accuracy of the small-scale image recognition model is improved without sacrificing runtime efficiency.
Optionally, on the basis of the embodiment corresponding to Fig. 2, in a first alternative embodiment of the image recognition method provided by the embodiments of the present application, determining the image similarity between the first image feature and the second image feature according to the first image feature and the N second image features may include:
if N is equal to 1, calculating the image similarity from the first image feature and the second image feature;
and determining the image recognition result for the image to be recognized according to the image similarity includes:
if the image similarity reaches a similarity threshold, determining that the image to be recognized and the image to be matched have the same identity label.
This embodiment introduces one way of determining the image recognition result. When N is equal to 1, the image features of two images are compared for similarity to decide whether the two images belong to the same identity label.
For ease of description, face image verification is taken as an example. Referring to Fig. 3, which is a schematic diagram of the application framework for image comparison in an embodiment of the present application, the image to be recognized is input into the small-scale image recognition model, which outputs image feature A for the image to be recognized. The server has extracted the image features of the gallery images with the large-scale image recognition model in advance, including image feature B of the image to be matched. Image feature A and image feature B are then compared, i.e. the similarity between the two features is calculated to obtain the image similarity. If the image similarity reaches the similarity threshold, it is determined that the image to be recognized and the image to be matched belong to the same identity and therefore have the same identity label; for example, the image recognition result is that the person in the image to be recognized is "Xiao Ming". If the image similarity does not reach the similarity threshold, the image to be recognized and the image to be matched do not belong to the same identity and have different identity labels; for example, the image recognition result is that the person in the image to be recognized is not "Xiao Ming".
Thus, in this embodiment of the present application, a method of image comparison is provided: if N is equal to 1, the image similarity is calculated from the first image feature and the second image feature, and if the image similarity reaches the similarity threshold, it is determined that the image to be recognized and the image to be matched have the same identity label. In this way, in image verification scenarios, the face image features of all known identities in the database can be extracted in advance with the large-scale image recognition model; when face verification is needed, the small-scale image recognition model computes the corresponding face image feature in real time and performs 1:1 face verification on the face image to be recognized. Computing face similarity with asymmetric features in this way improves the recognition accuracy of the algorithm while guaranteeing its efficiency.
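A minimal sketch of this 1:1 verification flow, assuming the enrolled feature was computed in advance by the large-scale model and that the small model exposes a hypothetical `extract` method (neither detail is fixed by the patent):

```python
import numpy as np

def verify(small_model, probe_image, enrolled_feature, threshold=0.4):
    """1:1 verification: same identity iff the asymmetric cosine similarity between
    feature A (small model) and feature B (large model) reaches the threshold."""
    feature_a = small_model.extract(probe_image)
    similarity = float(np.dot(feature_a, enrolled_feature) /
                       (np.linalg.norm(feature_a) * np.linalg.norm(enrolled_feature)))
    return similarity >= threshold
```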
Optionally, on the basis of the embodiment corresponding to Fig. 2, in a second alternative embodiment of the image recognition method provided by the embodiments of the present application, determining the image similarity between the first image feature and the second image feature according to the first image feature and the N second image features may include:
if N is greater than 1, calculating N image similarities from the first image feature and each second image feature;
and determining the image recognition result for the image to be recognized according to the image similarity includes:
determining, from the N image similarities, the image to be matched corresponding to a target image similarity, wherein the target image similarity is the maximum of the N image similarities;
and determining that the image to be recognized and the image to be matched corresponding to the target image similarity have the same identity label.
This embodiment introduces another way of determining the image recognition result. When N is greater than 1, the image features of multiple images are compared for similarity so as to determine the identity label of the image to be recognized.
For ease of description, face image identification is taken as an example. Referring to Fig. 4, which is a schematic diagram of the application framework for image retrieval in an embodiment of the present application, the image to be recognized is input into the small-scale image recognition model, which outputs image feature A for the image to be recognized. The server has extracted the image features of all gallery images with the large-scale image recognition model in advance, for example image feature B of image 1, image feature B of image 2, and so on, and has stored the feature of each image in an image feature database. The image feature database contains N image features, and image feature A of the image to be recognized is matched against each image feature in the image feature database. Assuming N is 100, 100 image similarities are obtained. The maximum of these 100 image similarities is selected as the target image similarity, and the image to be recognized and the image to be matched corresponding to the target image similarity have the same identity label; for example, the image recognition result is that the person in the image to be recognized is "Xiao Hong".
Thus, in this embodiment of the present application, a method of image retrieval is provided: if N is greater than 1, N image similarities are calculated from the first image feature and each second image feature; the image to be matched corresponding to the target image similarity, i.e. the maximum of the N image similarities, is then determined; and finally it is determined that the image to be recognized and this image to be matched have the same identity label. In this way, in image retrieval scenarios, the face image features of all known identities in the database can be extracted in advance with the large-scale image recognition model; when recognition is needed, the small-scale image recognition model computes the corresponding face image feature in real time and performs 1:N face recognition on the face image to be recognized. Computing face similarity with asymmetric features in this way improves the recognition accuracy of the algorithm while guaranteeing its efficiency.
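The 1:N retrieval flow can be sketched in the same way; the gallery mapping and the `extract` method are again assumed interfaces for illustration.

```python
import numpy as np

def identify(small_model, probe_image, gallery_features):
    """1:N identification: return the identity whose large-model feature has the
    highest cosine similarity with the small-model feature of the probe image."""
    probe = small_model.extract(probe_image)
    probe = probe / np.linalg.norm(probe)
    best_identity, best_similarity = None, -1.0
    for identity, feature in gallery_features.items():
        similarity = float(np.dot(probe, feature / np.linalg.norm(feature)))
        if similarity > best_similarity:
            best_identity, best_similarity = identity, similarity
    return best_identity, best_similarity
```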
Optionally, on the basis of the first embodiment corresponding to Fig. 2, in a third alternative embodiment of the image recognition method provided by the embodiments of the present application, calculating the image similarity from the first image feature and the second image feature may include:
calculating the image similarity as follows:
S(Ip, Ig) = (FB(Ig) · FS(Ip)) / (||FB(Ig)|| · ||FS(Ip)||);
wherein S(Ip, Ig) denotes the image similarity between the image to be recognized and the image to be matched, Ip denotes the image to be recognized, Ig denotes the image to be matched, FS(Ip) denotes the first image feature, FB(Ig) denotes the second image feature, and ||·|| denotes the norm (length) of a feature vector.
This embodiment describes one way of calculating the image similarity. In the application stage, the trained large-scale image recognition model is used to extract the image features of all images in the gallery, and the trained small-scale image recognition model is used to extract the image feature of the image to be recognized. For two given feature vectors x1 and x2, the cosine similarity S can be calculated as:
S = (x1 · x2) / (||x1|| · ||x2||)
The larger the cosine similarity S, the more likely it is that feature vectors x1 and x2 come from the same person; conversely, the smaller it is, the less likely they come from the same person.
More specifically, taking two images as an example, Ip denotes the image to be recognized, Ig denotes the image to be matched, FS(·) denotes feature extraction by the small-scale image recognition model, so FS(Ip) denotes the first image feature, and FB(·) denotes feature extraction by the large-scale image recognition model, so FB(Ig) denotes the second image feature. The face similarity between the image to be recognized and the database image is then measured by the cosine similarity between FB(Ig) and FS(Ip), which can be expressed as:
S(Ip, Ig) = (FB(Ig) · FS(Ip)) / (||FB(Ig)|| · ||FS(Ip)||)
Here S(Ip, Ig) denotes the image similarity between the image to be recognized and the image to be matched. For face verification or face comparison, if S(Ip, Ig) is greater than the decision threshold, the two images are judged to be of the same identity; otherwise they are judged to be of different identities. For face recognition or face search, the goal is to find, among the N images, the result most similar to the image to be recognized, which is achieved by sorting and taking the maximum similarity. Adopting this asymmetric-feature similarity measurement in the application stage allows the high-quality features extracted by the large-scale image recognition model to be exploited while keeping the efficient computation speed of the small-scale image recognition model, thereby improving face recognition accuracy.
Furthermore, in this embodiment of the present application, a method of obtaining the image similarity from the first image feature and the second image feature is provided. In this way, the image similarity between image features can be calculated accurately, which provides a reliable basis for subsequent recognition and improves the feasibility and operability of the solution.
Based on the above introduction, the image recognition model training method in the present application is described below. Referring to Fig. 5, an embodiment of the image recognition model training method in the embodiments of the present application includes the following steps.
201. Obtain a set of training images, wherein the set contains at least one training image and each training image corresponds to an identity label.
This embodiment describes how the small-scale image recognition model is trained; the image recognition model training apparatus is deployed on a server. The image recognition model training apparatus obtains a set of training images, which may specifically be the images in the gallery. A training image may specifically be a face image, but may also be an animal image, a plant image, a building image or even a dynamic image. The present application uses face images as an example, which should not be construed as a limitation of the present application.
Each training image corresponds to an identity label. For example, training image 1 has identity label 001, which denotes the identity "Xiao Ming"; training image 2 has identity label 002, which denotes the identity "Xiao Hong"; and so on.
202. Obtain, with a large-scale image recognition model, a first training image feature corresponding to each training image.
In this embodiment, the image recognition model training apparatus may directly obtain a trained large-scale image recognition model, or may train a large-scale image recognition model on the set of training images. The image recognition model training apparatus inputs each training image in the set into the large-scale image recognition model, which outputs the first training image feature corresponding to each training image.
203. Obtain, with a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels.
In this embodiment, the image recognition model training apparatus uses the trained large-scale image recognition model to supervise the classification training of the small-scale image recognition model to be trained. The image recognition model training apparatus inputs each training image in the set into the small-scale image recognition model to be trained, which outputs the second training image feature corresponding to each training image. Each second training image feature corresponds to a class weight vector, i.e. each training image has a class weight vector corresponding to its identity label, and the class weight vectors are in one-to-one correspondence with the identity labels.
204. Train the small-scale image recognition model to be trained according to the first training image feature, the second training image feature and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device.
In this embodiment, the image recognition model training apparatus uses a loss function over the first training image feature, the second training image feature and the class weight vector corresponding to each training image to train the small-scale image recognition model to be trained. The class weight vectors make the second training image features output by the small-scale image recognition model to be trained move closer to their identity labels; they can be regarded as classifier weights. The loss function is solved iteratively, and gradients computed from the loss function are used to update the small-scale image recognition model to be trained until convergence, giving the small-scale image recognition model.
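One way to realize this training procedure is a per-batch update that combines a margin-based classification loss with an L2 feature-mimicking term, as sketched below. The optimizer choice, the loss weights λ1 and λ2 and the mean-squared-error form of the mimicking term are illustrative assumptions, and `margin_loss` stands for a classification loss such as the large-margin cosine loss sketched earlier.

```python
import torch
import torch.nn.functional as F

def train_step(small_model, large_model, images, labels, margin_loss, optimizer,
               lambda1=1.0, lambda2=1.0):
    """One update of the small model: classification loss plus L2 feature mimicking.
    The optimizer is assumed to cover both the small model's parameters and the
    class weight vectors held inside margin_loss."""
    with torch.no_grad():                      # the large model is fixed and only supplies targets
        target_features = large_model(images)  # first training image features
    student_features = small_model(images)     # second training image features

    loss_cls = margin_loss(student_features, labels)            # L_LCML term
    loss_mimic = F.mse_loss(student_features, target_features)  # L_L2 feature-mimicking term
    loss = lambda1 * loss_cls + lambda2 * loss_mimic            # L = λ1·L_LCML + λ2·L_L2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```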
In the embodiments of the present application, a method for training an image recognition model is provided: a set of training images is first obtained; a first training image feature corresponding to each training image is then obtained with the large-scale image recognition model, and a second training image feature corresponding to each training image is obtained with the small-scale image recognition model to be trained; finally, the small-scale image recognition model to be trained is trained according to the first training image feature, the second training image feature and the class weight vector corresponding to each training image, giving the small-scale image recognition model. In this way, a model training method based on asymmetric features is designed: in the training stage, the image features extracted by the large-scale image recognition model serve as labels that guide and supervise the training of the small-scale image recognition model, so that the small-scale image recognition model can map an input image into the feature space occupied by the large-scale image recognition model.
Optionally, on the basis of the embodiment corresponding to Fig. 5, in a first alternative embodiment of the image recognition model training method provided by the embodiments of the present application, before the first training image feature corresponding to each training image is obtained with the large-scale image recognition model, the method may further include:
obtaining, with a large-scale image recognition model to be trained, a third training image feature corresponding to each training image, wherein each third training image feature corresponds to a class weight vector;
training the large-scale image recognition model to be trained with a classification loss function, according to the third training image feature and the class weight vector corresponding to each training image, to obtain the large-scale image recognition model.
This embodiment describes one way of training to obtain the large-scale image recognition model. Training the large-scale image recognition model requires a large number of training images and their corresponding identity labels (for example, 10,000 identity labels and the corresponding 10,000 training images). The goal of training is to minimize, for each image, the error between the output of the large-scale image recognition model and the true identity label.
For ease of introduction, refer to Fig. 6, which is a schematic training framework diagram of the large-scale image recognition model in this embodiment of the present application. As shown, a to-be-trained image in the to-be-trained image set is input into the to-be-trained large-scale image recognition model, which outputs the corresponding image feature; an identity label is then predicted from the image feature, and the classification loss function is used to learn from the predicted identity label and the true identity label of the to-be-trained image, that is, the classification loss function is used to minimize the distance between the predicted identity label and the true identity label. When the minimum distance is reached, the to-be-trained large-scale image recognition model converges, yielding the large-scale image recognition model.
Specifically, each to-be-trained image is input into the to-be-trained large-scale image recognition model to extract the corresponding third to-be-trained image feature; the corresponding class weight vector is applied to the third to-be-trained image feature to obtain the predicted identity label; the classification loss function is then used to compute the distance between the predicted identity label and the true identity label of the to-be-trained image, thereby obtaining the large-scale image recognition model.
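As an illustration of this step, the following is a minimal PyTorch-style sketch under simplifying assumptions: `LargeScaleNet` is a hypothetical placeholder backbone, and plain softmax cross-entropy stands in for the classification loss described here; the feature is mapped to identity logits through a class weight matrix with one weight vector per identity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical large-scale backbone; any CNN that returns a feature vector works here.
class LargeScaleNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        # third to-be-trained image feature
        return self.fc(self.conv(x).flatten(1))

num_ids, feat_dim = 10000, 512
teacher = LargeScaleNet(feat_dim)
class_weights = nn.Linear(feat_dim, num_ids, bias=False)   # one class weight vector per identity
optimizer = torch.optim.SGD(
    list(teacher.parameters()) + list(class_weights.parameters()), lr=0.1, momentum=0.9)

images = torch.randn(8, 3, 112, 112)        # a minibatch of to-be-trained images
labels = torch.randint(0, num_ids, (8,))    # their identity labels

features = teacher(images)                  # extract features
logits = class_weights(features)            # inner product with the class weight vectors
loss = F.cross_entropy(logits, labels)      # classification loss (softmax cross-entropy stand-in)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```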
It can be understood that the classification loss function used to train the large-scale image recognition model and the first loss function used to train the small-scale image recognition model must be the same class of loss function. Such loss functions include but are not limited to the large margin cosine loss (LCML) function, the cross-entropy (softmax) loss function, and the center loss function, which are not limited here.
The LCML loss function is a loss function for classification that improves on the traditional softmax activation: in the training stage the loss depends only on the cosine between the feature and the class weight, removing the influence of the Euclidean L2 norm (L2-Norm). A cosine margin is further introduced on top of the cosine loss, enlarging the decision margin between classes so that the inter-class differences of the features become larger and the intra-class differences become smaller.
Softmax loss is an activation-plus-loss formulation mainly used for classification. It consists of a linear inner product followed by an exponential normalization; its input is a feature vector and its output is a normalized value in [0, 1] that can be regarded as a posterior probability. When embedded in a CNN, the inner product of the classifier is implemented by a fully connected layer. The goal of softmax is to maximize the posterior probability of the ground-truth class of the feature. The loss function of softmax is the cross-entropy; with the classification target defined by softmax, the optimization of the model minimizes the softmax loss.
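A minimal numerical sketch of the softmax posterior and its cross-entropy loss as described above; the feature vector, the weight matrix, and the ground-truth index below are arbitrary illustrative values.

```python
import torch
import torch.nn.functional as F

f = torch.tensor([0.3, -1.2, 0.8])        # feature vector
W = torch.randn(5, 3)                     # 5 identities; the fully connected layer's weight
logits = W @ f                            # linear inner product
posterior = F.softmax(logits, dim=0)      # normalized to [0, 1], sums to 1 (posterior probabilities)
ground_truth = 2
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([ground_truth]))
# equivalently: loss = -torch.log(posterior[ground_truth])
print(posterior, loss)
```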
It can be understood that the large-scale image recognition model may be a CNN based on ResNet. A CNN is a directed acyclic network composed of convolutional layers, fully connected layers, pooling layers, and the like. Through multiple layers of convolution over the input image, the network obtains multi-level features; these features are linearly combined and nonlinearly mapped to achieve purposes such as image recognition, understanding, and classification. The network obtains the model output (a prediction or a feature) through forward propagation, and updates the model parameters through back-propagation (gradient descent) so as to minimize the loss function and optimize the model. A common optimization algorithm is stochastic gradient descent (SGD), which computes the gradient error on a randomly sampled minibatch and thereby iteratively optimizes the model. The parameter scale of a CNN model determines, to a certain extent, its fitting capability and computational efficiency: the larger the parameter scale, the stronger the fitting capability and the lower the computational efficiency. For ease of description, this application divides neural network models by parameter quantity into large-scale image recognition models (large models for short) and small-scale image recognition models (small models for short). Common large-scale image recognition models include but are not limited to ResNet and the Visual Geometry Group network (VGGNet); common small-scale image recognition models include but are not limited to MobileNet, ShuffleNet, and SqueezeNet. In addition, the large-scale image recognition model and the small-scale image recognition model may also be other types of CNN.
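A quick way to see this large/small distinction is to count trainable parameters of two representative backbones; the sketch below uses torchvision's stock ResNet-50 and MobileNetV2 definitions purely for illustration, not the exact models of this application.

```python
import torchvision.models as models

def param_count(model):
    # total number of trainable parameters
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

large = models.resnet50()        # a typical "large-scale" backbone
small = models.mobilenet_v2()    # a typical "small-scale" backbone

print(f"ResNet-50:   {param_count(large) / 1e6:.1f}M parameters")
print(f"MobileNetV2: {param_count(small) / 1e6:.1f}M parameters")
```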
Further, this embodiment of the present application provides a training method for the large-scale image recognition model: before the first to-be-trained image feature corresponding to each to-be-trained image is obtained, the third to-be-trained image feature corresponding to each to-be-trained image may be obtained through the to-be-trained large-scale image recognition model, where each third to-be-trained image feature corresponds to a class weight vector; then, according to the third to-be-trained image feature and the class weight vector corresponding to each to-be-trained image, the to-be-trained large-scale image recognition model is trained with the classification loss function to obtain the large-scale image recognition model. In this manner, the error between the output of the large-scale image recognition model for each to-be-trained image and its true class label is minimized; after iterative training, the learned large-scale image recognition model has strong identity discrimination power.
Optionally, on the basis of the embodiment corresponding to Fig. 5, in a second optional embodiment of the method for training an image recognition model provided in this embodiment of the present application, training the to-be-trained small-scale image recognition model according to the first to-be-trained image feature, the second to-be-trained image feature, and the class weight vector corresponding to each to-be-trained image, to obtain the small-scale image recognition model, may include:
determining a first loss function according to the second to-be-trained image feature corresponding to each to-be-trained image and the class weight vector corresponding to each to-be-trained image;
determining a second loss function according to the first to-be-trained image feature corresponding to each to-be-trained image and the second to-be-trained image feature corresponding to each to-be-trained image;
determining a target loss function according to the first loss function and the second loss function;
training the to-be-trained small-scale image recognition model with the target loss function, to obtain the small-scale image recognition model.
In this embodiment, the training method of the small-scale image recognition model is described. For ease of introduction, refer to Fig. 7, which is a schematic training framework diagram of the small-scale image recognition model in this embodiment of the present application. As shown, the to-be-trained image set is obtained first; the first image features are obtained after the to-be-trained images in the set are input into the large-scale image recognition model, and the second image features are obtained after the to-be-trained images in the set are input into the small-scale image recognition model. The first loss function, i.e. the classification loss function, is determined according to the second to-be-trained image feature and the class weight vector corresponding to each to-be-trained image. Meanwhile, the second loss function, i.e. the L2 loss function, is determined according to the first to-be-trained image feature and the second to-be-trained image feature corresponding to each to-be-trained image. Finally, the target loss function is generated by combining the first loss function and the second loss function, and the to-be-trained small-scale image recognition model is trained with the target loss function to obtain the small-scale image recognition model.
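Purely as an illustration of the data flow in Fig. 7, the sketch below trains a small (student) network against a frozen large (teacher) network; the `tiny_cnn` backbone, the use of `F.cross_entropy` in place of the LCML loss, and `F.mse_loss` in place of the L2 feature loss are simplifying assumptions, not the exact functions of this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(feat_dim):
    # stand-in backbone; in practice the teacher is e.g. ResNet and the student e.g. MobileNet
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, feat_dim))

feat_dim, num_ids = 256, 1000
teacher = tiny_cnn(feat_dim).eval()                      # large-scale model, already trained, frozen
student = tiny_cnn(feat_dim)                             # small-scale model to be trained
classifier = nn.Linear(feat_dim, num_ids, bias=False)    # class weight vectors

optimizer = torch.optim.SGD(
    list(student.parameters()) + list(classifier.parameters()), lr=0.01, momentum=0.9)
lambda1, lambda2 = 1.0, 1.0                              # weights of the two loss terms

for step in range(10):                                   # toy loop over random minibatches
    images = torch.randn(16, 3, 112, 112)
    labels = torch.randint(0, num_ids, (16,))

    with torch.no_grad():
        first_feat = teacher(images)                     # first to-be-trained image feature (teacher)
    second_feat = student(images)                        # second to-be-trained image feature (student)

    cls_loss = F.cross_entropy(classifier(second_feat), labels)   # classification term (LCML stand-in)
    feat_loss = F.mse_loss(second_feat, first_feat)                # feature-matching term (L2 stand-in)
    loss = lambda1 * cls_loss + lambda2 * feat_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```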
It can be understood that the small-scale image recognition model may be MobileNet, ShuffleNet, or SqueezeNet, and may also be another lightweight network, which is not limited here.
Refer to Table 1, which is a structural representation table of ResNet.
Table 1
A residual network is easier to optimize and can improve accuracy by adding considerable depth. Refer to Table 2, which is a structural representation table of MobileNet.
Table 2
The basic unit of MobileNet is the depthwise separable convolution, a structure previously used in the Inception models. A depthwise separable convolution is in fact a factorized convolution that can be decomposed into two smaller operations: a depthwise convolution and a pointwise convolution. A depthwise convolution differs from a standard convolution: a standard convolution applies each kernel across all input channels, whereas a depthwise convolution uses a different kernel for each input channel, that is, one kernel corresponds to one input channel, so the depthwise convolution operates at the level of individual channels. The pointwise convolution is an ordinary convolution, only with a 1 × 1 kernel. A depthwise separable convolution first applies the depthwise convolution to the input channels separately and then combines the outputs with the pointwise convolution; the overall effect approximates a standard convolution while greatly reducing the amount of computation and the number of model parameters.
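A minimal PyTorch sketch of the unit just described; the channel sizes and the batch-norm/ReLU placement are illustrative and are not the exact MobileNet block of the tables above.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depthwise: one 3x3 kernel per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # pointwise: ordinary 1x1 convolution that recombines the channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

block = DepthwiseSeparableConv(32, 64)
y = block(torch.randn(1, 32, 56, 56))   # -> shape (1, 64, 56, 56)
```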
Further, this embodiment of the present application provides a training method for the small-scale image recognition model: the first loss function is determined according to the second to-be-trained image feature and the class weight vector corresponding to each to-be-trained image; the second loss function is determined according to the first to-be-trained image feature and the second to-be-trained image feature corresponding to each to-be-trained image; the target loss function is determined from the first loss function and the second loss function; finally, the to-be-trained small-scale image recognition model is trained with the target loss function to obtain the small-scale image recognition model. In this manner, the trained large-scale image recognition model and a large number of images with identity labels jointly guide the small-scale image recognition model through classification and recognition training. The small-scale image recognition model learned in this way has strong identity discrimination power, and its features are highly comparable with the features learned by the large-scale image recognition model.
Optionally, on the basis of the second embodiment corresponding to Fig. 5, in a third optional embodiment of the method for training an image recognition model provided in this embodiment of the present application, determining the first loss function according to the second to-be-trained image feature corresponding to each to-be-trained image and the class weight vector corresponding to each to-be-trained image may include:
determining the first loss function in the following way:
L_LCML = -(1/N) Σ_{i=1}^{N} log( e^{s·(cos(W_i, F_S(I_i)) - m)} / ( e^{s·(cos(W_i, F_S(I_i)) - m)} + Σ_{j≠i} e^{s·cos(W_j, F_S(I_i))} ) )
s.t. ||F_S(I)|| = 1, ||W|| = 1;
where L_LCML denotes the first loss function, N denotes the total number of to-be-trained images in the to-be-trained image set, i denotes the i-th to-be-trained image in the to-be-trained image set, j denotes the j-th to-be-trained image in the to-be-trained image set, e denotes the natural base, cos(·,·) denotes the cosine of the angle between two vectors, s and m denote the hyperparameters of the first loss function, I_i denotes the i-th to-be-trained image, F_S(I_i) denotes the second to-be-trained image feature corresponding to the i-th to-be-trained image, W_i denotes the class weight vector corresponding to the i-th to-be-trained image, W_j denotes the class weight vector corresponding to the j-th to-be-trained image, W denotes a class weight vector, F_S(I) denotes a second to-be-trained image feature, s.t. means subject to, ||·|| denotes the modulus (norm) of a feature, and F_S(·) denotes feature extraction by the to-be-trained small-scale image recognition model.
This embodiment introduces the determination of the first loss function. The classification loss function used in training the large-scale image recognition model is consistent with the first loss function, so the content of the classification loss function is not repeated here. In the training stage, the loss function of the small-scale image recognition model includes the first loss function (the LCML loss function) and the second loss function (the L2 loss function), which minimizes the Euclidean distance between the model features. The LCML loss can be expressed as follows:
L_LCML = -(1/N) Σ_{i=1}^{N} log( e^{s·(cos(W_i, F_S(I_i)) - m)} / ( e^{s·(cos(W_i, F_S(I_i)) - m)} + Σ_{j≠i} e^{s·cos(W_j, F_S(I_i))} ) )
where the first loss function is additionally subject to the constraint:
s.t. ||F_S(I)|| = 1, ||W|| = 1;
where L_LCML, N, i, j, e, cos(·,·), s, m, I_i, F_S(I_i), W_i, W_j, W, F_S(I), s.t., ||·||, and F_S(·) have the same meanings as in the preceding definitions.
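The following is a minimal PyTorch sketch of an LCML-style loss under the constraints above (features and class weights L2-normalized, margin m subtracted from the target cosine, scale s applied); the hyperparameter values and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCMLLoss(nn.Module):
    """Large-margin cosine loss over normalized features and class weights."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))  # class weight vectors W
        self.s, self.m = s, m

    def forward(self, features, labels):
        # enforce ||F_S(I)|| = 1 and ||W|| = 1
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        cosine = f @ w.t()                              # cos(W_j, F_S(I_i)) for every class j
        margin = torch.zeros_like(cosine)
        margin.scatter_(1, labels.view(-1, 1), self.m)  # subtract m only at the true class
        logits = self.s * (cosine - margin)
        return F.cross_entropy(logits, labels)          # -log of the normalized target term

criterion = LCMLLoss(feat_dim=256, num_classes=1000)
feats = torch.randn(16, 256)
labels = torch.randint(0, 1000, (16,))
loss = criterion(feats, labels)
```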
Further, in this embodiment of the present application, a method for determining the first loss function is provided: the first loss function is determined according to the second to-be-trained image feature corresponding to each to-be-trained image and the class weight vector corresponding to each to-be-trained image. In this manner, a concrete basis is provided for implementing the solution, improving its feasibility and operability.
Optionally, on the basis of the second embodiment corresponding to Fig. 5, in a fourth optional embodiment of the method for training an image recognition model provided in this embodiment of the present application, determining the second loss function according to the first to-be-trained image feature corresponding to each to-be-trained image and the second to-be-trained image feature corresponding to each to-be-trained image may include:
determining the second loss function in the following way:
L_L2 = (1/N) Σ_{i=1}^{N} ||F_S(I_i) - F_B(I_i)||_2²
where L_L2 denotes the second loss function, N denotes the total number of to-be-trained images in the to-be-trained image set, i denotes the i-th to-be-trained image in the to-be-trained image set, I_i denotes the i-th to-be-trained image, F_S(I_i) denotes the second to-be-trained image feature corresponding to the i-th to-be-trained image, F_B(I_i) denotes the first to-be-trained image feature corresponding to the i-th to-be-trained image, ||·||_2 denotes the L2 norm of a vector, F_S(·) denotes feature extraction by the to-be-trained small-scale image recognition model, and F_B(·) denotes feature extraction by the large-scale image recognition model.
This embodiment describes a method for determining the second loss function. Specifically, the L2 loss function may be used for the calculation: after the first to-be-trained image features and the second to-be-trained image features of the N to-be-trained images are obtained, the following second loss function is obtained:
L_L2 = (1/N) Σ_{i=1}^{N} ||F_S(I_i) - F_B(I_i)||_2²
where F_S(I_i) denotes the second to-be-trained image feature corresponding to the i-th to-be-trained image, F_B(I_i) denotes the first to-be-trained image feature corresponding to the i-th to-be-trained image, and L_L2 denotes the second loss function.
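A minimal sketch of this feature-matching term, assuming a per-sample squared L2 distance averaged over the batch; this mean-squared formulation is one common convention, not necessarily the exact normalization of this application.

```python
import torch

def feature_l2_loss(student_feat, teacher_feat):
    # mean over the batch of squared L2 distances between the two feature sets
    return (student_feat - teacher_feat).pow(2).sum(dim=1).mean()

F_S = torch.randn(16, 256)   # second to-be-trained image features (small-scale model)
F_B = torch.randn(16, 256)   # first to-be-trained image features (large-scale model)
loss = feature_l2_loss(F_S, F_B)
```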
The L2 norm is a norm that measures Euclidean distance. In regression, it is widely used to address the over-fitting problem in machine learning.
It can be understood that this application performs model training in the manner of model distillation (Model Distill), where the distillation here is based on a contrastive-style loss (Contrastive Loss) over the L2 distance. It can be understood that the distillation method may also be any other model distillation method, which is not limited here.
Model distillation is a general term for model compression strategies. A CNN is composed of a large number of model parameters; in the training stage these parameters are updated under the guidance of an objective function (Object Function) so that the model fits an effective solution to the target problem. The larger the parameter scale, the more complex the solutions that can be fitted and the higher the accuracy in practical applications. However, a larger parameter scale also slows down model computation, making the model unsuitable for terminal systems with limited computing capability. Model distillation uses a high-performance but inefficient large model, in an offline training stage, to guide the small model through the learning process of the target task, so that the small model attains recognition performance as consistent as possible with that of the large model without losing the efficiency advantage of the small model.
Further, in this embodiment of the present application, a method for determining the second loss function is provided: the L2 loss function is used to compare the image feature extracted by the large-scale image recognition model with the image feature extracted by the small-scale image recognition model, and the distance between the features extracted by the two image recognition models is minimized, so that the image feature extracted by the small-scale image recognition model and the image feature extracted by the large-scale image recognition model lie in the same feature space.
Optionally, on the basis of any one of the second to fourth embodiments corresponding to Fig. 5, in a fifth optional embodiment of the method for training an image recognition model provided in this embodiment of the present application, determining the target loss function according to the first loss function and the second loss function may include:
determining the target loss function in the following way:
L = λ1·L_LCML + λ2·L_L2;
where L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, L_LCML denotes the first loss function, and L_L2 denotes the second loss function.
This embodiment provides a method for determining the target loss function. After the first loss function and the second loss function are determined, they are combined to generate the target loss function; to control the relative importance of the two loss functions, weight parameters are also set. Specifically, the target loss function can be expressed as:
L = λ1·L_LCML + λ2·L_L2;
where L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, L_LCML denotes the first loss function, and L_L2 denotes the second loss function.
Model convergence sometimes cannot reach the minimum value; for example, the L2 loss function and the LCML loss function may trade off against each other, one decreasing while the other increases. Therefore, when neither loss function changes significantly (whether downward or upward) within a certain number of iteration cycles, the convergence of the model is considered complete.
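A minimal sketch of combining the two terms and of the plateau-style convergence check just described; the window length, the tolerance, and the equal weights below are illustrative choices, not values from this application.

```python
lambda1, lambda2 = 1.0, 1.0

def target_loss(lcml_loss, l2_loss):
    # L = lambda1 * L_LCML + lambda2 * L_L2
    return lambda1 * lcml_loss + lambda2 * l2_loss

def has_converged(lcml_history, l2_history, window=10, tol=1e-3):
    """Consider training converged when neither loss has changed
    significantly (up or down) over the last `window` iterations."""
    if len(lcml_history) < window or len(l2_history) < window:
        return False
    spread = lambda h: max(h[-window:]) - min(h[-window:])
    return spread(lcml_history) < tol and spread(l2_history) < tol

# inside a training loop one would append float(lcml_loss) and float(l2_loss)
# to two history lists and stop once has_converged(...) returns True
```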
Further, this embodiment of the present application provides a method for determining the target loss function according to the first loss function and the second loss function. By controlling the weight parameters of the loss functions, the training of the small-scale image recognition model can be guided effectively, improving the flexibility and feasibility of the solution; at the same time, the image features extracted by the small-scale image recognition model not only have strong discriminative ability but also lie in the same feature space as the image features extracted by the large-scale image recognition model.
The image recognition apparatus in this application is described in detail below. Refer to Fig. 8, which is a schematic diagram of an embodiment of the image recognition apparatus in this embodiment of the present application. The image recognition apparatus 30 includes:
an obtaining module 301, configured to obtain a to-be-recognized image;
the obtaining module 301 being further configured to obtain a first image feature of the to-be-recognized image through a small-scale image recognition model, where the small-scale image recognition model is deployed on a terminal device;
a determining module 302, configured to determine, according to the first image feature obtained by the obtaining module 301 and N second image features, the image similarity between the first image feature and the second image feature, where a second image feature is an image feature obtained for a to-be-matched image through a large-scale image recognition model, the model parameter quantity of the large-scale image recognition model is greater than the model parameter quantity of the small-scale image recognition model, and N is an integer greater than or equal to 1;
the determining module 302 being further configured to determine an image recognition result of the to-be-recognized image according to the image similarity.
In this embodiment, the obtaining module 301 obtains a to-be-recognized image and obtains a first image feature of the to-be-recognized image through the small-scale image recognition model, where the small-scale image recognition model is deployed on a terminal device; the determining module 302 determines, according to the first image feature obtained by the obtaining module 301 and N second image features, the image similarity between the first image feature and the second image feature, where a second image feature is an image feature obtained for a to-be-matched image through the large-scale image recognition model, the model parameter quantity of the large-scale image recognition model is greater than the model parameter quantity of the small-scale image recognition model, and N is an integer greater than or equal to 1; the determining module 302 then determines the image recognition result of the to-be-recognized image according to the image similarity.
In this embodiment of the present application, a method of image recognition is provided: a to-be-recognized image is obtained first; a first image feature of the to-be-recognized image is then obtained through the small-scale image recognition model, where the small-scale image recognition model is deployed on the terminal device; next, the image similarity between the first image feature and each second image feature is determined according to the first image feature and N second image features, where a second image feature is obtained for a to-be-matched image through the large-scale image recognition model and N is an integer greater than or equal to 1; finally, the image recognition result of the to-be-recognized image is determined according to the image similarity. In this manner, the image features of all images in the database are extracted in advance on the server with the large-scale image recognition model, while the image feature of the to-be-recognized image is extracted on the terminal device with the small-scale image recognition model: high-quality image features are obtained with the large-scale image recognition model and efficient computation is performed with the small-scale image recognition model, so that the recognition accuracy of the small-scale image recognition model is improved on the premise of guaranteed operating efficiency.
Optionally, on the basis of the embodiment corresponding to Fig. 8, in another embodiment of the image recognition apparatus 30 provided in this embodiment of the present application,
the determining module 302 is specifically configured to: if N is equal to 1, calculate the image similarity according to the first image feature and the second image feature;
and if the image similarity reaches a similarity threshold, determine that the to-be-recognized image and the to-be-matched image have the same identity label.
Further, this embodiment of the present application provides a method of image comparison: if N is equal to 1, the image similarity is calculated according to the first image feature and the second image feature; if the image similarity reaches the similarity threshold, it is determined that the to-be-recognized image and the to-be-matched image have the same identity label. In this manner, in some image verification scenarios, the facial image features of all known identities in the database can be extracted in advance with the large-scale image recognition model; when face verification is needed, the small-scale image recognition model computes the corresponding facial image feature in real time, and 1-to-1 face verification is performed on the to-be-recognized facial image. Face similarity is thus computed with asymmetric features, and the recognition accuracy of the algorithm is improved on the premise of guaranteed algorithm efficiency.
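A minimal sketch of this 1-to-1 verification flow; the cosine similarity, the 0.5 threshold, and the random feature placeholders are illustrative assumptions rather than values from this application.

```python
import torch
import torch.nn.functional as F

def cosine_similarity(feat_a, feat_b):
    # similarity between two feature vectors
    return float(F.cosine_similarity(feat_a.unsqueeze(0), feat_b.unsqueeze(0)))

def verify(first_feature, second_feature, threshold=0.5):
    """1:1 verification: same identity iff the similarity reaches the threshold."""
    return cosine_similarity(first_feature, second_feature) >= threshold

# first_feature: to-be-recognized image through the small-scale model (on the device)
# second_feature: to-be-matched image through the large-scale model (precomputed on the server)
first_feature = torch.randn(256)
second_feature = torch.randn(256)
print(verify(first_feature, second_feature))
```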
Optionally, on the basis of the embodiment corresponding to Fig. 8, in another embodiment of the image recognition apparatus 30 provided in this embodiment of the present application,
the determining module 302 is specifically configured to: if N is greater than 1, calculate N image similarities according to the first image feature and each second image feature;
determine, from the N image similarities, the to-be-matched image corresponding to a target image similarity, where the target image similarity is the maximum value among the N image similarities;
and determine that the to-be-recognized image and the to-be-matched image corresponding to the target image similarity have the same identity label.
Further, this embodiment of the present application provides a method of image retrieval: if N is greater than 1, N image similarities are calculated according to the first image feature and each second image feature; the to-be-matched image corresponding to the target image similarity, i.e. the maximum value among the N image similarities, is then determined; finally, it is determined that the to-be-recognized image and the to-be-matched image corresponding to the target image similarity have the same identity label. In this manner, in some image retrieval scenarios, the facial image features of all known identities in the database can be extracted in advance with the large-scale image recognition model; when face recognition is needed, the small-scale image recognition model computes the corresponding facial image feature in real time, and 1-to-N face recognition is performed on the to-be-recognized facial image. Face similarity is thus computed with asymmetric features, and the recognition accuracy of the algorithm is improved on the premise of guaranteed algorithm efficiency.
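A minimal sketch of this 1-to-N retrieval step, assuming the N gallery features were precomputed by the large-scale model and stacked into a matrix; the data below are random placeholders used only for illustration.

```python
import torch
import torch.nn.functional as F

def retrieve(first_feature, gallery_features):
    """Return (best_index, best_similarity) over N precomputed gallery features."""
    probe = F.normalize(first_feature, dim=0)
    gallery = F.normalize(gallery_features, dim=1)
    similarities = gallery @ probe          # N cosine similarities
    best = int(torch.argmax(similarities))  # target image similarity = maximum value
    return best, float(similarities[best])

first_feature = torch.randn(256)            # to-be-recognized image, small-scale model
gallery_features = torch.randn(1000, 256)   # N to-be-matched images, large-scale model
index, score = retrieve(first_feature, gallery_features)
print(index, score)
```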
Optionally, on the basis of the embodiment corresponding to Fig. 8, in another embodiment of the image recognition apparatus 30 provided in this embodiment of the present application,
the determining module 302 is specifically configured to calculate the image similarity in the following way:
S(I_p, I_g) = ( F_S(I_p) · F_B(I_g) ) / ( ||F_S(I_p)|| · ||F_B(I_g)|| )
where S(I_p, I_g) denotes the image similarity between the to-be-recognized image and the to-be-matched image, I_p denotes the to-be-recognized image, I_g denotes the to-be-matched image, F_S(I_p) denotes the first image feature, F_B(I_g) denotes the second image feature, and ||·|| denotes the modulus (norm) of a feature.
Further, in this embodiment of the present application, a method for obtaining the image similarity from the first image feature and the second image feature is provided. In this manner, the image similarity between image features can be calculated accurately, providing a reliable basis for subsequent recognition and improving the feasibility and operability of the solution.
The image recognition model training apparatus in this application is described in detail below. Refer to Fig. 9, which is a schematic diagram of an embodiment of the image recognition model training apparatus in this embodiment of the present application. The image recognition model training apparatus 40 includes:
an obtaining module 401, configured to obtain a to-be-trained image set, where the to-be-trained image set includes at least one to-be-trained image and each to-be-trained image corresponds to an identity label;
the obtaining module 401 being further configured to obtain, through a large-scale image recognition model, a first to-be-trained image feature corresponding to each to-be-trained image;
the obtaining module 401 being further configured to obtain, through a to-be-trained small-scale image recognition model, a second to-be-trained image feature corresponding to each to-be-trained image, where each second to-be-trained image feature corresponds to a class weight vector and the class weight vectors are in one-to-one correspondence with the identity labels;
a training module 402, configured to train the to-be-trained small-scale image recognition model according to the first to-be-trained image feature, the second to-be-trained image feature, and the class weight vector corresponding to each to-be-trained image obtained by the obtaining module, to obtain a small-scale image recognition model, where the small-scale image recognition model is deployed on a terminal device and the model parameter quantity of the large-scale image recognition model is greater than the model parameter quantity of the small-scale image recognition model.
In this embodiment, the obtaining module 401 obtains a to-be-trained image set, where the to-be-trained image set includes at least one to-be-trained image and each to-be-trained image corresponds to an identity label; the obtaining module 401 obtains, through the large-scale image recognition model, the first to-be-trained image feature corresponding to each to-be-trained image, and obtains, through the to-be-trained small-scale image recognition model, the second to-be-trained image feature corresponding to each to-be-trained image, where each second to-be-trained image feature corresponds to a class weight vector and the class weight vectors are in one-to-one correspondence with the identity labels; the training module 402 trains the to-be-trained small-scale image recognition model according to the first to-be-trained image feature, the second to-be-trained image feature, and the class weight vector corresponding to each to-be-trained image obtained by the obtaining module, to obtain the small-scale image recognition model, where the small-scale image recognition model is deployed on a terminal device and the model parameter quantity of the large-scale image recognition model is greater than the model parameter quantity of the small-scale image recognition model.
In this embodiment of the present application, a method for training an image recognition model is provided: a to-be-trained image set is obtained first; the first to-be-trained image feature corresponding to each to-be-trained image is then obtained through the large-scale image recognition model, and the second to-be-trained image feature corresponding to each to-be-trained image is obtained through the to-be-trained small-scale image recognition model; finally, the to-be-trained small-scale image recognition model is trained according to the first to-be-trained image feature, the second to-be-trained image feature, and the class weight vector corresponding to each to-be-trained image, to obtain the small-scale image recognition model. In this manner, a model training method based on asymmetric features is designed: in the training stage, the image features extracted by the large-scale image recognition model serve as labels that guide and supervise the training of the small-scale image recognition model, so that the small-scale image recognition model can map an input image into the feature space of the large-scale image recognition model.
Optionally, on the basis of the embodiment corresponding to Fig. 9, in another embodiment of the image recognition model training apparatus 40 provided in this embodiment of the present application,
the obtaining module 401 is further configured to obtain, through a to-be-trained large-scale image recognition model, a third to-be-trained image feature corresponding to each to-be-trained image before the first to-be-trained image feature corresponding to each to-be-trained image is obtained through the large-scale image recognition model, where each third to-be-trained image feature corresponds to a class weight vector;
the training module 402 is further configured to train, with a classification loss function, the to-be-trained large-scale image recognition model according to the third to-be-trained image feature and the class weight vector corresponding to each to-be-trained image obtained by the obtaining module 401, to obtain the large-scale image recognition model.
Further, this embodiment of the present application provides a training method for the large-scale image recognition model: before the first to-be-trained image feature corresponding to each to-be-trained image is obtained, the third to-be-trained image feature corresponding to each to-be-trained image may be obtained through the to-be-trained large-scale image recognition model, where each third to-be-trained image feature corresponds to a class weight vector; then, according to the third to-be-trained image feature and the class weight vector corresponding to each to-be-trained image, the to-be-trained large-scale image recognition model is trained with the classification loss function to obtain the large-scale image recognition model. In this manner, the error between the output of the large-scale image recognition model for each to-be-trained image and its true class label is minimized; after iterative training, the learned large-scale image recognition model has strong identity discrimination power.
Optionally, on the basis of the embodiment corresponding to Fig. 9, in another embodiment of the image recognition model training apparatus 40 provided in this embodiment of the present application,
the training module 402 is specifically configured to determine a first loss function according to the second to-be-trained image feature corresponding to each to-be-trained image and the class weight vector corresponding to each to-be-trained image;
determine a second loss function according to the first to-be-trained image feature corresponding to each to-be-trained image and the second to-be-trained image feature corresponding to each to-be-trained image;
determine a target loss function according to the first loss function and the second loss function;
and train the to-be-trained small-scale image recognition model with the target loss function, to obtain the small-scale image recognition model.
Further, this embodiment of the present application provides a training method for the small-scale image recognition model: the first loss function is determined according to the second to-be-trained image feature and the class weight vector corresponding to each to-be-trained image; the second loss function is determined according to the first to-be-trained image feature and the second to-be-trained image feature corresponding to each to-be-trained image; the target loss function is determined from the first loss function and the second loss function; finally, the to-be-trained small-scale image recognition model is trained with the target loss function to obtain the small-scale image recognition model. In this manner, the trained large-scale image recognition model and a large number of images with identity labels jointly guide the small-scale image recognition model through classification and recognition training. The small-scale image recognition model learned in this way has strong identity discrimination power, and its features are highly comparable with the features learned by the large-scale image recognition model.
Optionally, on the basis of the embodiment corresponding to Fig. 9, in another embodiment of the image recognition model training apparatus 40 provided in this embodiment of the present application,
the training module 402 is specifically configured to determine the first loss function in the following way:
L_LCML = -(1/N) Σ_{i=1}^{N} log( e^{s·(cos(W_i, F_S(I_i)) - m)} / ( e^{s·(cos(W_i, F_S(I_i)) - m)} + Σ_{j≠i} e^{s·cos(W_j, F_S(I_i))} ) )
s.t. ||F_S(I)|| = 1, ||W|| = 1;
where L_LCML denotes the first loss function, N denotes the total number of to-be-trained images in the to-be-trained image set, i denotes the i-th to-be-trained image in the to-be-trained image set, j denotes the j-th to-be-trained image in the to-be-trained image set, e denotes the natural base, cos(·,·) denotes the cosine of the angle between two vectors, s and m denote the hyperparameters of the first loss function, I_i denotes the i-th to-be-trained image, F_S(I_i) denotes the second to-be-trained image feature corresponding to the i-th to-be-trained image, W_i denotes the class weight vector corresponding to the i-th to-be-trained image, W_j denotes the class weight vector corresponding to the j-th to-be-trained image, W denotes a class weight vector, F_S(I) denotes a second to-be-trained image feature, s.t. means subject to, ||·|| denotes the modulus (norm) of a feature, and F_S(·) denotes feature extraction by the to-be-trained small-scale image recognition model.
Further, in this embodiment of the present application, a method for determining the first loss function is provided: the first loss function is determined according to the second to-be-trained image feature corresponding to each to-be-trained image and the class weight vector corresponding to each to-be-trained image. In this manner, a concrete basis is provided for implementing the solution, improving its feasibility and operability.
Optionally, on the basis of the embodiment corresponding to Fig. 9, in another embodiment of the image recognition model training apparatus 40 provided in this embodiment of the present application,
the training module 402 is specifically configured to determine the second loss function in the following way:
L_L2 = (1/N) Σ_{i=1}^{N} ||F_S(I_i) - F_B(I_i)||_2²
where L_L2 denotes the second loss function, N denotes the total number of to-be-trained images in the to-be-trained image set, i denotes the i-th to-be-trained image in the to-be-trained image set, I_i denotes the i-th to-be-trained image, F_S(I_i) denotes the second to-be-trained image feature corresponding to the i-th to-be-trained image, F_B(I_i) denotes the first to-be-trained image feature corresponding to the i-th to-be-trained image, ||·||_2 denotes the L2 norm of a vector, F_S(·) denotes feature extraction by the to-be-trained small-scale image recognition model, and F_B(·) denotes feature extraction by the large-scale image recognition model.
Further, in this embodiment of the present application, a method for determining the second loss function is provided: the L2 loss function is used to compare the image feature extracted by the large-scale image recognition model with the image feature extracted by the small-scale image recognition model, and the distance between the features extracted by the two image recognition models is minimized, so that the image feature extracted by the small-scale image recognition model and the image feature extracted by the large-scale image recognition model lie in the same feature space.
Optionally, on the basis of the embodiment corresponding to Fig. 9, in another embodiment of the image recognition model training apparatus 40 provided in this embodiment of the present application,
the training module 402 is specifically configured to determine the target loss function in the following way:
L = λ1·L_LCML + λ2·L_L2;
where L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, L_LCML denotes the first loss function, and L_L2 denotes the second loss function.
Further, this embodiment of the present application provides a method for determining the target loss function according to the first loss function and the second loss function. By controlling the weight parameters of the loss functions, the training of the small-scale image recognition model can be guided effectively, improving the flexibility and feasibility of the solution; at the same time, the image features extracted by the small-scale image recognition model not only have strong discriminative ability but also lie in the same feature space as the image features extracted by the large-scale image recognition model.
An embodiment of the present application further provides another image recognition apparatus, as shown in Figure 10. For ease of description, only the parts relevant to this embodiment of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, an in-vehicle computer, and the like; the following description takes a mobile phone as an example of the terminal device:
Figure 10 is a block diagram of a partial structure of a mobile phone related to the terminal device provided in this embodiment of the present application. Referring to Figure 10, the mobile phone includes components such as a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (WiFi) module 570, a processor 580, and a power supply 590. Those skilled in the art will understand that the mobile phone structure shown in Figure 10 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
Each component of the mobile phone is introduced below with reference to Figure 10:
The RF circuit 510 may be configured to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 580 for processing, and it sends uplink data to the base station. In general, the RF circuit 510 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 520 may be configured to store software programs and modules; the processor 580 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playing function and an image playing function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 520 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 530 may be configured to receive input digit or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, collects touch operations of the user on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 531) and drives the corresponding connected apparatus according to a preset program. Optionally, the touch panel 531 may include a touch detection apparatus and a touch controller: the touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into touch point coordinates, sends the coordinates to the processor 580, and can receive and execute commands sent by the processor 580. Furthermore, the touch panel 531 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532, which may include but are not limited to one or more of a physical keyboard, function keys (such as a volume control key and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be configured to display information input by the user or information provided to the user and the various menus of the mobile phone. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 531 may cover the display panel 541; after detecting a touch operation on or near it, the touch panel 531 transmits the operation to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although the touch panel 531 and the display panel 541 are shown in Figure 10 as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 550, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that identify the mobile phone posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and in vibration-identification related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the mobile phone; details are not described here.
The audio circuit 560, a speaker 561, and a microphone 562 may provide an audio interface between the user and the mobile phone. The audio circuit 560 may transmit an electrical signal, converted from received audio data, to the speaker 561, which converts it into a sound signal for output; on the other hand, the microphone 562 converts a collected sound signal into an electrical signal, which the audio circuit 560 receives and converts into audio data; after the audio data is output to the processor 580 for processing, it is sent through the RF circuit 510 to, for example, another mobile phone, or output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 570, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and the like; it provides the user with wireless broadband Internet access. Although Figure 10 shows the WiFi module 570, it can be understood that it is not a necessary component of the mobile phone and may be omitted as needed without changing the essence of the invention.
The processor 580 is the control center of the mobile phone. It connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile phone as a whole. Optionally, the processor 580 may include one or more processing units; optionally, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 580.
The mobile phone also includes a power supply 590 (such as a battery) that supplies power to the components. Optionally, the power supply may be logically connected to the processor 580 through a power management system, so as to implement functions such as charge management, discharge management, and power consumption management through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module, and the like; details are not described here.
In this embodiment of the present application, the processor 580 included in the terminal device also has the following functions:
obtaining a to-be-recognized image;
obtaining a first image feature of the to-be-recognized image through a small-scale image recognition model, where the small-scale image recognition model is deployed on a terminal device;
determining, according to the first image feature and N second image features, the image similarity between the first image feature and the second image feature, where a second image feature is an image feature obtained for a to-be-matched image through a large-scale image recognition model, the model parameter quantity of the large-scale image recognition model is greater than the model parameter quantity of the small-scale image recognition model, and N is an integer greater than or equal to 1;
determining an image recognition result of the to-be-recognized image according to the image similarity.
Optionally, the processor 580 is specifically configured to perform the following steps:
if N is equal to 1, calculating the image similarity according to the first image feature and the second image feature;
if the image similarity reaches a similarity threshold, determining that the to-be-recognized image and the to-be-matched image have the same identity label.
Optionally, the processor 580 is specifically configured to perform the following steps:
if N is greater than 1, calculating N image similarities according to the first image feature and each second image feature;
determining, from the N image similarities, the to-be-matched image corresponding to a target image similarity, where the target image similarity is the maximum value among the N image similarities;
determining that the to-be-recognized image and the to-be-matched image corresponding to the target image similarity have the same identity label.
Optionally, the processor 580 is specifically configured to perform the following step:
calculating the image similarity in the following way:
S(I_p, I_g) = ( F_S(I_p) · F_B(I_g) ) / ( ||F_S(I_p)|| · ||F_B(I_g)|| )
where S(I_p, I_g) denotes the image similarity between the to-be-recognized image and the to-be-matched image, I_p denotes the to-be-recognized image, I_g denotes the to-be-matched image, F_S(I_p) denotes the first image feature, F_B(I_g) denotes the second image feature, and ||·|| denotes the modulus (norm) of a feature.
Figure 11 is a schematic structural diagram of a server provided in this embodiment of the present application. The server 600 may differ considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 622 (for example, one or more processors), a memory 632, and one or more storage media 630 (for example, one or more mass storage devices) storing application programs 642 or data 644. The memory 632 and the storage medium 630 may provide transient storage or persistent storage. The programs stored in the storage medium 630 may include one or more modules (not marked in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 622 may be configured to communicate with the storage medium 630 and execute, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input/output interfaces 658, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, and FreeBSD™.
The steps performed by the server in the foregoing embodiments may be based on the server structure shown in Figure 11.
In the embodiment of the present application, CPU 622 included by the server is also with the following functions:
Obtain a set of training images, wherein the set of training images contains at least one training image, and each training image corresponds to an identity label;
Obtain, by a large-scale image recognition model, a first training image feature corresponding to each training image;
Obtain, by a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
Train the small-scale image recognition model to be trained according to the first training image feature, the second training image feature, and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device, and the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model.
Optionally, the processor 580 is also configured to perform the following steps:
Obtain, by a large-scale image recognition model to be trained, a third training image feature corresponding to each training image, wherein each third training image feature corresponds to a class weight vector;
Train the large-scale image recognition model to be trained with a classification loss function according to the third training image feature and the class weight vector corresponding to each training image, to obtain the large-scale image recognition model.
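As an illustration, pre-training the large-scale model with a classification loss might look like the sketch below; the choice of a plain softmax cross-entropy head, the optimizer settings, and all names are assumptions:

```python
# Illustrative pre-training of the large-scale ("teacher") model with a
# classification loss over identity labels; assumptions only, not the patent's exact loss.
import torch
import torch.nn as nn

def pretrain_teacher(large_model, loader, num_identities, feat_dim, epochs=1, lr=1e-3):
    classifier = nn.Linear(feat_dim, num_identities)   # rows act as class weight vectors
    criterion = nn.CrossEntropyLoss()
    params = list(large_model.parameters()) + list(classifier.parameters())
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    large_model.train()
    for _ in range(epochs):
        for images, labels in loader:                    # labels are identity indices
            logits = classifier(large_model(images))     # third training image features -> logits
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return large_model
```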
Optionally, the processor 580 is specifically configured to perform the following steps:
Determine a first loss function according to the second training image feature and the class weight vector corresponding to each training image;
Determine a second loss function according to the first training image feature and the second training image feature corresponding to each training image;
Determine a target loss function according to the first loss function and the second loss function;
Train the small-scale image recognition model to be trained with the target loss function to obtain the small-scale image recognition model.
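A single training step combining these losses might look like the following sketch, where the first loss is taken to be a large-margin cosine classification loss on the student features and the second loss an L2 feature-matching loss against the frozen large-scale model; the hyperparameter values, the PyTorch framework, and the function names are assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_step(small_model, large_model, class_weights, images, labels,
                      optimizer, s=30.0, m=0.35, lam1=1.0, lam2=1.0):
    """One illustrative training step for the small-scale model.
    class_weights: [C, D] parameter whose rows are the class weight vectors
    (one per identity label); it should be registered with the optimizer.
    Assumes both models output features of the same dimension D."""
    large_model.eval()
    with torch.no_grad():
        f_teacher = large_model(images)                  # first training image features
    f_student = small_model(images)                      # second training image features

    # First loss: large-margin cosine classification of the student features.
    w = F.normalize(class_weights, dim=1)
    f_n = F.normalize(f_student, dim=1)
    cos = f_n @ w.t()                                    # [B, C] cosine similarities
    one_hot = F.one_hot(labels, num_classes=cos.size(1)).float()
    loss1 = F.cross_entropy(s * (cos - one_hot * m), labels)

    # Second loss: mean L2 distance between student and teacher features.
    loss2 = torch.norm(f_student - f_teacher, p=2, dim=1).mean()

    # Target loss: weighted sum of the two losses.
    loss = lam1 * loss1 + lam2 * loss2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```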
Optionally, the processor 580 is specifically configured to perform the following steps:
Determine the first loss function in the following way:
s.t. ||F_S(I)|| = 1, ||W|| = 1;
Wherein, L_LCML denotes the first loss function, N denotes the total number of training images in the set, i denotes the i-th training image in the set, j denotes the j-th training image in the set, e denotes the natural base, cos(·,·) denotes the cosine of the angle between two vectors, s and m denote hyperparameters of the first loss function, I_i denotes the i-th training image, F_S(I_i) denotes the second training image feature corresponding to the i-th training image, W_i denotes the class weight vector corresponding to the i-th training image, W_j denotes the class weight vector corresponding to the j-th training image, W denotes a class weight vector, F_S(I) denotes a second training image feature, s.t. means "subject to", ||·|| denotes the norm of a feature vector, and F_S(·) denotes feature extraction by the small-scale image recognition model to be trained.
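The first loss function itself is not reproduced in this text; a plausible reconstruction from the symbol definitions above, following the large-margin cosine loss of the cited CosFace paper, is:

L_{LCML} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{\, s\,(\cos(W_i, F_S(I_i)) - m)}}{e^{\, s\,(\cos(W_i, F_S(I_i)) - m)} + \sum_{j \neq i} e^{\, s\,\cos(W_j, F_S(I_i))}}, \quad \text{s.t. } \|F_S(I)\| = 1,\ \|W\| = 1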
Optionally, the processor 580 is specifically configured to perform the following steps:
Determine the second loss function in the following way:
Wherein, L_L2 denotes the second loss function, N denotes the total number of training images in the set, i denotes the i-th training image in the set, I_i denotes the i-th training image, F_S(I_i) denotes the second training image feature corresponding to the i-th training image, F_B(I_i) denotes the first training image feature corresponding to the i-th training image, ||·||_2 denotes the L2 norm of a vector, F_S(·) denotes feature extraction by the small-scale image recognition model to be trained, and F_B(·) denotes feature extraction by the large-scale image recognition model.
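The second loss function is likewise not reproduced; a plausible reconstruction from the symbol definitions is a mean L2 distance between the two features (whether the norm is squared cannot be recovered from this text):

L_{L2} = \frac{1}{N} \sum_{i=1}^{N} \left\| F_S(I_i) - F_B(I_i) \right\|_2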
Optionally, the processor 580 is specifically configured to perform the following steps:
Determine the target loss function in the following way:
L = λ1·L_LCML + λ2·L_L2;
Wherein, L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, L_LCML denotes the first loss function, and L_L2 denotes the second loss function.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (15)

1. A method of image recognition, characterized by comprising:
Obtaining an image to be recognized;
Obtaining a first image feature of the image to be recognized by a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device;
Determining, according to the first image feature and N second image features, an image similarity between the first image feature and the second image feature, wherein the second image features are image features obtained from images to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model, and N is an integer greater than or equal to 1;
Determining an image recognition result of the image to be recognized according to the image similarity.
2. The method according to claim 1, wherein determining the image similarity between the first image feature and the second image feature according to the first image feature and the N second image features comprises:
If N is equal to 1, calculating the image similarity according to the first image feature and the second image feature;
and wherein determining the image recognition result of the image to be recognized according to the image similarity comprises:
If the image similarity reaches a similarity threshold, determining that the image to be recognized and the image to be matched have the same identity label.
3. The method according to claim 1, wherein determining the image similarity between the first image feature and the second image feature according to the first image feature and the N second image features comprises:
If N is greater than 1, calculating N image similarities according to the first image feature and each second image feature;
and wherein determining the image recognition result of the image to be recognized according to the image similarity comprises:
Determining, from the N image similarities, the image to be matched corresponding to a target image similarity, wherein the target image similarity is the maximum value among the N image similarities;
Determining that the image to be recognized and the image to be matched corresponding to the target image similarity have the same identity label.
4. The method according to claim 2, wherein calculating the image similarity according to the first image feature and the second image feature comprises:
Calculating the image similarity in the following way:
Wherein, S(I_p, I_g) denotes the image similarity between the image to be recognized and the image to be matched, I_p denotes the image to be recognized, I_g denotes the image to be matched, F_S(I_p) denotes the first image feature, F_B(I_g) denotes the second image feature, and ||·|| denotes the norm of a feature vector.
5. A method of image recognition model training, characterized by comprising:
Obtaining a set of training images, wherein the set of training images contains at least one training image, and each training image corresponds to an identity label;
Obtaining, by a large-scale image recognition model, a first training image feature corresponding to each training image;
Obtaining, by a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
Training the small-scale image recognition model to be trained according to the first training image feature, the second training image feature, and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device, and the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model.
6. The method according to claim 5, wherein before obtaining, by the large-scale image recognition model, the first training image feature corresponding to each training image, the method further comprises:
Obtaining, by a large-scale image recognition model to be trained, a third training image feature corresponding to each training image, wherein each third training image feature corresponds to a class weight vector;
Training the large-scale image recognition model to be trained with a classification loss function according to the third training image feature and the class weight vector corresponding to each training image, to obtain the large-scale image recognition model.
7. The method according to claim 5, wherein training the small-scale image recognition model to be trained according to the first training image feature, the second training image feature, and the class weight vector corresponding to each training image, to obtain the small-scale image recognition model, comprises:
Determining a first loss function according to the second training image feature and the class weight vector corresponding to each training image;
Determining a second loss function according to the first training image feature and the second training image feature corresponding to each training image;
Determining a target loss function according to the first loss function and the second loss function;
Training the small-scale image recognition model to be trained with the target loss function to obtain the small-scale image recognition model.
8. The method according to claim 7, wherein determining the first loss function according to the second training image feature and the class weight vector corresponding to each training image comprises:
Determining the first loss function in the following way:
s.t. ||F_S(I)|| = 1, ||W|| = 1;
Wherein, L_LCML denotes the first loss function, N denotes the total number of training images in the set, i denotes the i-th training image in the set, j denotes the j-th training image in the set, e denotes the natural base, cos(·,·) denotes the cosine of the angle between two vectors, s and m denote hyperparameters of the first loss function, I_i denotes the i-th training image, F_S(I_i) denotes the second training image feature corresponding to the i-th training image, W_i denotes the class weight vector corresponding to the i-th training image, W_j denotes the class weight vector corresponding to the j-th training image, W denotes a class weight vector, F_S(I) denotes a second training image feature, s.t. means "subject to", ||·|| denotes the norm of a feature vector, and F_S(·) denotes feature extraction by the small-scale image recognition model to be trained.
9. The method according to claim 7, wherein determining the second loss function according to the first training image feature and the second training image feature corresponding to each training image comprises:
Determining the second loss function in the following way:
Wherein, L_L2 denotes the second loss function, N denotes the total number of training images in the set, i denotes the i-th training image in the set, I_i denotes the i-th training image, F_S(I_i) denotes the second training image feature corresponding to the i-th training image, F_B(I_i) denotes the first training image feature corresponding to the i-th training image, ||·||_2 denotes the L2 norm of a vector, F_S(·) denotes feature extraction by the small-scale image recognition model to be trained, and F_B(·) denotes feature extraction by the large-scale image recognition model.
10. The method according to any one of claims 7 to 9, wherein determining the target loss function according to the first loss function and the second loss function comprises:
Determining the target loss function in the following way:
L = λ1·L_LCML + λ2·L_L2;
Wherein, L denotes the target loss function, λ1 denotes the weight parameter of the first loss function, λ2 denotes the weight parameter of the second loss function, L_LCML denotes the first loss function, and L_L2 denotes the second loss function.
11. An image recognition apparatus, characterized by comprising:
An obtaining module, configured to obtain an image to be recognized;
The obtaining module being further configured to obtain a first image feature of the image to be recognized by a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device;
A determining module, configured to determine, according to the first image feature obtained by the obtaining module and N second image features, an image similarity between the first image feature and the second image feature, wherein the second image features are image features obtained from images to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model, and N is an integer greater than or equal to 1;
The determining module being further configured to determine an image recognition result of the image to be recognized according to the image similarity.
12. An image recognition model training apparatus, characterized by comprising:
An obtaining module, configured to obtain a set of training images, wherein the set of training images contains at least one training image, and each training image corresponds to an identity label;
The obtaining module being further configured to obtain, by a large-scale image recognition model, a first training image feature corresponding to each training image;
The obtaining module being further configured to obtain, by a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
A training module, configured to train the small-scale image recognition model to be trained according to the first training image feature, the second training image feature, and the class weight vector corresponding to each training image obtained by the obtaining module, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device, and the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model.
13. A terminal device, characterized by comprising: a memory, a transceiver, a processor, and a bus system;
Wherein, the memory is configured to store a program;
The processor is configured to execute the program in the memory to perform the following steps:
Obtaining an image to be recognized;
Obtaining a first image feature of the image to be recognized by a small-scale image recognition model, wherein the small-scale image recognition model is deployed on the terminal device;
Determining, according to the first image feature and N second image features, an image similarity between the first image feature and the second image feature, wherein the second image features are image features obtained from images to be matched by a large-scale image recognition model, the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model, and N is an integer greater than or equal to 1;
Determining an image recognition result of the image to be recognized according to the image similarity;
The bus system is configured to connect the memory and the processor, so that the memory and the processor communicate with each other.
14. A server, characterized by comprising: a memory, a transceiver, a processor, and a bus system;
Wherein, the memory is configured to store a program;
The processor is configured to execute the program in the memory to perform the following steps:
Obtaining a set of training images, wherein the set of training images contains at least one training image, and each training image corresponds to an identity label;
Obtaining, by a large-scale image recognition model, a first training image feature corresponding to each training image;
Obtaining, by a small-scale image recognition model to be trained, a second training image feature corresponding to each training image, wherein each second training image feature corresponds to a class weight vector, and the class weight vectors are in one-to-one correspondence with the identity labels;
Training the small-scale image recognition model to be trained according to the first training image feature, the second training image feature, and the class weight vector corresponding to each training image, to obtain a small-scale image recognition model, wherein the small-scale image recognition model is deployed on a terminal device, and the number of model parameters of the large-scale image recognition model is greater than the number of model parameters of the small-scale image recognition model;
The bus system is configured to connect the memory and the processor, so that the memory and the processor communicate with each other.
15. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 4, or to perform the method according to any one of claims 5 to 10.
CN201910289986.1A 2019-04-11 2019-04-11 Image recognition method, image recognition model training method and device Active CN110009052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910289986.1A CN110009052B (en) 2019-04-11 2019-04-11 Image recognition method, image recognition model training method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910289986.1A CN110009052B (en) 2019-04-11 2019-04-11 Image recognition method, image recognition model training method and device

Publications (2)

Publication Number Publication Date
CN110009052A true CN110009052A (en) 2019-07-12
CN110009052B CN110009052B (en) 2022-11-18

Family

ID=67171118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910289986.1A Active CN110009052B (en) 2019-04-11 2019-04-11 Image recognition method, image recognition model training method and device

Country Status (1)

Country Link
CN (1) CN110009052B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106170800A (en) * 2014-09-12 2016-11-30 微软技术许可有限责任公司 Student DNN is learnt via output distribution
WO2017106996A1 (en) * 2015-12-21 2017-06-29 厦门中控生物识别信息技术有限公司 Human facial recognition method and human facial recognition device
CN106778684A (en) * 2017-01-12 2017-05-31 易视腾科技股份有限公司 deep neural network training method and face identification method
TW201828109A (en) * 2017-01-19 2018-08-01 阿里巴巴集團服務有限公司 Image search, image information acquisition and image recognition methods, apparatuses and systems effectively improving the image search accuracy, reducing the rearrangement filtering workload, and improving the search efficiency
US20180260665A1 (en) * 2017-03-07 2018-09-13 Board Of Trustees Of Michigan State University Deep learning system for recognizing pills in images
CN108734283A (en) * 2017-04-21 2018-11-02 通用电气公司 Nerve network system
CN107358293A (en) * 2017-06-15 2017-11-17 北京图森未来科技有限公司 A kind of neural network training method and device
CN107247989A (en) * 2017-06-15 2017-10-13 北京图森未来科技有限公司 A kind of neural network training method and device
CN108229532A (en) * 2017-10-30 2018-06-29 北京市商汤科技开发有限公司 Image-recognizing method, device and electronic equipment
CN108875767A (en) * 2017-12-07 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and the computer storage medium of image recognition
CN108875533A (en) * 2018-01-29 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and the computer storage medium of recognition of face
CN108334644A (en) * 2018-03-30 2018-07-27 百度在线网络技术(北京)有限公司 Image-recognizing method and device
CN108830288A (en) * 2018-04-25 2018-11-16 北京市商汤科技开发有限公司 Image processing method, the training method of neural network, device, equipment and medium
CN108665457A (en) * 2018-05-16 2018-10-16 腾讯科技(深圳)有限公司 Image-recognizing method, device, storage medium and computer equipment
CN109002790A (en) * 2018-07-11 2018-12-14 广州视源电子科技股份有限公司 A kind of method, apparatus of recognition of face, equipment and storage medium
CN109241988A (en) * 2018-07-16 2019-01-18 北京市商汤科技开发有限公司 Feature extracting method and device, electronic equipment, storage medium, program product
CN109165738A (en) * 2018-09-19 2019-01-08 北京市商汤科技开发有限公司 Optimization method and device, electronic equipment and the storage medium of neural network model
CN109543817A (en) * 2018-10-19 2019-03-29 北京陌上花科技有限公司 Model distillating method and device for convolutional neural networks
CN109522872A (en) * 2018-12-04 2019-03-26 西安电子科技大学 A kind of face identification method, device, computer equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ADRIANA ROMERO 等: "FITNETS: HINTS FOR THIN DEEP NETS", 《ARXIV》 *
HAO WANG 等: "CosFace: Large Margin Cosine Loss for Deep Face Recognition", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
葛仕明 等: "基于深度特征蒸馏的人脸识别", 《北京交通大学学报》 *
魏彪 等: "基于移动端的高效人脸识别算法", 《现代计算机(专业版)》 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112241648A (en) * 2019-07-16 2021-01-19 杭州海康威视数字技术股份有限公司 Image processing system and image device
CN110348422A (en) * 2019-07-18 2019-10-18 北京地平线机器人技术研发有限公司 Image processing method, device, computer readable storage medium and electronic equipment
CN110348422B (en) * 2019-07-18 2021-11-09 北京地平线机器人技术研发有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110414581A (en) * 2019-07-19 2019-11-05 腾讯科技(深圳)有限公司 Picture detection method and device, storage medium and electronic device
CN110414581B (en) * 2019-07-19 2023-05-30 腾讯科技(深圳)有限公司 Picture detection method and device, storage medium and electronic device
CN110458217B (en) * 2019-07-31 2024-04-19 腾讯医疗健康(深圳)有限公司 Image recognition method and device, fundus image recognition method and electronic equipment
CN110458217A (en) * 2019-07-31 2019-11-15 腾讯医疗健康(深圳)有限公司 Image-recognizing method and device, eye fundus image recognition methods and electronic equipment
CN110517771A (en) * 2019-08-29 2019-11-29 腾讯医疗健康(深圳)有限公司 A kind of medical image processing method, medical image recognition method and device
CN110569911A (en) * 2019-09-11 2019-12-13 深圳绿米联创科技有限公司 Image recognition method, device, system, electronic equipment and storage medium
CN110569911B (en) * 2019-09-11 2022-06-07 深圳绿米联创科技有限公司 Image recognition method, device, system, electronic equipment and storage medium
CN110598019A (en) * 2019-09-11 2019-12-20 腾讯科技(深圳)有限公司 Repeated image identification method and device
CN111079517A (en) * 2019-10-31 2020-04-28 福建天泉教育科技有限公司 Face management and recognition method and computer-readable storage medium
CN111079517B (en) * 2019-10-31 2023-02-28 福建天泉教育科技有限公司 Face management and recognition method and computer-readable storage medium
WO2021109867A1 (en) * 2019-12-04 2021-06-10 RealMe重庆移动通信有限公司 Image processing method and apparatus, computer readable storage medium and electronic device
CN113051962B (en) * 2019-12-26 2022-11-04 四川大学 Pedestrian re-identification method based on twin Margin-Softmax network combined attention machine
CN113051962A (en) * 2019-12-26 2021-06-29 四川大学 Pedestrian re-identification method based on twin Margin-Softmax network combined attention machine
CN111126573B (en) * 2019-12-27 2023-06-09 深圳力维智联技术有限公司 Model distillation improvement method, device and storage medium based on individual learning
CN111126573A (en) * 2019-12-27 2020-05-08 深圳力维智联技术有限公司 Model distillation improvement method and device based on individual learning and storage medium
CN111260056B (en) * 2020-01-17 2024-03-12 北京爱笔科技有限公司 Network model distillation method and device
CN111260056A (en) * 2020-01-17 2020-06-09 北京爱笔科技有限公司 Network model distillation method and device
CN111317653B (en) * 2020-02-24 2023-10-13 江苏大学 Interactive intelligent auxiliary device and method for blind person
CN111317653A (en) * 2020-02-24 2020-06-23 江苏大学 Interactive blind person intelligent auxiliary device and method
CN111476138A (en) * 2020-03-31 2020-07-31 万翼科技有限公司 Construction method and identification method of building drawing component identification model and related equipment
CN111462082A (en) * 2020-03-31 2020-07-28 重庆金山医疗技术研究院有限公司 Focus picture recognition device, method and equipment and readable storage medium
CN111476138B (en) * 2020-03-31 2023-08-18 万翼科技有限公司 Construction method, identification method and related equipment for building drawing component identification model
CN111582066A (en) * 2020-04-21 2020-08-25 浙江大华技术股份有限公司 Heterogeneous face recognition model training method, face recognition method and related device
CN111582066B (en) * 2020-04-21 2023-10-03 浙江大华技术股份有限公司 Heterogeneous face recognition model training method, face recognition method and related device
WO2021216309A1 (en) * 2020-04-24 2021-10-28 Ul Llc Using machine learning to virtualize products tests
WO2021244521A1 (en) * 2020-06-04 2021-12-09 广州虎牙科技有限公司 Object classification model training method and apparatus, electronic device, and storage medium
CN111914908B (en) * 2020-07-14 2023-10-24 浙江大华技术股份有限公司 Image recognition model training method, image recognition method and related equipment
CN111914908A (en) * 2020-07-14 2020-11-10 浙江大华技术股份有限公司 Image recognition model training method, image recognition method and related equipment
CN112101551A (en) * 2020-09-25 2020-12-18 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for training a model
CN112258363A (en) * 2020-10-16 2021-01-22 浙江大华技术股份有限公司 Identity information confirmation method and device, storage medium and electronic device
CN112308028A (en) * 2020-11-25 2021-02-02 四川省农业科学院蚕业研究所 Intelligent counting method for silkworm larvae and application thereof
CN112308028B (en) * 2020-11-25 2023-07-14 四川省农业科学院蚕业研究所 Intelligent silkworm larva counting method
CN112528775A (en) * 2020-11-28 2021-03-19 西北工业大学 Underwater target classification method
CN112418170A (en) * 2020-12-11 2021-02-26 法赫光学科技(成都)有限公司 Oral examination and identification method based on 3D scanning
CN112418170B (en) * 2020-12-11 2024-03-01 法赫光学科技(成都)有限公司 3D scanning-based oral examination and identification method
CN112733722A (en) * 2021-01-11 2021-04-30 深圳力维智联技术有限公司 Gesture recognition method, device and system and computer readable storage medium
CN112786057A (en) * 2021-02-23 2021-05-11 厦门熵基科技有限公司 Voiceprint recognition method and device, electronic equipment and storage medium
CN112786057B (en) * 2021-02-23 2023-06-02 厦门熵基科技有限公司 Voiceprint recognition method and device, electronic equipment and storage medium
CN112597984A (en) * 2021-03-04 2021-04-02 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium
CN113065495B (en) * 2021-04-13 2023-07-14 深圳技术大学 Image similarity calculation method, target object re-recognition method and system
CN113065495A (en) * 2021-04-13 2021-07-02 深圳技术大学 Image similarity calculation method, target object re-identification method and system
CN113658437A (en) * 2021-10-20 2021-11-16 枣庄智博智能科技有限公司 Traffic signal lamp control system

Also Published As

Publication number Publication date
CN110009052B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN110009052A (en) A kind of method of image recognition, the method and device of image recognition model training
CN107943860B (en) Model training method, text intention recognition method and text intention recognition device
JP7265003B2 (en) Target detection method, model training method, device, apparatus and computer program
CN110321965B (en) Training method of object re-recognition model, and object re-recognition method and device
WO2020182112A1 (en) Image region positioning method, model training method, and related apparatus
US11977851B2 (en) Information processing method and apparatus, and storage medium
CN111291190B (en) Training method of encoder, information detection method and related device
CN110163082A (en) A kind of image recognition network model training method, image-recognizing method and device
CN108280458A (en) Group relation kind identification method and device
CN108304388A (en) Machine translation method and device
CN109918684A (en) Model training method, interpretation method, relevant apparatus, equipment and storage medium
CN110738211A (en) object detection method, related device and equipment
CN110909630A (en) Abnormal game video detection method and device
CN112101329B (en) Video-based text recognition method, model training method and model training device
CN110069715A (en) A kind of method of information recommendation model training, the method and device of information recommendation
CN109670174A (en) A kind of training method and device of event recognition model
CN111009031B (en) Face model generation method, model generation method and device
CN111816159A (en) Language identification method and related device
CN110516113B (en) Video classification method, video classification model training method and device
CN113723378B (en) Model training method and device, computer equipment and storage medium
CN111709398A (en) Image recognition method, and training method and device of image recognition model
CN110135497A (en) Method, the method and device of Facial action unit intensity estimation of model training
CN109376781A (en) A kind of training method, image-recognizing method and the relevant apparatus of image recognition model
CN116935188B (en) Model training method, image recognition method, device, equipment and medium
CN112862021A (en) Content labeling method and related device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant