CN114821658B - Face recognition method, operation control device, electronic equipment and storage medium - Google Patents

Cow face recognition method, operation control device, electronic equipment and storage medium

Info

Publication number
CN114821658B
CN114821658B CN202210508708.2A
Authority
CN
China
Prior art keywords
network
feature extraction
cow
feature
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210508708.2A
Other languages
Chinese (zh)
Other versions
CN114821658A (en)
Inventor
唐子豪
刘莉红
刘玉宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202210508708.2A priority Critical patent/CN114821658B/en
Publication of CN114821658A publication Critical patent/CN114821658A/en
Application granted granted Critical
Publication of CN114821658B publication Critical patent/CN114821658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cow face recognition method, which comprises the following steps: acquiring image data of cattle to obtain a cow face data set; obtaining four feature extraction models according to the cow face data set, a first feature extraction network and a second feature extraction network; obtaining four groups of features according to the cow face data set and the four feature extraction models; designing an integrated network which, in cooperation with the four feature extraction models, forms a complete cow face recognition network; inputting the four groups of features into the integrated network and training it; and carrying out cow face recognition according to the cow face recognition network and the cow face data set. For a herd of limited individuals, the method achieves higher recognition accuracy and has strong applicability.

Description

Cow face recognition method, operation control device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image recognition, and in particular, to a method for recognizing a cow face, an operation control device, an electronic device, and a storage medium.
Background
At present, identifying cattle mainly depends on ear tags, but the possibility that ear tags are swapped manually cannot be ruled out, so historical photos must be compared manually, which raises a series of problems such as personnel management, recognition difficulty, and changes in physical signs. Algorithms for animal recognition mainly fall into two categories. The first is adapted from traditional image feature extraction algorithms and suffers from strong restrictions on image angle, difficulty of modification, and poor applicability. The second is adapted from deep learning classification algorithms and matches extracted image features against a database, which suffers from insufficient specificity and insufficient accuracy.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art, and provides a cow face recognition method, an operation control device, electronic equipment and a storage medium that can achieve higher recognition accuracy and have strong applicability.
In a first aspect, an embodiment of the present invention provides a method for identifying a cow face, where the method includes:
Acquiring image data of cattle to obtain a cow face data set, wherein the cow face data set comprises a public data set, a first feature model training set, a second feature model training set and an integrated network training set;
Respectively inputting the public data set and the first feature model training set into a first feature extraction network and a second feature extraction network, training to obtain a first feature extraction model and a second feature extraction model, respectively inputting the public data set and the second feature model training set into the first feature extraction network and the second feature extraction network, and training to obtain a third feature extraction model and a fourth feature extraction model;
Inputting the public data set and the integrated network training set into the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model respectively to obtain four groups of features;
Designing an integrated network to form a complete cow face recognition network in cooperation with the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model, inputting the four groups of features into the integrated network, and training the integrated network;
And carrying out cow face recognition according to the cow face recognition network and the cow face data set.
The cow face recognition method provided by the first aspect of the invention has at least the following beneficial effects. For a herd of limited individuals, image data of each cow are collected to obtain a cow face data set, and the cow face data set is distributed reasonably. The set of the public data set and the first feature model training set is input into the first feature extraction network, which is trained to obtain the first feature extraction model; the same set is input into the second feature extraction network, which is trained to obtain the second feature extraction model; the set of the public data set and the second feature model training set is input into the first feature extraction network, which is trained to obtain the third feature extraction model; and the same set is input into the second feature extraction network, which is trained to obtain the fourth feature extraction model. The set of the public data set and the integrated network training set is input into each of the four feature extraction models to obtain four groups of features; an integrated network is designed, the four groups of features are input into it, and it is trained. The four feature extraction models and the integrated network form a complete cow face recognition network. This network fuses several feature extraction models and combines their advantages: model training exhibits both network differences and structural differences, the cow face data set is allocated in a targeted manner, and the shared public data set keeps the training differences between the models from becoming too large. Cow face recognition can then be carried out according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
In one embodiment of the invention, the cow face data set further comprises a test set, and the method further comprises:
After the four groups of features are input into the integrated network and the integrated network is trained, verifying the performance of the cow face recognition network using the test set.
In one embodiment of the present invention, the carrying out of cow face recognition according to the cow face recognition network and the cow face data set includes:
Establishing a database according to the cattle face data set and the cattle face recognition network, wherein the database comprises cattle information and feature vectors corresponding to all cattle;
Inputting an image to be identified;
Detecting and classifying the image to be identified to obtain a cow face image;
Inputting the cow face image into the cow face recognition network to obtain a feature vector corresponding to the cow face image;
And comparing the Euclidean distance between the feature vector of the cow face image and the existing feature vector in the database to realize cow face recognition.
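The comparison step above amounts to nearest-neighbour matching in Euclidean space. A minimal sketch (illustrative only; the function names, the toy 4-dimensional vectors and the distance threshold are our assumptions, not the patent's):

```python
import numpy as np

def build_database(ids, vectors):
    """Map each cow ID to its L2-normalised reference feature vector."""
    return {cid: v / np.linalg.norm(v) for cid, v in zip(ids, vectors)}

def identify(query, database, threshold=1.0):
    """Return the ID whose stored vector is nearest (in Euclidean distance)
    to the query vector, or None if every distance exceeds the threshold."""
    q = query / np.linalg.norm(query)
    best_id, best_dist = None, float("inf")
    for cid, ref in database.items():
        dist = np.linalg.norm(q - ref)
        if dist < best_dist:
            best_id, best_dist = cid, dist
    return best_id if best_dist <= threshold else None

# Toy 4-dimensional vectors standing in for cow face network outputs
db = build_database(["cow_1", "cow_2"],
                    [np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0])])
print(identify(np.array([0.9, 0.1, 0, 0]), db))  # nearest to cow_1
```

In a real deployment the reference vectors would come from the trained cow face recognition network, and the threshold would be tied to the boundary value of the triplet loss.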
In one embodiment of the present invention, the first feature extraction network is a ZF network and the second feature extraction network is a GoogLeNet network.
In one embodiment of the present invention, the integrated network includes a feature stitching layer, a first full connection layer, a second full connection layer, and an output layer that are sequentially connected, where the feature stitching layer is configured to stitch the features extracted by the first feature extraction model, the second feature extraction model, the third feature extraction model, and the fourth feature extraction model into a feature array with a size of 1×1×512; the activation function of the first full-connection layer is tanh; the activation function of the second full connection layer is sigmoid; and the output layer is used for carrying out softmax normalization processing to obtain a recognition result.
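The integrated network of this embodiment can be sketched as a plain forward pass (a NumPy illustration only; the hidden-layer widths of 256 and 128, the 128-dimensional per-model features and the class count are assumptions for the demo — the text only fixes the 1×1×512 stitched feature array and the activation functions):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def integrated_forward(features, W1, b1, W2, b2, W_out, b_out):
    # Feature stitching layer: four 128-d feature vectors -> one 512-d array
    x = np.concatenate(features)                     # shape (512,)
    h1 = np.tanh(W1 @ x + b1)                        # first FC layer, tanh
    h2 = 1.0 / (1.0 + np.exp(-(W2 @ h1 + b2)))       # second FC layer, sigmoid
    return softmax(W_out @ h2 + b_out)               # output layer, softmax

n_classes = 5                                        # number of cows (assumed)
feats = [rng.normal(size=128) for _ in range(4)]     # outputs of the 4 models
W1, b1 = rng.normal(size=(256, 512)) * 0.01, np.zeros(256)
W2, b2 = rng.normal(size=(128, 256)) * 0.01, np.zeros(128)
Wo, bo = rng.normal(size=(n_classes, 128)) * 0.01, np.zeros(n_classes)

probs = integrated_forward(feats, W1, b1, W2, b2, Wo, bo)
print(probs.shape)  # (5,) — one softmax-normalised score per cow
```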
In one embodiment of the invention, the public data set accounts for 20% of the image data of each cow; the integrated network training set accounts for 20%; the first feature model training set accounts for 25%; the second feature model training set accounts for 25%; and the test set accounts for 10%.
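The stated proportions suggest a per-cow split along the following lines (an illustrative sketch; the subset names, file names and the deterministic seed are ours, not the patent's):

```python
import random

def split_cow_photos(photos, seed=42):
    """Split one cow's photos into the five subsets at the stated ratios:
    public 20%, first-model 25%, second-model 25%, integrated 20%, test 10%."""
    photos = photos[:]
    random.Random(seed).shuffle(photos)
    n = len(photos)
    # Cumulative cut points at 20%, 45%, 70%, 90% of the photo list
    bounds = [int(n * p) for p in (0.20, 0.45, 0.70, 0.90)]
    return {
        "public":       photos[:bounds[0]],
        "first_model":  photos[bounds[0]:bounds[1]],
        "second_model": photos[bounds[1]:bounds[2]],
        "integrated":   photos[bounds[2]:bounds[3]],
        "test":         photos[bounds[3]:],
    }

parts = split_cow_photos([f"cow7_{i}.jpg" for i in range(20)])
print({k: len(v) for k, v in parts.items()})
# {'public': 4, 'first_model': 5, 'second_model': 5, 'integrated': 4, 'test': 2}
```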
In one embodiment of the invention, the loss functions of the first feature extraction network, the second feature extraction network, and the integrated network are all triplet losses.
In a second aspect, an embodiment of the present invention provides an operation control device, which includes at least one control processor and a memory communicatively connected with the at least one control processor; the memory stores instructions executable by the at least one control processor, so that the at least one control processor can perform the cow face recognition method according to the embodiment of the first aspect of the invention.
The operation control device provided by the embodiment of the second aspect of the invention has at least the following beneficial effects: the operation control device can execute the cow face recognition method according to the first aspect of the invention. The cow face recognition network used by that method fuses several feature extraction models and combines their advantages; model training exhibits both network differences and structural differences, the cow face data set is allocated in a targeted manner, and the public data set ensures that the training differences between the models do not become too large. For a herd of limited individuals, cow face recognition can be performed according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes an operation control apparatus according to an embodiment of the second aspect of the present invention.
According to the electronic equipment provided by the third aspect of the invention, the electronic equipment has at least the following beneficial effects: the electronic equipment can execute the cow face recognition method according to the first aspect of the invention. The cow face recognition network used by that method fuses several feature extraction models and combines their advantages; model training exhibits both network differences and structural differences, the cow face data set is allocated in a targeted manner, and the public data set ensures that the training differences between the models do not become too large. For a herd of limited individuals, cow face recognition can be performed according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the cow face recognition method according to the embodiment of the first aspect of the present invention.
The computer-readable storage medium according to the fourth aspect of the present invention has at least the following advantageous effects: the stored instructions can be executed to realise the cow face recognition method according to the embodiment of the first aspect of the invention. The cow face recognition network used by that method fuses several feature extraction models and combines their advantages; model training exhibits both network differences and structural differences, the cow face data set is allocated in a targeted manner, and the public data set ensures that the training differences between the models do not become too large. For a herd of limited individuals, cow face recognition can be performed according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and do not limit the invention.
The invention is further described below with reference to the drawings and examples;
fig. 1 is a flowchart of steps of a method for identifying a cow face according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a part of steps of a method for identifying a cow face according to another embodiment of the present invention;
fig. 3 is a flowchart of sub-steps of a face recognition method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a part of a structure of a face recognition network according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an operation control device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings. The accompanying drawings supplement the written description so that each technical feature and the overall technical scheme of the present invention can be understood intuitively and vividly, but they do not limit the scope of the present invention.
In the description of the present invention, the description of the first, second, third, and fourth are only for the purpose of distinguishing technical features, and should not be construed as indicating or implying relative importance or implying the number of technical features indicated or the precedence of the technical features indicated.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
At present, a single breeding animal in modern animal husbandry is usually expensive, which has given rise to intelligent breeding, intelligent management, insurance business, and the like. For farmers, distinguishing individual livestock such as cattle mainly depends on ear tags, but related service personnel cannot rule out the possibility that ear tags are swapped manually, so historical photos must be compared manually, which raises a series of problems such as personnel management, recognition difficulty, and changes in physical signs. Past algorithms for animal recognition fall into two categories. The first is adapted from traditional image feature extraction algorithms and suffers from strong restrictions on image angle, difficulty of modification, and poor applicability. The second is adapted from deep learning classification algorithms and matches extracted image features against a database, which suffers from insufficient specificity and insufficient accuracy.
Based on the above, the embodiment of the invention provides a cow face recognition method, which can obtain higher recognition accuracy and has strong applicability.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present invention provides a method for identifying a cow face, which includes, but is not limited to, step S100, step S200, step S300, step S400, and step S500, as shown in fig. 1.
And S100, acquiring image data of the cow to obtain a cow face data set, wherein the cow face data set comprises a public data set, a first characteristic model training set, a second characteristic model training set and an integrated network training set.
Note that the common data set may be denoted as C, the first feature model training set as E, the second feature model training set as F, and the integrated network training set as D.
In one embodiment, for a cow group of limited individuals, more than 10 cow face photographs are collected for each cow, and the sample size is guaranteed to be large enough to obtain a cow face data set of the cow group, wherein the cow face data set comprises a public data set, a first feature model training set, a second feature model training set and an integrated network training set.
It should be noted that the number of face photos collected for each cow may be 16 or 20; this embodiment does not specifically limit that number. It is easy to understand that the number may also be 10 or fewer, for example 8, which is likewise within the protection scope of the present invention.
Specifically, in an embodiment, the cow face data set is randomly divided into a public data set, a first feature model training set, a second feature model training set and an integrated network training set, each of which accounts for 25%.
It should be noted that, the cow face data set may also include other parts, such as a test set, and when the cow face data set includes the test set, the cow face data set is randomly allocated into a public data set, a first feature model training set, a second feature model training set, an integrated network training set and the test set according to a reasonable proportion.
Step S200, inputting the public data set and the first feature model training set into the first feature extraction network and the second feature extraction network respectively, training to obtain a first feature extraction model and a second feature extraction model, and inputting the public data set and the second feature model training set into the first feature extraction network and the second feature extraction network respectively, training to obtain a third feature extraction model and a fourth feature extraction model.
It should be noted that the first feature extraction network may be denoted as NN1, the second feature extraction network may be denoted as NN2, the set of the common data set C and the first feature model training set E may be denoted as data set a, and the set of the common data set C and the second feature model training set F may be denoted as data set B.
In an embodiment, the data set A is input into a first feature extraction network NN1, a first feature extraction model is obtained through training, and the first feature extraction model can be recorded as NN1-A; inputting the data set A into a second feature extraction network NN2, training to obtain a second feature extraction model, and marking the second feature extraction model as NN2-A; inputting the data set B into a first feature extraction network NN1, training to obtain a third feature extraction model, and marking the third feature extraction model as NN1-B; the data set B is input into a second feature extraction network NN2, and a fourth feature extraction model is obtained through training and is marked as NN2-B.
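The two-networks-by-two-datasets pairing of this embodiment can be summarised schematically (the `train` function here is a hypothetical placeholder standing in for the actual optimisation loop; only the pairing structure is taken from the text):

```python
# Hypothetical stand-in: a real implementation would fit the named network
# on the named data set and return the fitted model.
def train(network_name, dataset_name):
    return f"{network_name}-{dataset_name}"

C, E, F = "C", "E", "F"            # subsets named as in the text
dataset_A = C + "+" + E            # public set plus first feature model training set
dataset_B = C + "+" + F            # public set plus second feature model training set

# Two networks x two data sets -> four feature extraction models
models = {
    "NN1-A": train("NN1", dataset_A),   # first feature extraction model
    "NN2-A": train("NN2", dataset_A),   # second feature extraction model
    "NN1-B": train("NN1", dataset_B),   # third feature extraction model
    "NN2-B": train("NN2", dataset_B),   # fourth feature extraction model
}
print(sorted(models))  # ['NN1-A', 'NN1-B', 'NN2-A', 'NN2-B']
```

The shared public data set C appears in both dataset_A and dataset_B, which is what keeps the four trained models from drifting too far apart.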
It is easy to understand that the first feature extraction network and the second feature extraction network are different neural networks, so that model training between the first feature extraction model NN1-A, the second feature extraction model NN2-A, the third feature extraction model NN1-B and the fourth feature extraction model NN2-B has network differences; since the data set A and the data set B are not identical, there is a structural difference in model training between the first feature extraction model NN1-A and the third feature extraction model NN1-B, and likewise between the second feature extraction model NN2-A and the fourth feature extraction model NN2-B; and because the public data set C is contained in both the data set A and the data set B, the training differences between the models are guaranteed not to become too large.
It should be noted that the first feature extraction network and the second feature extraction network may be any artificial neural network used for recognition, such as a ZF network, a GoogLeNet network, a ResNet network, or a VGGNet network; this embodiment does not specifically limit their types, so long as features in the cow face image can be extracted.
And step S300, inputting the set of the public data set and the integrated network training set into the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model respectively to obtain four groups of features.
In an embodiment, after the four feature extraction models are obtained by training, the set of the public data set C and the integrated network training set D is input into each of the four feature extraction models, namely the first feature extraction model NN1-A, the second feature extraction model NN2-A, the third feature extraction model NN1-B and the fourth feature extraction model NN2-B, so as to obtain four groups of features.
Step S400, designing an integrated network to form a complete cow face recognition network in cooperation with the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model, inputting the four groups of features into the integrated network, and training the integrated network.
In an embodiment, an integrated network is designed which, in cooperation with the first feature extraction model NN1-A, the second feature extraction model NN2-A, the third feature extraction model NN1-B and the fourth feature extraction model NN2-B, forms a complete cow face recognition network. The four groups of features are input into the integrated network and it is trained; since the integrated network fuses the four groups of features into integrated features, the cow face recognition network can combine the advantages of each model and obtain a more accurate recognition result.
And step S500, carrying out cow face recognition according to the cow face recognition network and the cow face data set.
In an embodiment, a cow face recognition network is obtained through training according to the cow face data set; it comprises the first feature extraction model NN1-A, the second feature extraction model NN2-A, the third feature extraction model NN1-B, the fourth feature extraction model NN2-B and the integrated network. Because the network fuses several feature models, more features can be obtained; the features are combined by the integrated network, so the advantages of the individual models are merged and the recognition result is more accurate. The cow face recognition network does not rely on a single type of feature but performs recognition by combining multiple groups of features, which improves its applicability.
In particular, in one embodiment, the loss function of the cow face recognition network is a triplet loss:

L = Σ_i [ ‖f(x_i^a) - f(x_i^p)‖² - ‖f(x_i^a) - f(x_i^n)‖² + α ]_+

wherein [x]_+ = max{0, x}; f(x_i^a) is a feature vector of a single cow; f(x_i^p) is another feature vector of the same cow; f(x_i^n) is a feature vector of a different cow; and α is a boundary value.
The input to the loss function is a triplet, indicating that the loss is calculated from three parameters. The triplet consists of an anchor, a positive and a negative, where the anchor denotes an anchor example, the positive a positive example and the negative a negative example; all three are L2-regularised feature vectors. The anchor and the positive correspond to two matching cow face thumbnails: the anchor is a reference picture during model training, the positive is a picture of the same cow as the anchor, and the negative is a picture of a cow different from the anchor, i.e. a non-matching cow face thumbnail. Training the cow face recognition network with the triplet loss makes the Euclidean distances between feature vectors of different images of the same cow smaller and the Euclidean distances between feature vectors of images of different cattle larger, so that finally the Euclidean distance between any two feature vectors of the same cow is always smaller than the boundary value, and the Euclidean distance between any feature vector of a cow and the feature vectors of other cattle is always larger than the boundary value, thereby realising cow face recognition.
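The triplet loss described above can be sketched in a few lines (a minimal NumPy illustration, not the patent's implementation; the 2-dimensional toy vectors and the margin value 0.2 are made up for the demo):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """[x]_+ = max{0, x} applied to d(a,p)^2 - d(a,n)^2 + alpha,
    with all three vectors L2-normalised as in the description."""
    a, p, n = (v / np.linalg.norm(v) for v in (anchor, positive, negative))
    d_pos = np.sum((a - p) ** 2)   # squared distance to the same cow
    d_neg = np.sum((a - n) ** 2)   # squared distance to a different cow
    return max(0.0, d_pos - d_neg + alpha)

a = np.array([1.0, 0.0])   # anchor: reference image of a cow
p = np.array([0.9, 0.1])   # positive: another image of the same cow, nearby
n = np.array([0.0, 1.0])   # negative: an image of a different cow, far away
print(triplet_loss(a, p, n))  # 0.0 — the margin is already satisfied
```

Swapping the positive and the negative would violate the margin and produce a positive loss, which is exactly the signal that drives training.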
Specifically, in an embodiment, in order to train the cow face recognition network, two photos of the same cow and one photo of another cow may be used as a group of triplets: the feature vectors corresponding to the two photos of the same cow serve as the anchor example and the positive example respectively, and the feature vector corresponding to the photo of the other cow serves as the negative example.
It is easy to understand that, one photo of each cow may be input to the first feature extraction network NN1 and the second feature extraction network NN2, so as to obtain feature vectors corresponding to the face pictures of each cow, and these feature vectors corresponding to each cow are used as anchor examples corresponding to each cow.
Specifically, in one embodiment, the loss functions of the first feature extraction network NN1, the second feature extraction network NN2, and the integrated network are all triplet losses.
It should be noted that the samples for the triplet loss can be classified into three types, namely easy triplets, hard triplets and semi-hard triplets; this embodiment does not specifically limit the type of samples used.
It can be understood by those skilled in the art that the cow face recognition method provided by the embodiment of the invention maps cow face images into Euclidean space. Each image needs only a small amount of processing: the cow face region is cropped and no additional preprocessing is required, so the method is convenient to use and highly applicable. The cow face recognition network adopted by the method achieves high accuracy on the data set, and its recognition precision can be further improved by analyzing incorrectly recognized samples. Therefore, for a herd of limited individuals, the recognition accuracy of the cow face recognition method is higher, and situations such as insurance fraud can be effectively avoided.
It is worth noting that, in the cow face recognition method provided by the embodiment of the invention, an artificial neural network learns a mapping from cow face images to Euclidean space, so that each cow face image is mapped to a feature vector, and the similarity between cow face images is represented by the reciprocal of the distance between their feature vectors. In this way the Euclidean distances between feature vectors of different images of the same cow are smaller, and those between feature vectors of images of different cows are larger, thereby realizing cow face recognition. That is, the recognition principle of the cow face recognition network used by the method is as follows: cow face images are mapped into Euclidean space to obtain their corresponding feature vectors; several groups of feature extraction models are set to obtain several groups of feature vectors; and the Euclidean distances between the feature vectors corresponding to the cow face images are compared, with the distances between feature vectors of different images of the same cow being smaller than a boundary value and the distances between feature vectors of images of different cows being larger than that boundary value, so that cow face recognition is realized.
Referring to fig. 2, fig. 2 is a flowchart of part of the steps of a cow face recognition method according to an embodiment of the present invention, where the cow face data set further includes a test set, which may be denoted as V, and the method further includes, but is not limited to, step S600.
Step S600, after the four groups of features are input into the integrated network and the integrated network is trained, the performance of the cow face recognition network is verified using the test set.
In an embodiment, the cow face data set includes a common data set C, a first feature model training set E, a second feature model training set F, an integrated network training set D, and a test set V. After the four groups of features are input into the integrated network and the integrated network is trained, a complete cow face recognition network is obtained; the performance of the cow face recognition network is then verified using the test set V, and the quality of the network's performance can be evaluated according to the verification result.
It will be appreciated that if the performance of the cow face recognition network is unsatisfactory, the network may be further optimized and tuned.
Specifically, in an embodiment, the test set V is a set of photos, and each photo may be assigned a label representing the cow to which it corresponds. The photo data is input into the cow face recognition network; it is easy to understand that the network has been trained so that the Euclidean distances between feature vectors corresponding to different pictures of the same cow are smaller than the boundary value, and those between feature vectors corresponding to pictures of different cows are larger than the boundary value. By checking whether different photos of the same cow receive the same label and whether photos of different cows receive different labels, the performance of the cow face recognition network, such as its recognition accuracy and recognition speed, can be evaluated to obtain a test result.
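Such a label-consistency check can be sketched as a pairwise comparison against the boundary value; the function below is an illustrative assumption, not part of the patent:

```python
import itertools

def pairwise_accuracy(embeddings, labels, boundary):
    # A pair is classified as "same cow" when its squared Euclidean
    # distance falls below boundary^2; accuracy counts how often this
    # prediction agrees with the ground-truth labels.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = total = 0
    for i, j in itertools.combinations(range(len(embeddings)), 2):
        same = labels[i] == labels[j]
        predicted_same = dist2(embeddings[i], embeddings[j]) < boundary ** 2
        correct += int(same == predicted_same)
        total += 1
    return correct / total if total else 0.0
```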
Specifically, in one embodiment, the common data set C accounts for 20% of the image data of each cow; the integrated network training set D accounts for 20%; the first feature model training set E accounts for 25%; the second feature model training set F accounts for 25%; and the test set V accounts for 10%. That is, 20% of the image data of each cow may be extracted as the common data set C, another 10% as the test set V, another 20% as the integrated network training set D, and the remaining 50% randomly divided into the first feature model training set E and the second feature model training set F.
It should be noted that the extraction is random and performed per cow, which ensures a reasonable distribution of each cow's image data across the subsets.
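A per-cow random split following the ratios above might look like the sketch below; the function name and the fixed seed are illustrative assumptions:

```python
import random

def split_cow_images(images, seed=0):
    # Randomly partition one cow's photos into the five subsets using
    # the ratios from the text: 20% common set C, 10% test set V,
    # 20% integrated network training set D, and the remaining 50%
    # divided between the feature model training sets E and F.
    rng = random.Random(seed)
    imgs = list(images)
    rng.shuffle(imgs)
    n = len(imgs)
    c_end = int(0.2 * n)
    v_end = c_end + int(0.1 * n)
    d_end = v_end + int(0.2 * n)
    rest = imgs[d_end:]
    half = len(rest) // 2
    return {"C": imgs[:c_end], "V": imgs[c_end:v_end],
            "D": imgs[v_end:d_end], "E": rest[:half], "F": rest[half:]}
```

The split would be applied to each cow separately, so that every cow is represented in every subset.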
Referring to fig. 3, an embodiment of the present invention further provides a method for identifying a cow face, and fig. 3 is a flowchart of sub-steps of step S500 in fig. 1, where step S500 includes, but is not limited to, step S510, step S520, step S530, step S540, and step S550.
Step S510, a database is established according to the cow face data set and the cow face recognition network, where the database includes cow information and the feature vector corresponding to each cow.
In an embodiment, the feature vectors of the existing cow faces in the database are the feature vectors corresponding to the photo data of each cow; a tag is created for each cow, and the database is established, where the database includes the cow information, i.e., the tag information, and the feature vector corresponding to each tag.
It is easy to understand that the feature vectors of the existing cow faces in the database include feature vectors corresponding to each cow, and the feature vector corresponding to each cow may be a feature vector corresponding to any cow face photo data of the cow.
In step S520, an image to be recognized is input.
Step S530, detecting and classifying the image to be identified to obtain the cow face image.
In an embodiment, before the image to be recognized is input into the cow face recognition network, the image may be detected and classified using a detection network and a classification network to determine whether it contains a cow face, so as to obtain a cow face image.
It can be understood that if the image to be recognized is not a cow face, the image to be recognized does not need to be input into the cow face recognition network, and if the image to be recognized is a cow face, the image to be recognized is subsequently input into the cow face recognition network for recognition.
It should be noted that the detection network may use an SSD detection network with a MobileNet V network backbone for acceleration, and the classification network may use a standard MobileNet V network.
As can be appreciated by those skilled in the art, the MobileNet V network is a lightweight network; since the task of detecting and classifying the image to be recognized is relatively simple, a lightweight network can be used to accelerate processing and reduce resource consumption.
It should be noted that the detection network may also be a Faster-RCNN target detection network or a YoloV2 target detection network, and the classification network may also be a ResNet network; this embodiment does not specifically limit the detection network and the classification network, so long as detection and classification can be realized, that is, so long as it can be judged whether the image to be recognized is a cow face image.
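The detection-then-classification gate of steps S520 to S540 can be sketched as follows; the three callables stand in for the detection, classification, and recognition networks and are purely illustrative:

```python
def recognize_pipeline(image, detect, classify, recognize):
    # Steps S520-S540: detect a candidate region, confirm it is a cow
    # face, and only then run the recognition network. Returns None for
    # images that are not cow faces.
    region = detect(image)
    if region is None or classify(region) != "cow_face":
        return None
    return recognize(region)
```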
Step S540, inputting the cow face image into a cow face recognition network to obtain the corresponding feature vector of the cow face image.
Step S550, comparing the Euclidean distance between the feature vector of the cow face image and the existing feature vector in the database, and realizing cow face recognition.
In an embodiment, a cow face image is input into the cow face recognition network to obtain its feature vectors, and the Euclidean distances between the feature vectors of the cow face image and the existing feature vectors in the database are compared; that is, the Euclidean distance between the feature vector corresponding to the cow face image and the feature vector corresponding to each cow in the database is compared against the boundary value. If the Euclidean distance is smaller than the boundary value, the cow face image is judged to match that cow in the database, i.e., the recognition object is that cow, and its identity is thereby determined; if the Euclidean distance is larger than the boundary value, the recognition object is judged not to be that cow in the database. Cow face recognition can thus be realized with the trained cow face recognition network and the database; the recognition accuracy is high, the applicability is strong, and the influence of factors such as the fur color, shooting angle, and lighting of the cow face image to be recognized on the recognition result can be reduced to a certain extent.
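A minimal database lookup of this kind might look as follows; the tag names and the nearest-match strategy are illustrative assumptions:

```python
def identify(query_vec, database, boundary):
    # `database` maps each cow's tag to its stored feature vector.
    # Return the tag of the nearest stored cow if its distance is
    # below the boundary value, otherwise None (unknown cow).
    best_tag, best_d2 = None, float("inf")
    for tag, vec in database.items():
        d2 = sum((x - y) ** 2 for x, y in zip(query_vec, vec))
        if d2 < best_d2:
            best_tag, best_d2 = tag, d2
    return best_tag if best_d2 < boundary ** 2 else None
```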
In another embodiment, step S500 includes, but is not limited to, step S510, step S540, and step S550; that is, there is no need to detect and classify the image to be recognized. It is easy to understand that only cow face images may be used as images to be recognized: before an image to be recognized is input into the cow face recognition network, it is ensured that the image is a cow face image, specifically by selecting only photos or images of the cow face region when collecting images to be recognized.
The embodiment of the invention also provides a cow face recognition method in which the first feature extraction network NN1 is a ZF network, i.e., a Zeiler & Fergus Net, and the second feature extraction network NN2 is a GoogLeNet network. Those skilled in the art can understand that the ZF network obtains better global features, while the GoogLeNet network captures detail features. By using the two feature extraction networks, the embodiment of the invention obtains four feature extraction models, namely the first feature extraction model NN1-A, the second feature extraction model NN2-A, the third feature extraction model NN1-B, and the fourth feature extraction model NN2-B, so the cow face recognition method can integrate the advantages of several models and obtain a better recognition effect.
Specifically, the ZF network structure in the present embodiment is as follows:
A first layer: a convolution layer, convolution kernel size 7×7×3, stride 2;
A second layer: a pooling layer, pooling kernel size 3×3×64, stride 2;
A third layer: a normalization layer, which computes the norm;
A fourth layer: a convolution layer, convolution kernel size 1×1×64, stride 1;
A fifth layer: a convolution layer, convolution kernel size 3×3×64, stride 1;
A sixth layer: a normalization layer, which computes the norm;
A seventh layer: a pooling layer, pooling kernel size 3×3×192, stride 2;
An eighth layer: a convolution layer, convolution kernel size 1×1×192, stride 1;
A ninth layer: a convolution layer, convolution kernel size 3×3×192, stride 1;
A tenth layer: a pooling layer, pooling kernel size 3×3×384, stride 2;
An eleventh layer: a convolution layer, convolution kernel size 1×1×384, stride 1;
A twelfth layer: a convolution layer, convolution kernel size 3×3×384, stride 1;
A thirteenth layer: a convolution layer, convolution kernel size 1×1×256, stride 1;
A fourteenth layer: a convolution layer, convolution kernel size 3×3×256, stride 1;
A fifteenth layer: a convolution layer, convolution kernel size 1×1×256, stride 1;
A sixteenth layer: a convolution layer, convolution kernel size 3×3×256, stride 1;
A seventeenth layer: a pooling layer, pooling kernel size 3×3×256, stride 2;
An eighteenth layer: a feature fusion layer;
A nineteenth layer: a fully connected layer, output feature size 1×32×128;
A twentieth layer: a fully connected layer, output feature size 1×32×128;
A twenty-first layer: a fully connected layer, output feature size 1×1×128.
In the ZF network structure, the first layer through the twenty-first layer are connected in sequence.
Specifically, the GoogLeNet network structure in this embodiment is as follows:
A first module: comprising a convolution layer and a pooling layer, where the convolution kernel size of the convolution layer is 7×7×64 with stride 2, and the pooling kernel size of the pooling layer is 3×3 with stride 2;
A second module: comprising two convolution layers and a pooling layer, where the convolution kernel size of the first convolution layer is 1×1, the convolution kernel size of the second convolution layer is 3×3×192 with stride 1, and the pooling kernel size of the pooling layer is 3×3×192 with stride 2;
A third module: comprising an Inception 3a layer and an Inception 3b layer. The Inception 3a layer is divided into four branches that use different scales: the first, second, and third branches perform convolutions and the fourth branch performs pooling; the results of the four branches are then concatenated along the third dimension of their outputs. The Inception 3b layer is likewise divided into four branches of different scales, with the first three performing convolutions and the fourth pooling, and the four outputs concatenated along the third dimension. The convolution kernels and pooling kernels used in the Inception 3a and Inception 3b layers are different;
A fourth module: comprising Inception 4a, Inception 4b, Inception 4c, and Inception 4e, which are similar to Inception 3a and Inception 3b;
A fifth module: comprising Inception 5a and Inception 5b, which are similar to Inception 3a and Inception 3b;
An output module: comprising a global average pooling layer and a fully connected layer, where the pooling kernel of the global average pooling layer is 7×7×128 and the output feature size of the fully connected layer is 1×1×128.
It should be noted that, in the GoogLeNet network structure, the first module, the second module, the third module, the fourth module, the fifth module, and the output module are connected in sequence.
It should be further noted that, in the ZF network of this embodiment, no normalization layer is set after the fully connected layer of the twentieth layer, and in the GoogLeNet network of this embodiment, no normalization layer is set after the fully connected layer of the output module; instead, the features are intercepted and used as the output of each feature extraction model, so that the features output by the feature extraction models can subsequently be integrated through the integrated network.
Referring to fig. 4, fig. 4 is a schematic structural diagram of part of the cow face recognition network. In one embodiment of the present invention, the integrated network in the cow face recognition network includes a feature stitching layer, a first fully connected layer, a second fully connected layer, and an output layer, connected in sequence. The feature stitching layer is configured to stitch the features extracted by the first feature extraction model, the second feature extraction model, the third feature extraction model, and the fourth feature extraction model into a feature array of size 1×1×512; the activation function of the first fully connected layer is tanh; the activation function of the second fully connected layer is sigmoid; and the output layer is a softmax layer used for normalization to obtain the recognition result.
Specifically, the output of the first fully connected layer is 1×1×1024, the output of the second fully connected layer is 1×1×1024, and the output layer performs the normalization to obtain the recognition result.
It should be noted that the integrated network is provided with two fully connected layers in order to obtain a better integration effect while saving network resources; however, three, four, or more layers may also be provided, so long as the effect of integrating multiple groups of features is achieved, and any number of fully connected layers falls within the protection scope of this embodiment of the invention.
Referring to fig. 5, an embodiment of the present invention also provides an operation control apparatus 500. The operation control apparatus 500 includes at least one control processor 510 and a memory 520 communicatively connected to the at least one control processor 510; the memory 520 stores instructions executable by the at least one control processor 510, which are executed by the at least one control processor 510 to enable it to perform the cow face recognition method provided by any of the method embodiments described above.
It can be understood that the operation control apparatus 500 can execute the cow face recognition method provided by any embodiment of the present invention. For a herd of limited individuals, image data of each cow is collected to obtain a cow face data set, and the cow face data set is reasonably allocated. The union of the common data set and the first feature model training set is input into the first feature extraction network to train the first feature extraction model, and into the second feature extraction network to train the second feature extraction model; the union of the common data set and the second feature model training set is input into the first feature extraction network to train the third feature extraction model, and into the second feature extraction network to train the fourth feature extraction model. The common data set and the integrated network training set are then input into the first, second, third, and fourth feature extraction models respectively to obtain four groups of features; an integrated network is designed, the four groups of features are input into the integrated network, and the integrated network is trained. The first, second, third, and fourth feature extraction models together with the integrated network form a complete cow face recognition network. The cow face recognition network integrates several feature extraction models and combines their advantages; the model training has network differences and structural differences, the cow face data set is allocated in a targeted manner, and the common data set ensures that the training differences between the models are not too large. Cow face recognition can then be carried out according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
The embodiment of the invention also provides an electronic device, which includes the operation control apparatus 500 provided by the above embodiment. The electronic device can execute the cow face recognition method provided by any method embodiment of the present invention: for a herd of limited individuals, image data of each cow is collected to obtain a cow face data set, and the cow face data set is reasonably allocated; the union of the common data set and the first feature model training set is input into the first feature extraction network to train the first feature extraction model, and into the second feature extraction network to train the second feature extraction model; the union of the common data set and the second feature model training set is input into the first feature extraction network to train the third feature extraction model, and into the second feature extraction network to train the fourth feature extraction model; the common data set and the integrated network training set are input into the first, second, third, and fourth feature extraction models respectively to obtain four groups of features; an integrated network is designed, the four groups of features are input into it, and it is trained. The first, second, third, and fourth feature extraction models and the integrated network form a complete cow face recognition network that integrates several feature extraction models and combines their advantages; the model training has network differences and structural differences, the cow face data set is allocated in a targeted manner, and the common data set ensures that the training differences between the models are not too large. Cow face recognition can be carried out according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
The embodiment of the invention also provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the cow face recognition method according to any of the method embodiments above. The cow face recognition network used by the method integrates several feature extraction models and combines their advantages; the model training has network differences and structural differences, the cow face data set can be allocated in a targeted manner, and the common data set ensures that the training differences between the models are not too large. For a herd of limited individuals, cow face recognition can be performed according to the cow face recognition network and the cow face data set, with a more accurate recognition effect and strong applicability.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media or non-transitory media and communication media or transitory media. The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk DVD or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention.

Claims (10)

1. A method of face recognition, the method comprising:
Acquiring image data of a cow to obtain a cow face data set, wherein the cow face data set comprises a public data set, a first characteristic model training set, a second characteristic model training set and an integrated network training set;
Respectively inputting the public data set and the first feature model training set into a first feature extraction network and a second feature extraction network, training to obtain a first feature extraction model and a second feature extraction model, respectively inputting the public data set and the second feature model training set into the first feature extraction network and the second feature extraction network, and training to obtain a third feature extraction model and a fourth feature extraction model;
Inputting the public data set and the integrated network training set into the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model respectively to obtain four groups of features;
Designing an integrated network to form a complete face recognition network by matching the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model, inputting four groups of features into the integrated network, and training the integrated network;
And carrying out the face recognition according to the face recognition network and the face data set.
2. The method of claim 1, wherein the cow face data set further comprises a test set, the method further comprising:
After the four groups of the features are input into the integrated network and the integrated network is trained, verifying the performance of the cow face recognition network using the test set.
3. The method of claim 1, wherein said performing face recognition based on said face recognition network and said face dataset comprises:
Establishing a database according to the cattle face data set and the cattle face recognition network, wherein the database comprises cattle information and feature vectors corresponding to all cattle;
Inputting an image to be identified;
Detecting and classifying the image to be identified to obtain a cow face image;
inputting the cow face image into the cow face recognition network to obtain a feature vector corresponding to the cow face image;
And comparing the Euclidean distance between the feature vector of the cow face image and the existing feature vector in the database to realize cow face recognition.
4. The method of claim 1, wherein the first feature extraction network is a ZF network and the second feature extraction network is a GoogLeNet network.
5. The method for recognizing the cow face according to claim 1, wherein the integrated network comprises a feature stitching layer, a first fully connected layer, a second fully connected layer and an output layer which are sequentially connected, wherein the feature stitching layer is used for stitching the features extracted by the first feature extraction model, the second feature extraction model, the third feature extraction model and the fourth feature extraction model into a feature array with a size of 1×1×512; the activation function of the first fully connected layer is tanh; the activation function of the second fully connected layer is sigmoid; and the output layer is used for carrying out softmax normalization processing to obtain a recognition result.
6. The method of claim 2, wherein the common data set accounts for 20% of the image data of each of the cattle; the integrated network training set accounts for 20%; the first feature model training set accounts for 25%; the second feature model training set accounts for 25%; and the test set accounts for 10%.
7. The method of claim 1, wherein the loss functions of the first feature extraction network, the second feature extraction network, and the integrated network are all triplet losses.
8. An operation control device comprising at least one control processor and a memory for communication with at least one of said control processors; the memory stores instructions executable by at least one of the control processors to enable the at least one control processor to perform the face recognition method of any one of claims 1 to 7.
9. An electronic apparatus comprising the operation control device according to claim 8.
10. A computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method of face recognition according to any one of claims 1 to 7.
CN202210508708.2A 2022-05-11 2022-05-11 Face recognition method, operation control device, electronic equipment and storage medium Active CN114821658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210508708.2A CN114821658B (en) 2022-05-11 2022-05-11 Face recognition method, operation control device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114821658A CN114821658A (en) 2022-07-29
CN114821658B true CN114821658B (en) 2024-05-14

Family

ID=82514005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210508708.2A Active CN114821658B (en) 2022-05-11 2022-05-11 Face recognition method, operation control device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114821658B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457338B (en) * 2022-11-09 2023-03-28 中国平安财产保险股份有限公司 Method and device for identifying uniqueness of cow, computer equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142082A (en) * 2011-04-08 2011-08-03 Nanjing University of Posts and Telecommunications Virtual sample based kernel discrimination method for face recognition
CN109583431A (en) * 2019-01-02 2019-04-05 Shanghai Jilian Network Technology Co., Ltd. Face emotion recognition model and method, and electronic device therefor
CN110728179A (en) * 2019-09-04 2020-01-24 Tianjin University Pig face identification method adopting multi-path convolutional neural network
CN111382727A (en) * 2020-04-02 2020-07-07 Anhui Ruiji Intelligent Technology Co., Ltd. Deep learning-based dog face identification method
CN111598182A (en) * 2020-05-22 2020-08-28 Beijing SenseTime Technology Development Co., Ltd. Method, apparatus, device and medium for training neural network and image recognition
CN112149556A (en) * 2020-09-22 2020-12-29 Nanjing University of Aeronautics and Astronautics Face attribute recognition method based on deep mutual learning and knowledge transfer
CN112215157A (en) * 2020-10-13 2021-01-12 Beijing Zhongdian Xingfa Technology Co., Ltd. Multi-model fusion-based face feature dimension reduction extraction method
CN113449712A (en) * 2021-09-01 2021-09-28 Wuhan Fangxin Technology Co., Ltd. Goat face identification method based on an improved AlexNet network
CN113569732A (en) * 2021-07-27 2021-10-29 Xiamen University of Technology Face attribute recognition method and system based on parallel sharing multitask network
CN113705469A (en) * 2021-08-30 2021-11-26 Ping An Technology (Shenzhen) Co., Ltd. Face recognition method and device, electronic equipment and computer readable storage medium
CN113989836A (en) * 2021-10-20 2022-01-28 South China Agricultural University Dairy cow face re-identification method, system, equipment and medium based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhi Weng et al.; "Cattle face recognition based on a two-branch convolutional neural network"; Computers and Electronics in Agriculture; 2022-03-31; Vol. 196; pp. 1-9 *

Similar Documents

Publication Publication Date Title
CN108596277B (en) Vehicle identity recognition method and device and storage medium
Rachmadi et al. Vehicle color recognition using convolutional neural network
CN111191568B (en) Method, device, equipment and medium for identifying flip image
WO2020125057A1 (en) Livestock quantity identification method and apparatus
WO2020038138A1 (en) Sample labeling method and device, and damage category identification method and device
CN112633297B (en) Target object identification method and device, storage medium and electronic device
CN111046910A (en) Image classification, relation network model training and image annotation method and device
CN112307853A (en) Detection method of aerial image, storage medium and electronic device
CN112580657B (en) Self-learning character recognition method
CN111753697B (en) Intelligent pet management system and management method thereof
US20230215125A1 (en) Data identification method and apparatus
CN112613471B (en) Face living body detection method, device and computer readable storage medium
WO2024077781A1 (en) Convolutional neural network model-based image recognition method and apparatus, and terminal device
CN114821658B (en) Face recognition method, operation control device, electronic equipment and storage medium
Thomas et al. Smart car parking system using convolutional neural network
CN113780116A (en) Invoice classification method and device, computer equipment and storage medium
CN113052236A (en) Pneumonia image classification method based on NASN
CN112989869B (en) Optimization method, device, equipment and storage medium of face quality detection model
CN111079617A (en) Poultry identification method and device, readable storage medium and electronic equipment
CN114445788A (en) Vehicle parking detection method and device, terminal equipment and readable storage medium
CN112749731A (en) Bill quantity identification method and system based on deep neural network
CN111461248A (en) Photographic composition line matching method, device, equipment and storage medium
WO2019129293A1 (en) Feature data generation method and apparatus and feature matching method and apparatus
CN110880022A (en) Labeling method, labeling device and storage medium
CN111435451A (en) Method, device, server and storage medium for determining picture category

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant