CN110909618B - Method and device for identifying the identity of a pet

Info

Publication number
CN110909618B
Authority
CN
China
Prior art keywords
pet
image
feature point
accurate positioning
training
Legal status: Active
Application number
CN201911039645.5A
Other languages
Chinese (zh)
Other versions
CN110909618A (en)
Inventor
刘岩 (Liu Yan)
Current Assignee
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Application filed by Taikang Insurance Group Co Ltd filed Critical Taikang Insurance Group Co Ltd
Priority to CN201911039645.5A
Publication of CN110909618A
Application granted
Publication of CN110909618B

Classifications

    • G06V40/168 Feature extraction; Face representation
    • G06F18/2148 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V40/172 Classification, e.g. identification


Abstract

The embodiment of the invention provides a method and a device for identifying the identity of a pet, wherein the method comprises the following steps: determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training; determining a pet face image area in the pet image area according to a face detection classifier obtained through pre-training; performing image alignment on the pet face image area to obtain an aligned image; and extracting feature vectors from the aligned image and identifying the pet in the image to be identified according to the feature vectors to obtain the identity information of the pet. The method provided by the embodiment of the invention can accurately identify the identity of the pet and solves the problem that the identity of a pet cannot be accurately identified when the insurance industry insures pets.

Description

Method and device for identifying identity of pet
Technical Field
The invention relates to the field of pet identity recognition, in particular to a pet identity recognition method and device.
Background
Most current animal identification methods remain at the level of identifying different species of animals, i.e., recognizing whether an animal is a cat, a dog, or another species. Techniques for identifying different individuals of the same species are not yet mature. With the gradual increase in the number of pets and the growing importance people attach to them, pet insurance has emerged as a product developed in the last two years, mainly covering the health and accidents of various pets (such as cats and dogs).
However, the existing techniques for identifying different individuals of the same species have poor accuracy, so pet identity recognition still faces a great challenge.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying the identity of a pet, which are used for solving the problem of poor accuracy in identifying the identity of the pet in the prior art.
According to one aspect of the present invention, there is provided a method of identifying the identity of a pet, the method comprising:
determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training;
determining a pet face image area in the pet image areas according to a face detection classifier obtained through training in advance;
performing image alignment on the pet face image area to obtain an aligned image;
extracting feature vectors in the aligned images, and identifying the pets in the images to be identified according to the feature vectors to obtain identity information of the pets.
Optionally, the step of training to obtain the target image detection network includes:
acquiring specific training data determined for the pets in the images to be identified;
and training the target detection network in a two-classification mode according to the specific training data to obtain a trained target detection network serving as the target image detection network.
Optionally, the step of determining the pet face image region in the pet image region according to the face detection classifier obtained by training in advance includes:
removing the image area of which the area is smaller than a preset area threshold value in the pet image area to obtain a candidate area;
and inputting the candidate region into the face detection classifier, and determining the pet face image region in the candidate region.
Optionally, the step of training the face detection classifier includes:
acquiring a plurality of training images containing pet images and training images not containing the pet images;
acquiring Haar characteristics of each training image;
constructing a training sample feature set according to the Haar features and whether the training image contains a pet face image or not;
and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
Optionally, the step of performing image alignment on the pet face image area to obtain an aligned image includes:
obtaining coarse positioning feature points of the positions of the facial organs of the pet in the facial image area of the pet according to a feature point detection network obtained through pre-training;
Dividing the pet facial image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
revising the characteristic point detection network according to the characteristic points of each local area to obtain a plurality of revised characteristic point detection networks, wherein each revised characteristic point detection network corresponds to one local area;
detecting a network according to each local area and the corresponding revised characteristic points to obtain accurate positioning characteristic points corresponding to the rough positioning characteristic points;
reversely mapping the plurality of local areas back to the pet face image area, and determining the position relation between the accurate positioning feature points;
and aligning the pet face image area according to the position relation among the accurate positioning feature points to obtain an aligned image.
Optionally, the coarse positioning feature points of the pet facial organ positions at least include: a rough positioning feature point on the left eye position, a rough positioning feature point on the right eye position, a rough positioning feature point on the nose tip position, three rough positioning feature points on the mouth position and two rough positioning feature points on the ear root position.
Optionally, the step of aligning the pet face image area according to the positional relationship between the precisely located feature points to obtain an aligned image includes:
determining whether the pet face image areas are aligned according to the position relation between the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
Optionally, the step of selecting at least three accurate positioning feature points and calculating the expected feature point corresponding to each selected accurate positioning feature point according to the relationship between the organ positions where the selected accurate positioning feature points are located includes:
selecting a first accurate positioning feature point on a left eye position, a second accurate positioning feature point on a right eye position and a third accurate positioning feature point on a nose tip position in the pet face image area;
Calculating to obtain a first distance between the first accurate positioning feature point and the second accurate positioning feature point, wherein an included angle between a connecting line of the first accurate positioning feature point and the second accurate positioning feature point and a horizontal straight line is a rotation angle, and a second distance is a distance between a midpoint of the connecting line of the first accurate positioning feature point and the second accurate positioning feature point and the third accurate positioning feature point;
according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point and a third expected feature point corresponding to the third accurate positioning feature point are obtained, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point, the first expected feature point is located on the right side of the second expected feature point on the same horizontal straight line, the distance between the first expected feature point and the second expected feature point is equal to the first distance, the third expected feature point is located below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal straight line.
Optionally, the step of extracting the feature vector in the aligned image includes:
converting the size of the alignment image into a preset size;
and inputting the aligned images with the preset sizes into a preset residual error network model to obtain the multidimensional feature vector.
Optionally, the step of identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet includes:
calculating the distance between each identity vector in a preset pet identity feature library and the feature vector to obtain a plurality of identity distance values;
if the minimum value among the identity distance values is smaller than a preset threshold value, the identity of the pet in the image to be identified is an enrolled identity (i.e., an identity already stored in the library), wherein the enrolled identity is the identity information indicated by the identity vector corresponding to that minimum value.
According to yet another aspect of the present invention there is provided a pet identification device, the device comprising:
the first region confirmation module is used for determining a pet image region in the image to be recognized according to a target image detection network obtained through pre-training;
the second region confirmation module is used for determining a pet face image region in the pet image region according to a face detection classifier obtained through training in advance;
The alignment module is used for carrying out image alignment on the pet face image area to obtain an aligned image;
the identification module is used for extracting the feature vector in the aligned image, and identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet.
Optionally, the second area confirmation module includes:
the screening unit is used for removing the image areas with the areas smaller than a preset area threshold value in the pet image areas to obtain candidate areas;
and a region confirmation unit for inputting the candidate region into the face detection classifier and determining the pet face image region in the candidate region.
Optionally, the alignment module includes:
the first characteristic point unit is used for acquiring coarse positioning characteristic points of the position of the facial organ of the pet in the pet facial image area according to the characteristic point detection network obtained through pre-training;
a segmentation unit for segmenting the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
the network revising unit is used for revising the characteristic point detection network according to the characteristic point of each local area to obtain a plurality of revised characteristic point detection networks, wherein each revised characteristic point detection network corresponds to one local area;
The second feature point unit is used for detecting a network according to each local area and the corresponding revised feature point to obtain a precise positioning feature point corresponding to the rough positioning feature point;
the reverse mapping unit is used for reversely mapping the plurality of local areas back to the pet face image area and determining the positional relationship between the accurate positioning feature points;
and the alignment unit is used for aligning the pet face image area according to the position relation among the accurate positioning feature points to obtain an aligned image.
Optionally, the coarse positioning feature points of the pet facial organ positions at least include: a rough positioning feature point on the left eye position, a rough positioning feature point on the right eye position, a rough positioning feature point on the nose tip position, three rough positioning feature points on the mouth position and two rough positioning feature points on the ear root position.
Optionally, the alignment unit is specifically configured to determine whether the pet face image area is aligned according to a positional relationship between the precisely located feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points;
Determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
Optionally, the identification module includes:
a conversion unit configured to convert the size of the alignment image into a preset size;
the extraction unit is used for inputting the aligned images with preset sizes into a preset residual error network model to obtain the multidimensional feature vector.
Optionally, the identification module includes:
the computing unit is used for computing the distance between each identity vector and the feature vector in the preset pet identity feature library to obtain a plurality of identity distance values;
the identification unit is used for determining the identity of the pet in the image to be identified as an enrolled identity if the minimum value among the identity distance values is smaller than a preset threshold value, wherein the enrolled identity is the identity information indicated by the identity vector corresponding to that minimum value.
According to a further aspect of the present invention there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the method of identifying the identity of a pet as described above when the computer program is executed.
According to a further aspect of the present invention there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps in the method of identifying the identity of a pet as described above.
In the embodiment of the invention, a pet image area in the image to be identified is first determined according to the target image detection network obtained by pre-training; a pet face image area is then determined within the pet image area according to the face detection classifier obtained by pre-training. By gradually narrowing the target area in this way, the pet face image area is located, the interference of surrounding pixel points is reduced, and the accuracy of locating the pet face image area is improved. After the pet face image area in the pet image area is determined, image alignment is performed on the pet face image area to obtain an aligned image; feature vectors are then extracted from the aligned image, and the pet in the image to be identified is identified according to the feature vectors to obtain the identity information of the pet. The method can accurately identify the pet in the image to be identified and obtain its identity information, solving the problem that the identity of a pet cannot be accurately identified when the insurance industry insures pets.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of a method for identifying identity of a pet according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps for training a face detection classifier according to an embodiment of the present invention;
FIG. 3 is a block diagram of a device for identifying identity of a pet according to an embodiment of the present invention;
fig. 4 is a block diagram of a second area confirmation module according to an embodiment of the present invention;
FIG. 5 is a block diagram of an alignment module according to an embodiment of the present invention;
FIG. 6 is a block diagram illustrating an identification module according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The method for identifying the identity of a pet provided by the invention is suitable for most pets, such as cats and dogs, but is not limited to these. When the method is applied to one type of pet, it can distinguish different individuals of that type. For example, Persian cats look very similar to the human eye, and some closely similar Persian cats cannot be told apart by the human eye at all, yet the method provided by the invention can accurately identify different individual Persian cats. In the following embodiments the pet is exemplified by a pet cat, but the method is not limited thereto.
Referring to fig. 1, an embodiment of the present invention provides a method for identifying a pet identity, where the method includes:
step 101: determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training;
it should be noted that the target image detection network may identify pet image regions in the image. For example, after an image with a pet cat is detected by the target image detection network, the pet cat in the image can be selected by a rectangular frame, so that the pet image area is determined to be the part selected by the rectangular frame. In order to accurately identify the pet image region in the image to be identified, the target detection network used for detecting the target needs to be trained, and the trained target detection network is the target image detection network. The target detection network may be a conventional YOLO-V3 network, a ResNet network, a GoogLeNet network, or the like.
Specifically, the step of training to obtain the target image detection network includes:
acquiring specific training data determined for pets in images to be identified;
and training the target detection network in a two-classification mode according to the specific training data to obtain a trained target detection network serving as a target image detection network.
The specific training data is determined for the kind of pet in the image to be identified. For example, when the method provided by the embodiment of the invention identifies pet cats, a preset number of types of common pet cats can be grouped into one class and all other pets into a second class, and the specific training data is then collected for these two classes. The preset number can be set as needed, for example 42, i.e., 42 common types of pet cats, but is not limited thereto.
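Purely as an illustration of how such two-class training data might be organized, the sketch below writes YOLO-style label files with class 0 for a preset list of common pet cat types and class 1 for all other pets; the type list, file layout, label format, and function names are assumptions rather than part of the embodiment.

    # Illustrative sketch only: organizing two-class training labels for the target
    # detection network (class 0 = common pet cat, class 1 = other pet).
    # The type list, file layout and YOLO-style label format are assumptions.
    from pathlib import Path

    COMMON_CAT_TYPES = {"persian", "ragdoll", "british_shorthair"}  # stand-in for the 42 common types

    def make_detection_label(pet_type: str, box: tuple) -> str:
        """Return one label line: class_id cx cy w h (normalized box coordinates)."""
        class_id = 0 if pet_type.lower() in COMMON_CAT_TYPES else 1
        cx, cy, w, h = box
        return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

    def write_labels(annotations: dict, out_dir: str) -> None:
        """annotations: image file name -> (pet type, normalized bounding box)."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for image_name, (pet_type, box) in annotations.items():
            (out / (Path(image_name).stem + ".txt")).write_text(make_detection_label(pet_type, box))

    if __name__ == "__main__":
        write_labels({"cat_001.jpg": ("persian", (0.5, 0.5, 0.4, 0.6))}, "labels")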
Step 102: determining a pet face image area in the pet image area according to a face detection classifier obtained through pre-training;
it should be noted that the face detection classifier may select the pet face image region within the pet image region with a bounding frame. The pet's face occupies the main part of the framed pet face image region.
Since there may be multiple pet image areas, multiple pet face image areas may be determined. To reduce interference, the smaller areas can be filtered out and only the larger ones retained. Specifically, the step of determining the pet face image region in the pet image region according to the face detection classifier obtained by training in advance includes:
Removing the image area of which the area is smaller than a preset area threshold value in the pet image area to obtain a candidate area;
the candidate regions are input into a face detection classifier, and pet face image regions in the candidate regions are determined.
The preset area threshold may be set as needed, for example, to a quarter of the area of the image to be recognized, but is not limited thereto.
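A minimal sketch of this candidate-region filtering is given below, assuming the detected pet image areas are axis-aligned integer rectangles, the quarter-area threshold from the example above, and a face detection classifier with an OpenCV-style detectMultiScale interface; the function names are assumptions.

    # Sketch: discard pet image areas smaller than a preset fraction of the image,
    # then run the face detection classifier only on the remaining candidate regions.
    def select_candidate_regions(pet_boxes, image_width, image_height, min_area_ratio=0.25):
        area_threshold = min_area_ratio * image_width * image_height
        return [box for box in pet_boxes if box[2] * box[3] >= area_threshold]

    def detect_pet_faces(pet_boxes, image, face_classifier):
        """Return pet face rectangles mapped back to the coordinates of the full image."""
        candidates = select_candidate_regions(pet_boxes, image.shape[1], image.shape[0])
        faces = []
        for (x, y, w, h) in candidates:
            roi = image[y:y + h, x:x + w]
            for (fx, fy, fw, fh) in face_classifier.detectMultiScale(roi):
                faces.append((x + fx, y + fy, fw, fh))
        return faces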
Step 103: performing image alignment on the pet face image area to obtain an aligned image;
it should be noted that the pet face shown in the pet face image area may be a side face or otherwise misaligned, so features extracted directly from such images may not accurately represent the pet. The pet face image area can therefore be aligned using image alignment techniques.
Step 104: extracting feature vectors in the alignment images, and identifying pets in the images to be identified according to the feature vectors to obtain identity information of the pets.
It should be noted that a database may be preset, in which feature vectors of known pet identity information are stored, that is, the database stores pet identity information of a plurality of pets, and each piece of pet identity information corresponds to a feature vector, and preferably, the feature vector is extracted in the same manner as the feature vector in step 104.
When the pets in the image to be identified are identified through the extracted feature vectors, the obtained feature vectors are matched with the feature vectors in the database, and if the matched first feature vectors are obtained, the identity information of the pets in the image to be identified can be determined to be the identity of the pets corresponding to the first feature vectors; if the matched feature vector is not obtained, the identity information of the pet in the image to be identified can be determined to be not known in the database.
In the embodiment of the invention, a pet image area in the image to be identified is first determined according to the target image detection network obtained by pre-training; a pet face image area is then determined within the pet image area according to the face detection classifier obtained by pre-training. By gradually narrowing the target area in this way, the pet face image area is located, the interference of surrounding pixel points is reduced, and the accuracy of locating the pet face image area is improved. After the pet face image area in the pet image area is determined, image alignment is performed on the pet face image area to obtain an aligned image; feature vectors are then extracted from the aligned image, and the pet in the image to be identified is identified according to the feature vectors to obtain the identity information of the pet. The method can accurately identify the pet in the image to be identified and obtain its identity information, solving the problem that the identity of a pet cannot be accurately identified when the insurance industry insures pets.
As shown in fig. 2, in order to obtain a face detection classifier with higher accuracy, in the embodiment of the present invention, the step of training to obtain the face detection classifier includes:
step 201: acquiring a plurality of training images containing pet images and training images not containing the pet images;
it should be noted that, when the identification of the pet is directed to the pet cat, the training image includes a training image including the pet cat and a training image not including the pet cat.
Step 202: acquiring Haar characteristics of each training image;
it should be noted that Haar features include four types of features: edge features, linear features, center features, and diagonal features. One or more types of features may be acquired. For example, but not limited to, 4 edge features, 4 linear features, 2 center features, 4 diagonal features, and a total of 14 features may be acquired.
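For illustration, the sketch below evaluates one two-rectangle Haar edge feature with an integral image; the particular feature geometry and function names are examples, not the exact feature set used in the embodiment.

    # Sketch: value of a two-rectangle Haar edge feature (left half minus right half)
    # computed with an integral image over a grayscale patch.
    import numpy as np

    def integral_image(gray: np.ndarray) -> np.ndarray:
        return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
        """Sum of pixels in the rectangle [x, x+w) x [y, y+h) from the integral image."""
        a = ii[y + h - 1, x + w - 1]
        b = ii[y - 1, x + w - 1] if y > 0 else 0.0
        c = ii[y + h - 1, x - 1] if x > 0 else 0.0
        d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
        return float(a - b - c + d)

    def haar_edge_feature(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
        half = w // 2
        return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, w - half, h)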
Step 203: constructing a training sample feature set according to Haar features and whether the training image contains a pet face image;
step 204: and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
It should be noted that, for ease of understanding, the embodiment of the present invention only uses 14 features as examples to show how the face detection classifier is obtained. And the pet in this embodiment is exemplified by a pet cat. M samples, i.e., m training images, are first acquired, and then the selected 14 features are acquired for each training image. And constructing an initial weak classifier for each feature, constructing a training sample feature set, and training by an AdaBoost method to obtain a weak classifier. Finally, 14 weak classifiers are obtained, and then a strong classifier is generated according to the obtained 14 weak classifiers. The strong classifier is the face detection classifier.
The training steps for one feature for embodiments of the present invention are as follows:
assume that X represents the feature set of the sample data of the m samples and Y represents the label set of the cat face sample features; since whether an image contains a cat face is a binary classification problem, Y = {-1, 1}. Let S = {(x_i, y_i) | i = 1, 2, ..., m} be the training sample feature set, where x_i ∈ X and y_i ∈ Y.
The weights of the m samples are initialized as D_1(i) = 1/m, where D_t(i) denotes the weight assigned to sample (x_i, y_i) of the training sample feature set in the t-th training round. The sample weights are updated once per round: after the weak classifier of the t-th round, h_t, is obtained by training on the training sample feature set S, the sample weights are updated before the next round according to the following rule.
The sum of the weights of the misclassified samples is computed as
    ε_t = Σ_{i: h_t(x_i) ≠ y_i} D_t(i)
where ε_t is the sum of the weights of the misclassified samples, h_t is the weak classifier of the t-th round, and x_i and y_i are the x value and y value of the i-th sample in the training sample feature set. Let
    α_t = (1/2) · ln((1 - ε_t) / ε_t).
The weights of the training samples for the (t+1)-th round are then
    D_{t+1}(i) = D_t(i) · exp(-α_t · y_i · h_t(x_i)) / Z_t
where Z_t is a normalization factor used to ensure that Σ_i D_{t+1}(i) = 1.
A weak classifier corresponding to each feature can be obtained with the above method. During training, a fixed number of rounds can be used, i.e., a preset number of training rounds is set and training stops when that number is reached. Alternatively, the error can be monitored during training and training stopped when the error falls below a preset threshold.
Then, the strong classifier ultimately generated from all the weak classifiers is
    H(x_i) = 1 if V(x_i) ≥ Q, and -1 otherwise
where
    V(x_i) = Σ_{t=1}^{n} α_t · h_t(x_i)
is the accumulated sum of the weighted outputs of all weak classifiers for sample x_i, n is the number of weak classifiers (i.e., the number of selected Haar features), Q is the classification threshold, and h_t is the weak classifier of the t-th round.
The binary classification threshold Q is computed as follows. Assume that the set of k positive samples in the training sample feature set is denoted P and the set of f negative samples is denoted N. Let Pv be the set of values obtained by passing all positive samples in P through V(x_i), and Nv the set of values obtained by passing all samples in N through V(x_i). Then the mean of Pv is
    mean(Pv) = (1/k) · Σ_{x_i ∈ P} V(x_i),
the mean of Nv is
    mean(Nv) = (1/f) · Σ_{x_i ∈ N} V(x_i),
and the threshold Q is
    Q = (mean(Pv) + mean(Nv)) / 2.
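As a rough illustration of the procedure above, the sketch below runs standard AdaBoost with single-feature decision stumps over a Haar feature matrix, including the weight update, the accumulated score V(x) and the midpoint threshold Q. It is a simplified stand-in for the per-feature weak classifier construction described in the embodiment; the stump search and all function names are assumptions.

    # Sketch: AdaBoost over Haar features with decision stumps, mirroring the weight
    # update, accumulated score V(x) and threshold Q described above (simplified).
    import numpy as np

    def best_stump(X, y, D):
        """Pick the single-feature threshold classifier with the lowest weighted error."""
        best = (0, 0.0, 1, None, np.inf)
        for f in range(X.shape[1]):
            for thr in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                    err = D[pred != y].sum()
                    if err < best[4]:
                        best = (f, thr, pol, pred, err)
        return best[:4]

    def train_adaboost(X, y, n_rounds):
        """X: (m, n_features) Haar feature matrix; y: labels in {-1, +1}."""
        m = len(y)
        D = np.full(m, 1.0 / m)                            # D_1(i) = 1/m
        stumps = []
        for _ in range(n_rounds):
            feat, thr, pol, pred = best_stump(X, y, D)     # weak classifier h_t
            eps = D[pred != y].sum()                       # epsilon_t
            alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-10))
            D *= np.exp(-alpha * y * pred)                 # weight update
            D /= D.sum()                                   # normalization factor Z_t
            stumps.append((feat, thr, pol, alpha))
        return stumps

    def score(stumps, x):
        """V(x): accumulated weighted outputs of all weak classifiers."""
        return sum(a * (1 if p * (x[f] - t) >= 0 else -1) for f, t, p, a in stumps)

    def classification_threshold(stumps, X_pos, X_neg):
        """Q: midpoint between the mean scores of positive and negative samples."""
        vp = np.mean([score(stumps, x) for x in X_pos])
        vn = np.mean([score(stumps, x) for x in X_neg])
        return 0.5 * (vp + vn)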
in order to improve accuracy of image alignment, in the embodiment of the present invention, the step of obtaining an aligned image by performing image alignment on a pet face image area includes:
according to the feature point detection network obtained through pre-training, coarse positioning feature points of the positions of the facial organs of the pet in the pet facial image area are obtained;
dividing the pet facial image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
Revising the characteristic point detection network according to the characteristic point of each local area to obtain a plurality of revised characteristic point detection networks, wherein each revised characteristic point detection network corresponds to one local area;
according to each local area and the corresponding revised characteristic point detection network, obtaining a precise positioning characteristic point corresponding to the rough positioning characteristic point;
reversely mapping the plurality of local areas back to the pet face image area, and determining the position relation between the accurate positioning feature points;
and aligning the pet face image area according to the position relation among the accurate positioning feature points to obtain an aligned image.
It should be noted that locating the feature points twice improves the accuracy of the finally obtained feature points. When the coarse positioning feature points are determined, a softmax layer with a preset dimension is cascaded behind the last fully connected layer of a ResNet network, the modified ResNet network is trained on annotated training data, and the trained network is used to determine the coarse positioning feature points. The coarse positioning feature points of the pet facial organ positions at least comprise: one coarse positioning feature point at the left eye position, one at the right eye position, one at the nose tip position, three at the mouth position, and two at the ear root positions.
When the pet face image area is divided into a plurality of local areas, it is preferably divided into 5 local areas, namely: a first local area containing only the coarse positioning feature point at the nose tip position; a second local area containing only the three coarse positioning feature points at the mouth position; a third local area containing only the coarse positioning feature point at the right eye position; a fourth local area containing only the coarse positioning feature point at the left eye position; and a fifth local area containing only the two coarse positioning feature points at the ear root positions. The residual network trained for determining the coarse positioning feature points is then revised according to the number of feature points in each local area, and the revised residual network is used to determine the accurately positioned feature points. For convenience of description, the residual network trained for determining the coarse positioning feature points is also called a positioning network.
Specifically, when determining the accurate positioning feature points corresponding to the coarse positioning feature points in the first local area:
and adjusting the input of the positioning network according to the size of the first local area, defining the output of the positioning network as 1 characteristic point, 2-dimensional vectors, and revising the softmax of the output layer of the positioning network as 2-dimensional output. And retraining the adjusted positioning network, inputting the first local area into the retrained positioning network, and obtaining the accurate positioning characteristic points corresponding to the coarse positioning characteristic points in the corresponding first local area.
Specifically, when determining the accurate positioning feature points corresponding to the coarse positioning feature points in the second local area:
the input of the positioning network is adjusted according to the size of the second local area, then the positioning network output is defined as 3 feature points, a 6-dimensional vector, and then the positioning network output layer softmax is revised as a 6-dimensional output. And retraining the adjusted positioning network, and inputting the second local area into the retrained positioning network to obtain the accurate positioning characteristic points corresponding to the coarse positioning characteristic points in the second local area.
Specifically, when determining the accurate positioning feature point corresponding to the coarse positioning feature point in the third local area:
and adjusting the input of the positioning network according to the size of the third local area, defining the output of the positioning network as 1 feature point, 2-dimensional vectors, and revising the softmax of the output layer of the positioning network as 2-dimensional output. And inputting the third local area into the retrained positioning network to obtain the accurate positioning characteristic points corresponding to the rough positioning characteristic points in the third local area.
Specifically, when determining the accurate positioning feature point corresponding to the coarse positioning feature point in the fourth local area:
and adjusting the input of the positioning network according to the size of the fourth local area, defining the output of the positioning network as 1 feature point, 2-dimensional vectors, and revising the softmax of the output layer of the positioning network as 2-dimensional output. Inputting the fourth local area into the retrained positioning network to obtain the accurate positioning characteristic points corresponding to the rough positioning characteristic points in the fourth local area.
Specifically, when determining the accurate positioning feature point corresponding to the coarse positioning feature point in the fifth local area:
the input of the positioning network is adjusted according to the size of the fifth local area, then the positioning network output is defined as 2 feature points, 4-dimensional vectors, and then the positioning network output layer softmax is revised as 4-dimensional output. And inputting the fifth local area into the retrained positioning network to obtain the accurate positioning characteristic points corresponding to the coarse positioning characteristic points in the fifth local area.
Based on the above embodiments of the present invention, in the embodiment of the present invention, the step of aligning the pet face image area according to the positional relationship between the precisely located feature points, to obtain an aligned image includes:
determining whether the pet face image areas are aligned according to the position relation between the accurate positioning feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
It should be noted that, when judging whether the pet face image area is aligned, it is possible to judge whether the included angle formed by the nose tip and the two eye connecting lines respectively can be halved by the straight line passing through the nose tip and vertical to the horizontal line or whether the included angle between the two eye connecting lines and the horizontal line is smaller than the preset angle; if the images are aligned, the feature vectors in the aligned images are directly extracted, and the pets in the images to be identified are identified according to the feature vectors, so that the identity information of the pets is obtained.
The accurate positioning of the feature points and the desired feature points corresponding thereto refer to the points of a certain position of the pet's face on the unaligned image and the points on the aligned image, respectively. The step of selecting at least three accurate positioning feature points and calculating the expected feature point corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points comprises the following steps:
selecting a first accurate positioning feature point on a left eye position, a second accurate positioning feature point on a right eye position and a third accurate positioning feature point on a nose tip position in the pet face image area;
calculating to obtain a first distance between a first accurate positioning feature point and a second accurate positioning feature point, wherein an included angle between a connecting line of the first accurate positioning feature point and the second accurate positioning feature point and a horizontal straight line is a rotation angle, and a second distance is a distance between a midpoint of the connecting line of the first accurate positioning feature point and the second accurate positioning feature point and a third accurate positioning feature point;
According to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point and a third expected feature point corresponding to the third accurate positioning feature point are obtained, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point, the first expected feature point is positioned on the right side of the second expected feature point on the same horizontal line, the distance between the first expected feature point and the second expected feature point is equal to the first distance, the third expected feature point is positioned below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal line.
Taking 3 accurate positioning feature points and corresponding expected feature points as examples, the steps for solving the affine transformation matrix are as follows:
assume that the affine transformation matrix is
    M = | a_00  a_01  b_00 |
        | a_10  a_11  b_10 |
Six feature point coordinates are now known, namely the 3 precisely located feature points described above and the corresponding expected feature points.
Assume the input point is the first accurate positioning feature point B = (x, y) and the corresponding first expected feature point is B1 = (x_1, y_1). The affine transformation of B then gives:
    a_00·x + a_01·y + b_00 = x_1
    a_10·x + a_11·y + b_10 = y_1
Similarly, four more equations can be obtained from the other four feature points; the six equations together determine the six unknowns a_00, a_01, b_00, a_10, a_11 and b_10 in the affine transformation matrix. Performing affine transformation on the pixel points of the pet face image area with the known affine transformation matrix to obtain the aligned image is a conventional method and is not repeated here.
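For illustration, the sketch below performs this alignment with OpenCV: cv2.getAffineTransform solves the same six-equation system from the three point pairs, and cv2.warpAffine applies the resulting 2x3 matrix to the face region. The function name and argument layout are assumptions.

    # Sketch: solve the affine transformation from the three precisely located
    # feature points and their expected positions, then warp the face region.
    import cv2
    import numpy as np

    def align_face(face_img, located_pts, desired_pts):
        """located_pts / desired_pts: three (x, y) pairs, e.g. right eye, left eye, nose tip."""
        src = np.float32(located_pts)
        dst = np.float32(desired_pts)
        M = cv2.getAffineTransform(src, dst)     # 2x3 matrix [a00 a01 b00; a10 a11 b10]
        h, w = face_img.shape[:2]
        return cv2.warpAffine(face_img, M, (w, h))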
Based on the above embodiments of the present invention, in the embodiment of the present invention, the step of extracting feature vectors in the aligned image includes:
converting the size of the alignment image into a preset size;
and inputting the aligned images with the preset sizes into a preset residual error network model to obtain the multidimensional feature vector.
It should be noted that converting the aligned image to the preset size allows it to be fed directly into the preset residual network model. The residual network model can be obtained by training a ResNet network; the dimension of the feature vector can be defined as needed by removing the last classification layer of the ResNet network and using the fully connected layer as the output. For example, the preset residual network model may output a 1000-dimensional feature vector.
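A minimal sketch of this feature extraction step is shown below, assuming a torchvision ResNet-50 backbone whose final classification layer is replaced by an identity mapping; the 224x224 preset size and the resulting 2048-dimensional vector (rather than the 1000-dimensional example above) are illustrative assumptions.

    # Sketch: resize the aligned image to a preset size and extract a feature vector
    # from a ResNet whose classification layer has been removed.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),      # convert the aligned image to the preset size
        transforms.ToTensor(),
    ])

    backbone = models.resnet50(weights=None)
    backbone.fc = nn.Identity()             # drop the final classification layer
    backbone.eval()

    def extract_feature_vector(aligned_img: Image.Image) -> torch.Tensor:
        with torch.no_grad():
            return backbone(preprocess(aligned_img).unsqueeze(0)).squeeze(0)   # 2048-d vector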
Based on the above embodiments of the present invention, in the embodiment of the present invention, the step of identifying the pet in the image to be identified according to the feature vector to obtain the identity information of the pet includes:
Calculating the distance between each identity vector and the feature vector in a preset pet identity feature library to obtain a plurality of identity distance values;
if the minimum value among the identity distance values is smaller than the preset threshold value, the identity of the pet in the image to be identified is an enrolled identity, where the enrolled identity is the identity information indicated by the identity vector corresponding to that minimum value.
It should be noted that the distance between an identity vector and the feature vector may be a Euclidean distance, a cosine distance, or the like.
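The sketch below illustrates this matching step, assuming the pet identity feature library is a simple in-memory mapping from identity information to identity vectors; the Euclidean distance and the threshold value are placeholders.

    # Sketch: compare the query feature vector with every identity vector in the
    # feature library and return the enrolled identity with the smallest distance,
    # if that distance is below a preset threshold.
    import numpy as np

    def identify(feature_vec, identity_library, threshold=0.8):
        """identity_library: dict mapping identity information -> stored identity vector."""
        best_id, best_dist = None, np.inf
        for identity, id_vec in identity_library.items():
            dist = np.linalg.norm(feature_vec - id_vec)   # Euclidean; cosine distance also works
            if dist < best_dist:
                best_id, best_dist = identity, dist
        return best_id if best_dist < threshold else None  # None: not an enrolled identity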
Having described the method for identifying the identity of the pet provided by the embodiment of the invention, the device for identifying the identity of the pet provided by the embodiment of the invention is described below with reference to the accompanying drawings.
Referring to fig. 3, the embodiment of the invention further provides a device for identifying the identity of a pet, which comprises:
a first region confirmation module 31, configured to determine a pet image region in an image to be identified according to a target image detection network obtained by training in advance;
a second region confirmation module 32, configured to determine a pet face image region in the pet image region according to the face detection classifier obtained by training in advance;
an alignment module 33, configured to perform image alignment on the pet face image area to obtain an aligned image;
The identifying module 34 is configured to extract the feature vector in the aligned image, and identify the pet in the image to be identified according to the feature vector, so as to obtain the identity information of the pet.
It should be noted that the step of training the first area confirmation module 31 to obtain the target image detection network includes:
acquiring specific training data determined for pets in images to be identified;
and training the target detection network in a two-classification mode according to the specific training data to obtain a trained target detection network serving as a target image detection network.
Referring to fig. 4, the second area confirmation module 32 includes:
a screening unit 321, configured to remove an image area in the pet image area that is smaller than a preset area threshold, to obtain a candidate area;
the region confirming unit 322 is configured to input the candidate region into the face detection classifier, and determine the pet face image region in the candidate region.
The step of training to obtain the face detection classifier comprises the following steps:
acquiring a plurality of training images containing pet images and training images not containing the pet images; acquiring Haar characteristics of each training image; constructing a training sample feature set according to Haar features and whether the training image contains a pet face image; and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
Referring to fig. 5, the alignment module 33 includes:
a first feature point unit 331, configured to obtain coarse positioning feature points of the position of the facial organ of the pet in the image area of the pet face according to the feature point detection network obtained by training in advance;
a segmentation unit 332 for segmenting the pet face image region into a plurality of local regions, wherein each local region comprises at least one coarse positioning feature point;
a network revising unit 333, configured to revise the feature point detection network according to the feature point of each local area, to obtain a plurality of revised feature point detection networks, where each revised feature point detection network corresponds to one local area;
a second feature point unit 334, configured to obtain a precise positioning feature point corresponding to the coarse positioning feature point according to each local area and the corresponding revised feature point detection network;
a reverse mapping unit 335 for reversely mapping the plurality of local areas back to the pet face image area and determining the positional relationship between the precisely located feature points;
an alignment unit 336 for aligning the pet face image region according to the positional relationship between the precisely located feature points, to obtain an aligned image.
Wherein, the rough positioning characteristic points of the facial organ position of the pet at least comprise: a rough positioning feature point on the left eye position, a rough positioning feature point on the right eye position, a rough positioning feature point on the nose tip position, three rough positioning feature points on the mouth position and two rough positioning feature points on the ear root position.
An alignment unit 336, specifically configured to determine whether the pet face image area is aligned according to the positional relationship between the precisely located feature points;
if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points;
determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points;
and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
Wherein the alignment unit 336 is specifically configured to select a first accurate positioning feature point at a left eye position, a second accurate positioning feature point at a right eye position, and a third accurate positioning feature point at a nose tip position in the pet face image region;
Calculating to obtain a first distance between a first accurate positioning feature point and a second accurate positioning feature point, wherein an included angle between a connecting line of the first accurate positioning feature point and the second accurate positioning feature point and a horizontal straight line is a rotation angle, and a second distance is a distance between a midpoint of the connecting line of the first accurate positioning feature point and the second accurate positioning feature point and a third accurate positioning feature point;
according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point and a third expected feature point corresponding to the third accurate positioning feature point are obtained, wherein the second accurate positioning feature point simultaneously serves as the second expected feature point, the first expected feature point is positioned on the right side of the second expected feature point on the same horizontal line, the distance between the first expected feature point and the second expected feature point is equal to the first distance, the third expected feature point is positioned below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal line.
Referring to fig. 6, the identification module 34 includes:
A conversion unit 341, configured to convert the size of the alignment image into a preset size;
the extracting unit 342 is configured to input the aligned image with the preset size into a preset residual network model, so as to obtain a multidimensional feature vector.
A calculating unit 343, configured to calculate a distance between each identity vector and a feature vector in a preset pet identity feature library, so as to obtain a plurality of identity distance values;
the identifying unit 344 is configured to determine the identity of the pet in the image to be identified as an enrolled identity if the minimum value among the identity distance values is smaller than the preset threshold value, where the enrolled identity is the identity information indicated by the identity vector corresponding to that minimum value.
The pet identity recognition device provided by the embodiment of the invention can implement each process in the embodiments of the pet identity recognition method; to avoid repetition, details are not repeated here.
In the embodiment of the invention, a pet image area in the image to be identified is first determined according to the target image detection network obtained by training; a pet face image area is then determined within the pet image area according to the face detection classifier obtained by training in advance. By gradually narrowing the target area in this way, the pet face image area is located, interference from surrounding pixels is reduced, and the accuracy of locating the pet face image area is improved. After the pet face image area is determined, image alignment is performed on it to obtain an aligned image; the feature vector in the aligned image is then extracted, and the pet in the image to be identified is identified according to the feature vector to obtain the identity information of the pet. The method can accurately identify the pet in the image to be identified and obtain its identity information, solving the problem that the identity of a pet cannot be accurately verified when the insurance industry insures pets.
In another aspect, an embodiment of the invention further provides an electronic device, which includes a memory, a processor, a bus, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the above pet identity recognition method when executing the program.
For example, fig. 7 shows a schematic diagram of the physical structure of an electronic device.
As shown in fig. 7, the electronic device may include: a processor 1010, a communication interface (Communications Interface) 1020, a memory 1030, and a communication bus 1040, where the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. The processor 1010 may call logic instructions in the memory 1030 to perform the following method:
determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained through training in advance;
performing image alignment on the pet face image area to obtain an aligned image;
extracting feature vectors in the aligned images, and identifying the pets in the images to be identified according to the feature vectors to obtain identity information of the pets.
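The four steps above can be tied together in a single control-flow sketch. The following Python function is only illustrative: the helper callables (pet_detector, face_classifier, face_aligner, feature_net, identity_matcher) are placeholders for the target image detection network, the face detection classifier, the alignment procedure, the residual network, and the identity-library lookup described in the embodiments, and the function name recognize_pet is an assumption.

def recognize_pet(image, pet_detector, face_classifier, face_aligner,
                  feature_net, identity_matcher):
    pet_region = pet_detector(image)            # determine the pet image area
    face_region = face_classifier(pet_region)   # determine the pet face image area
    aligned = face_aligner(face_region)         # image alignment
    feature_vector = feature_net(aligned)       # extract the feature vector
    return identity_matcher(feature_vector)     # identity information of the pet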
Further, the logic instructions in the memory 1030 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In still another aspect, an embodiment of the present invention further provides a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the pet identity recognition method provided in the foregoing embodiments, for example including:
Determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained through training in advance;
performing image alignment on the pet face image area to obtain an aligned image;
extracting feature vectors in the aligned images, and identifying the pets in the images to be identified according to the feature vectors to obtain identity information of the pets.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the above technical solution, in essence or the part contributing to the prior art, may be embodied in the form of a software product that may be stored in a computer readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and that includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. A method for identifying the identity of a pet, the method comprising:
determining a pet image area in an image to be identified according to a target image detection network obtained through pre-training;
determining a pet face image area in the pet image area according to a face detection classifier obtained through training in advance;
performing image alignment on the pet face image area to obtain an aligned image;
extracting feature vectors in the aligned images, and identifying pets in the images to be identified according to the feature vectors to obtain identity information of the pets;
the step of obtaining an aligned image includes:
Obtaining coarse positioning feature points of the positions of the facial organs of the pet in the pet face image area according to a feature point detection network obtained through pre-training; dividing the pet face image area into a plurality of local areas, wherein each local area comprises at least one coarse positioning feature point; revising the feature point detection network according to the feature points of each local area to obtain a plurality of revised feature point detection networks, wherein each revised feature point detection network corresponds to one local area; obtaining, according to each local area and the corresponding revised feature point detection network, accurate positioning feature points corresponding to the coarse positioning feature points; reversely mapping the plurality of local areas back to the pet face image area, and determining the positional relationship between the accurate positioning feature points; and aligning the pet face image area according to the positional relationship between the accurate positioning feature points to obtain an aligned image;
the step of aligning the pet face image area according to the position relation between the accurate positioning feature points to obtain an aligned image comprises the following steps:
determining whether the pet face image area is aligned according to the position relation between the accurate positioning feature points; if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points; determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points; and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
2. The method of claim 1, wherein training the target image detection network comprises:
acquiring specific training data determined for the pets in the images to be identified;
and training the target detection network in a binary classification manner according to the specific training data to obtain a trained target detection network serving as the target image detection network.
3. The method of claim 1, wherein the step of determining the pet face image region in the pet image region based on a pre-trained face detection classifier comprises:
removing the image area of which the area is smaller than a preset area threshold value in the pet image area to obtain a candidate area;
and inputting the candidate region into the face detection classifier, and determining the pet face image region in the candidate region.
4. A method according to claim 1 or 3, wherein the step of training the face detection classifier comprises:
acquiring a plurality of training images containing pet images and training images not containing the pet images;
acquiring Haar characteristics of each training image;
Constructing a training sample feature set according to the Haar features and whether the training image contains a pet face image or not;
and training the training sample feature set by adopting an AdaBoost method to obtain the face detection classifier.
5. The method of claim 1, wherein the coarse positioning feature points of the pet's facial organ positions include at least: a coarse positioning feature point at the left eye position, a coarse positioning feature point at the right eye position, a coarse positioning feature point at the nose tip position, three coarse positioning feature points at the mouth position, and two coarse positioning feature points at the ear root positions.
6. The method according to claim 1, wherein the steps of selecting at least three pinpointing feature points and calculating the desired feature point corresponding to each of the selected pinpointing feature points based on the relationship between the organ positions at which the selected pinpointing feature points are located, comprise:
selecting a first accurate positioning feature point on a left eye position, a second accurate positioning feature point on a right eye position and a third accurate positioning feature point on a nose tip position in the pet face image area;
Calculating a first distance between the first accurate positioning feature point and the second accurate positioning feature point, taking an included angle between a line connecting the first accurate positioning feature point and the second accurate positioning feature point and a horizontal straight line as a rotation angle, and taking a distance between a midpoint of the connecting line and the third accurate positioning feature point as a second distance;
obtaining, according to the first distance, the rotation angle and the second distance, a first expected feature point corresponding to the first accurate positioning feature point, a second expected feature point corresponding to the second accurate positioning feature point and a third expected feature point corresponding to the third accurate positioning feature point, wherein the second accurate positioning feature point also serves as the second expected feature point, the first expected feature point is located on the right side of the second expected feature point on the same horizontal straight line, the distance between the first expected feature point and the second expected feature point is equal to the first distance, the third expected feature point is located below the midpoint of the connecting line, the distance between the third expected feature point and the midpoint of the connecting line is equal to the second distance, and the line connecting the third expected feature point and the midpoint of the connecting line is perpendicular to the horizontal straight line.
7. The method of claim 1, wherein the step of extracting feature vectors in the aligned image comprises:
converting the size of the alignment image into a preset size;
and inputting the aligned images with the preset sizes into a preset residual error network model to obtain the multidimensional feature vector.
8. The method according to claim 1, wherein the step of identifying the pet in the image to be identified based on the feature vector, and obtaining the identity information of the pet comprises:
calculating the distance between each identity vector in a preset pet identity feature library and the feature vector to obtain a plurality of identity distance values;
if the minimum value in the identity distance values is smaller than a preset threshold value, the identity of the pet in the image to be identified is a warehousing identity, wherein the warehousing identity is identity information indicated by an identity vector corresponding to the minimum value in the identity distance values.
9. A pet identification device, the device comprising:
the first region confirmation module is used for determining a pet image region in the image to be recognized according to a target image detection network obtained through pre-training;
the second region confirmation module is used for determining a pet face image region in the pet image region according to a face detection classifier obtained through training in advance;
The alignment module is used for carrying out image alignment on the pet face image area to obtain an aligned image;
the identification module is used for extracting the feature vector in the aligned image, identifying the pet in the image to be identified according to the feature vector and obtaining the identity information of the pet;
the step of obtaining an aligned image includes:
obtaining coarse positioning feature points of the positions of the facial organs of the pet in the pet face image area according to a feature point detection network obtained through pre-training; dividing the pet face image area into a plurality of local areas, wherein each local area comprises at least one coarse positioning feature point; revising the feature point detection network according to the feature points of each local area to obtain a plurality of revised feature point detection networks, wherein each revised feature point detection network corresponds to one local area; obtaining, according to each local area and the corresponding revised feature point detection network, accurate positioning feature points corresponding to the coarse positioning feature points; reversely mapping the plurality of local areas back to the pet face image area, and determining the positional relationship between the accurate positioning feature points; and aligning the pet face image area according to the positional relationship between the accurate positioning feature points to obtain an aligned image;
The step of aligning the pet face image area according to the position relation between the accurate positioning feature points to obtain an aligned image comprises the following steps:
determining whether the pet face image area is aligned according to the position relation between the accurate positioning feature points; if not, selecting at least three accurate positioning feature points, and calculating expected feature points corresponding to each selected accurate positioning feature point according to the relation between the organ positions of the selected accurate positioning feature points; determining an affine transformation matrix according to the selected accurate positioning feature points and the expected feature points; and carrying out affine transformation on the pixel points of the pet face image area according to the affine transformation matrix to obtain an aligned image.
10. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, characterized in that the computer program when executed by the processor implements the steps of the method of identifying the identity of a pet as claimed in any one of claims 1 to 8.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method for identifying the identity of a pet as claimed in any one of claims 1 to 8.
CN201911039645.5A 2019-10-29 2019-10-29 Method and device for identifying identity of pet Active CN110909618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911039645.5A CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911039645.5A CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Publications (2)

Publication Number Publication Date
CN110909618A CN110909618A (en) 2020-03-24
CN110909618B true CN110909618B (en) 2023-04-21

Family

ID=69814679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911039645.5A Active CN110909618B (en) 2019-10-29 2019-10-29 Method and device for identifying identity of pet

Country Status (1)

Country Link
CN (1) CN110909618B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444965B (en) * 2020-03-27 2024-03-12 泰康保险集团股份有限公司 Data processing method based on machine learning and related equipment
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
KR102497805B1 (en) * 2020-07-31 2023-02-10 주식회사 펫타버스 System and method for companion animal identification based on artificial intelligence
CN112926479A (en) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 Cat face identification method and system, electronic device and storage medium
CN113076886A (en) * 2021-04-09 2021-07-06 深圳市悦保科技有限公司 Face individual identification device and method for cat
CN113673422A (en) * 2021-08-19 2021-11-19 苏州中科先进技术研究院有限公司 Pet type identification method and identification system
CN115393904B (en) * 2022-10-20 2023-05-02 星宠王国(北京)科技有限公司 Dog nose line identification method and system
US11948390B1 (en) 2023-06-30 2024-04-02 Xingchong Kingdom (Beijing) Technology Co., Ltd Dog nose print recognition method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6975750B2 (en) * 2000-12-01 2005-12-13 Microsoft Corp. System and method for face recognition using synthesized training images
WO2010006367A1 (en) * 2008-07-16 2010-01-21 Imprezzeo Pty Ltd Facial image recognition and retrieval

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060027482A (en) * 2004-09-23 2006-03-28 전자부품연구원 Method for authenticating human face
CN101261678A (en) * 2008-03-18 2008-09-10 中山大学 A method for normalizing face light on feature image with different size
CN103914676A (en) * 2012-12-30 2014-07-09 杭州朗和科技有限公司 Method and apparatus for use in face recognition
CN103218610A (en) * 2013-04-28 2013-07-24 宁波江丰生物信息技术有限公司 Formation method of dogface detector and dogface detection method
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method identifying non-front-side facial expression based on attitude normalization
CN104504376A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images
RU2610682C1 (en) * 2016-01-27 2017-02-14 Общество с ограниченной ответственностью "СТИЛСОФТ" Face recognition method
CN106295567A (en) * 2016-08-10 2017-01-04 腾讯科技(深圳)有限公司 The localization method of a kind of key point and terminal
CN107609459A (en) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 A kind of face identification method and device based on deep learning
CN107545249A (en) * 2017-08-30 2018-01-05 国信优易数据有限公司 A kind of population ages' recognition methods and device
CN107563328A (en) * 2017-09-01 2018-01-09 广州智慧城市发展研究院 A kind of face identification method and system based under complex environment
CN109190477A (en) * 2018-08-02 2019-01-11 平安科技(深圳)有限公司 Settlement of insurance claim method, apparatus, computer equipment and storage medium based on the identification of ox face
CN109344693A (en) * 2018-08-13 2019-02-15 华南理工大学 A kind of face multizone fusion expression recognition method based on deep learning
CN109784219A (en) * 2018-12-28 2019-05-21 广州海昇计算机科技有限公司 A kind of face identification method, system and device based on concentration cooperated learning
CN109829380A (en) * 2018-12-28 2019-05-31 北京旷视科技有限公司 A kind of detection method, device, system and the storage medium of dog face characteristic point
CN109558864A (en) * 2019-01-16 2019-04-02 苏州科达科技股份有限公司 Face critical point detection method, apparatus and storage medium
CN109858435A (en) * 2019-01-29 2019-06-07 四川大学 A kind of lesser panda individual discrimination method based on face image
CN109919048A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 A method of face critical point detection is realized based on cascade MobileNet-V2
CN110309706A (en) * 2019-05-06 2019-10-08 深圳市华付信息技术有限公司 Face critical point detection method, apparatus, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
V-Head: Face Detection and Alignment for Facial Augmented Reality Applications; Zhiwei Wang et al.; 23rd International Conference on MultiMedia Modeling (MMM); 2018-01-08; pp. 450-454 *
Research on Low-Resolution Pedestrian and Face Detection and Recognition Based on Convolutional Neural Networks; Jiang Yadong; China Master's Theses Full-text Database, Information Science and Technology Series; 2019-04-15; vol. 2019, no. 04; p. 37 para. 3, p. 38 para. 2, p. 39 para. 4, p. 45 para. 2, p. 46 para. 4 *
Research on Driver Face Detection and Tracking Methods Based on Infrared Video; Wang Xuebin; China Master's Theses Full-text Database, Engineering Science and Technology II Series; 2017-02-15; vol. 2017, no. 02; pp. 27-39 *

Also Published As

Publication number Publication date
CN110909618A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN110909618B (en) Method and device for identifying identity of pet
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
WO2019100724A1 (en) Method and device for training multi-label classification model
KR100647322B1 (en) Apparatus and method of generating shape model of object and apparatus and method of automatically searching feature points of object employing the same
US10262214B1 (en) Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same
CN110363047B (en) Face recognition method and device, electronic equipment and storage medium
Shotton et al. Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation
Karlinsky et al. The chains model for detecting parts by their context
CN110674874B (en) Fine-grained image identification method based on target fine component detection
CN105740780B (en) Method and device for detecting living human face
JP2018506788A (en) How to re-identify objects
US10275667B1 (en) Learning method, learning device for detecting lane through lane model and testing method, testing device using the same
CN111652317B (en) Super-parameter image segmentation method based on Bayes deep learning
Wang et al. Head pose estimation with combined 2D SIFT and 3D HOG features
US20240087368A1 (en) Companion animal life management system and method therefor
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN108596195B (en) Scene recognition method based on sparse coding feature extraction
Demirkus et al. Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN110414541B (en) Method, apparatus, and computer-readable storage medium for identifying an object
CN111160169A (en) Face detection method, device, equipment and computer readable storage medium
CN113160276A (en) Target tracking method, target tracking device and computer readable storage medium
Palaniswamy et al. Automatic identification of landmarks in digital images
KR102325250B1 (en) companion animal identification system and method therefor
Chen et al. Image segmentation based on mathematical morphological operator

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant