CN114299546A - Method and device for identifying pet identity, storage medium and electronic equipment - Google Patents

Method and device for identifying pet identity, storage medium and electronic equipment

Info

Publication number
CN114299546A
Authority
CN
China
Prior art keywords
pet
quality
nose print image
quality score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111645814.7A
Other languages
Chinese (zh)
Inventor
彭永鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Ruipeng Pet Healthcare Group Co Ltd
Original Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Ruipeng Pet Healthcare Group Co Ltd filed Critical New Ruipeng Pet Healthcare Group Co Ltd
Priority to CN202111645814.7A
Publication of CN114299546A
Pending legal-status Critical Current

Abstract

The embodiments of the application disclose a method, a device, a storage medium and electronic equipment for identifying pet identity. In the embodiments, a face image of a pet to be identified is acquired, and a nose print image is obtained from the face image; a quality score of the nose print image is determined according to a preset quality evaluation model, the preset quality evaluation model being trained on sample nose print images and corresponding quality score labels; when the quality score is smaller than a preset threshold, prompt information indicating that the nose print has not been acquired is output; and when the quality score is greater than or equal to the preset threshold, the identity of the pet to be identified is recognized according to the nose print image. By adopting the scheme of the embodiments of the application, the accuracy of pet identity recognition is improved.

Description

Method and device for identifying pet identity, storage medium and electronic equipment
Technical Field
The application relates to the technical field of identity recognition, in particular to a method, a device, a storage medium and electronic equipment for recognizing pet identity.
Background
The nose print of a pet such as a cat or a dog is unique, much like a human fingerprint. Pet identity verification can therefore be realized by collecting the pet's nose print, and the accuracy of nose print collection determines the accuracy and efficiency of nose print recognition. Nose prints are conventionally collected by photographing; however, unlike humans, pets rarely cooperate when their nose prints are being collected, so the captured nose print image may make it difficult to accurately identify the pet.
Disclosure of Invention
The embodiment of the application provides a method and a device for identifying pet identity, a storage medium and electronic equipment, which can improve the accuracy of pet identity recognition.
In a first aspect, an embodiment of the present application provides a method for identifying an identity of a pet, including:
acquiring a face image of a pet to be identified, and acquiring a nose print image from the face image;
determining the quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is obtained by training according to a sample nose print image and a corresponding quality score label;
when the quality score is smaller than a preset threshold, outputting prompt information indicating that the nose print has not been acquired;
and when the quality score is greater than or equal to the preset threshold value, identifying the identity of the pet to be identified according to the nose print image.
In a second aspect, an embodiment of the present application further provides a device for identifying the identity of a pet, including:
an image acquisition module, configured to acquire a face image of a pet to be recognized and obtain a nose print image from the face image;
a quality evaluation module, configured to determine a quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is trained on sample nose print images and corresponding quality score labels;
an information prompting module, configured to output prompt information indicating that the nose print has not been acquired when the quality score is smaller than a preset threshold; and
an identity recognition module, configured to recognize the identity of the pet to be recognized according to the nose print image when the quality score is greater than or equal to the preset threshold.
In a third aspect, embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method for identifying the identity of a pet provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the method for identifying an identity of a pet provided in any embodiment of the present application by calling the computer program.
According to the technical scheme provided by the embodiments of the application, a face image of the pet to be recognized is obtained and a nose print image is obtained from the face image; the quality score of the nose print image is determined according to a preset quality evaluation model, the preset quality evaluation model being trained on sample nose print images and corresponding quality score labels; when the determined quality score is smaller than a preset threshold, prompt information indicating that the nose print has not been acquired is output; and when the quality score is greater than or equal to the preset threshold, the identity of the pet to be recognized is recognized according to the nose print image. With this scheme, the quality of the collected nose print image is evaluated: when the quality score does not meet the requirement, the user is instructed to capture the image again, and identification is performed only when the quality score meets the requirement, which improves the accuracy of pet identity recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for identifying pet identity according to an embodiment of the present application.
Fig. 2 is a schematic flowchart of a second method for identifying pet identity according to an embodiment of the present application.
Fig. 3 is a schematic view of an application scenario of the method for identifying pet identity according to the embodiment of the present application.
Fig. 4 is a schematic structural diagram of a device for identifying pet identity according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the application provides a method for identifying pet identity, an execution subject of the method for identifying pet identity can be the device for identifying pet identity provided by the embodiment of the application, or electronic equipment integrated with the device for identifying pet identity, wherein the device for identifying pet identity can be realized in a hardware or software mode. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a first flowchart illustrating a method for identifying pet identity according to an embodiment of the present application. The specific process of the method for identifying the pet identity provided by the embodiment of the application can be as follows:
101. acquiring a face image of a pet to be identified, and acquiring a nose print image from the face image;
In the scheme of the embodiment of the application, the pet to be identified is a pet whose identity is to be recognized from the collected face image, such as a pet cat, a pet dog or a pet pig.
For example, when a pet needs to be identified, a face image of the pet to be identified is acquired. For instance, when a pet hospital identifies a pet brought in for treatment, the pet to be treated is taken as the pet to be identified, and its face image is collected. Specifically, an image or a video of the pet to be identified is obtained through an image acquisition device, the image or video containing the face region of the pet. If video of the pet to be identified is collected, a frame containing the pet's face region is extracted from the video and used as the face image.
After the face image of the pet to be identified is acquired, a nose print image is cropped from the face image. For example, in one embodiment, obtaining the nose print image from the face image may include: inputting the face image into a pre-trained target detection model for detection, outputting position information of the pet's nose region, and cropping the nose print image of the pet to be identified from the face image according to the position information.
In this embodiment, sample images carrying annotation data for the nose region are used to train a pre-constructed convolutional neural network, yielding the target detection model. After the face image is acquired, it is input into the target detection model for detection, and the position information of the nose region is output, for example in the form of a rectangular bounding box; the nose print image of the pet to be identified is then cropped from the face image based on this position information.
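As a rough illustration of this cropping step, the sketch below assumes the detection model returns a single (x, y, w, h) rectangle for the nose region; the function name and the detector interface in the usage comment are illustrative, not part of the patent.

```python
import numpy as np

def crop_nose_print(face_image: np.ndarray, nose_box: tuple) -> np.ndarray:
    """Crop the nose print region from a face image, given a detected (x, y, w, h) box."""
    x, y, w, h = nose_box
    img_h, img_w = face_image.shape[:2]
    # Clamp the rectangular box to the image bounds before cropping.
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + w), min(img_h, y + h)
    return face_image[y0:y1, x0:x1]

# Hypothetical usage:
#   face = cv2.imread("pet_face.jpg")        # any HxWxC image array
#   box = nose_detector.detect(face)         # assumed detector returning one (x, y, w, h)
#   nose_print = crop_nose_print(face, box)
```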
102. And determining the quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is obtained by training according to the sample nose print image and a corresponding quality score label.
After the nose print image is obtained, it is input into the pre-trained quality evaluation model for calculation to obtain the quality score of the nose print image.
The preset quality evaluation model in this embodiment is obtained by machine learning training using sample nose print images. For example, a preset number of pet nose print images are prepared as sample nose print images; if the pet to be identified is a pet cat, a plurality of sample nose print images can be obtained by photographing pet cats of different breeds. A quality score label is then added to each sample nose print image; the label may be a score given by an expert who inspects the image in at least one quality evaluation dimension, and this score serves as the quality score label of the sample nose print image. The quality evaluation dimensions include at least one of the following: sharpness, completeness, exposure, occlusion, and the like.
After the quality score label of each sample nose print image has been added, the sample nose print images with their quality score labels are used as training data to train a pre-constructed convolutional neural network and determine its model parameters, yielding the preset quality evaluation model.
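A minimal training sketch for such a quality evaluation model is shown below. It assumes a PyTorch model that maps a nose print image tensor to a scalar score and a data loader yielding (image, quality score label) pairs; the optimizer, loss and hyperparameters are assumptions, not details given in the application.

```python
import torch
import torch.nn as nn

def train_quality_model(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    """Train on (nose print tensor, quality score label) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                 # regression against expert quality scores
    model.train()
    for _ in range(epochs):
        for images, scores in loader:      # images: [B, 3, H, W], scores: [B, 1]
            opt.zero_grad()
            pred = model(images)           # predicted quality scores, [B, 1]
            loss = loss_fn(pred, scores)
            loss.backward()
            opt.step()
    return model
```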
In one embodiment, the preset quality evaluation model includes a feature extraction network and a fully connected layer. Determining the quality score of the nose print image according to the preset quality evaluation model includes: performing feature extraction on the nose print image with the feature extraction network to obtain a nose print feature map; and inputting the nose print feature map into the fully connected layer for calculation to obtain the quality score of the nose print image.
In this embodiment, the preset quality evaluation model includes a feature extraction network and a fully connected layer. The feature extraction network is used to extract features and includes several convolution layers and pooling layers: the convolution layers perform convolution operations on the input image and output feature maps, and the pooling layers perform dimensionality reduction on the input data. The nose print image is input into the feature extraction network and processed layer by layer to obtain a nose print feature map, which is then input into the fully connected layer to compute the quality score of the nose print image.
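One possible shape of such a model, as a hedged sketch only: a small convolutional feature extractor (convolution plus pooling layers) followed by a fully connected layer that outputs a single quality score. The layer sizes and the 64-dimensional feature vector are illustrative assumptions, not values taken from the application.

```python
import torch
import torch.nn as nn

class QualityModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction network: convolution + pooling layers.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # pool to a fixed-size feature map
        )
        # Fully connected layer mapping the feature vector to a quality score.
        self.fc = nn.Linear(64, 1)

    def forward(self, x):                   # x: [B, 3, H, W] batch of nose print images
        feat = self.features(x).flatten(1)  # nose print feature map -> [B, 64] vector
        return self.fc(feat)                # [B, 1] quality scores
```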
For example, the quality score may be a quality grade. The nose print feature map is input into the fully connected layer for calculation to obtain a probability value for each preset quality grade, and the preset quality grade with the highest probability value is taken as the quality grade of the nose print image.
The dimension of the fully connected layer's output equals the number of quality grades. For example, suppose the quality of a nose print image is divided into five preset grades Q1, Q2, Q3, Q4 and Q5, where a larger number after Q indicates better image quality. The fully connected layer outputs, through calculation, the probability value of the nose print image at each quality grade, and the preset grade with the highest probability value is taken as the quality grade of the nose print image.
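For this quality-grade variant, a sketch of the classification head is given below. It assumes the 64-dimensional nose print feature vector from the sketch above and treats Q1–Q5 as five output classes; this is one plausible reading of the paragraph, not the application's stated implementation.

```python
import torch
import torch.nn as nn

GRADES = ["Q1", "Q2", "Q3", "Q4", "Q5"]    # Q5 = best quality, per the example above

grade_head = nn.Linear(64, len(GRADES))     # replaces the single-score fc layer

def predict_grade(feature_vector: torch.Tensor) -> str:
    """feature_vector: a [64] nose print feature vector from the extractor."""
    probs = torch.softmax(grade_head(feature_vector), dim=-1)  # probability per grade
    return GRADES[int(probs.argmax(dim=-1))]                   # grade with highest probability
```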
As another example, in one embodiment there are multiple fully connected layers, each corresponding to one quality evaluation dimension. Inputting the nose print feature map into the fully connected layers to obtain the quality score of the nose print image then includes: inputting the nose print feature map into the fully connected layer corresponding to each quality evaluation dimension for calculation to obtain a dimension score for each quality evaluation dimension; and determining the quality score of the nose print image from the plurality of dimension scores of the plurality of quality evaluation dimensions.
In this embodiment, to improve the accuracy of image quality evaluation, the preset quality evaluation model is provided with multiple fully connected layers, each corresponding to one quality evaluation dimension. For example, with four quality evaluation dimensions (sharpness, completeness, exposure and occlusion), four fully connected layers are provided. After the nose print feature map is computed, it is input into each fully connected layer to obtain the dimension score for the corresponding quality evaluation dimension, giving four dimension scores in this example. The quality score of the nose print image is then determined from these dimension scores. For example, a weight is obtained for each quality evaluation dimension, and the quality score of the nose print image is computed from the dimension scores and their corresponding weights: the weighted sum of the dimension scores is used as the quality score. The weight of each quality evaluation dimension can be preset as needed; the greater a dimension's influence on nose print recognition accuracy, the higher its weight. As another example, the lowest of the dimension scores across the quality evaluation dimensions is taken as the quality score of the nose print image; in this way, the nose print image finally used for pet identification has high quality in every dimension. As yet another example, the average of the dimension scores is used as the quality score of the nose print image.
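The multi-head variant and the three aggregation rules described above might look roughly like the following, again assuming a 64-dimensional feature vector; the dimension names follow the example above, while the weights and head sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

DIMS = ["sharpness", "completeness", "exposure", "occlusion"]
heads = nn.ModuleDict({d: nn.Linear(64, 1) for d in DIMS})   # one fc head per dimension
weights = {"sharpness": 0.4, "completeness": 0.3, "exposure": 0.2, "occlusion": 0.1}

def quality_score(feature_vector: torch.Tensor, mode: str = "weighted") -> float:
    """Aggregate per-dimension scores into one quality score for the nose print image."""
    dim_scores = {d: heads[d](feature_vector).item() for d in DIMS}
    if mode == "weighted":
        # Weighted sum: dimensions that matter more for recognition get larger weights.
        return sum(weights[d] * dim_scores[d] for d in DIMS)
    if mode == "min":
        # Minimum: the accepted image must be good in every dimension.
        return min(dim_scores.values())
    # Mean of the dimension scores.
    return sum(dim_scores.values()) / len(DIMS)
```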
103. When the quality score is smaller than the preset threshold, outputting prompt information indicating that the nose print has not been acquired.
104. When the quality score is greater than or equal to the preset threshold, identifying the identity of the pet to be identified according to the nose print image.
After the quality score is obtained, it is compared with the preset threshold. If the quality score of the nose print image is smaller than the preset threshold, the current nose print image, if fed to the identity recognition algorithm, would make it difficult to accurately determine the identity of the pet to be recognized. Prompt information indicating that the nose print has not been acquired can therefore be output to prompt the user to capture the nose print image again.
Conversely, if the quality score of the nose print image is greater than or equal to the preset threshold, the currently captured nose print image is of sufficient quality to be used for identity recognition, and the identity of the pet to be recognized is verified based on the nose print image.
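The acceptance logic of steps 103 and 104 reduces to a simple threshold check. The sketch below uses an illustrative threshold value and a caller-supplied identification function, since the application fixes neither.

```python
def accept_or_retry(nose_print_image, quality_score: float, identify_fn, threshold: float = 0.6):
    """threshold is illustrative; the application only speaks of a 'preset threshold'."""
    if quality_score < threshold:
        # Step 103: prompt that no nose print was acquired and ask for a re-capture.
        return {"status": "retry", "message": "Nose print not acquired, please capture again"}
    # Step 104: quality is sufficient; run the caller-supplied identification routine.
    return {"status": "ok", "identity": identify_fn(nose_print_image)}
```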
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, in the method for identifying pet identity provided by the embodiment of the application, a face image of the pet to be identified is obtained and a nose print image is obtained from it; the quality score of the nose print image is determined according to a preset quality evaluation model trained on sample nose print images and corresponding quality score labels; prompt information indicating that the nose print has not been acquired is output when the determined quality score is smaller than the preset threshold; and the identity of the pet to be identified is recognized according to the nose print image when the quality score is greater than or equal to the preset threshold. With this scheme, the quality of the collected nose print image is evaluated: when the quality score does not meet the requirement, the user is instructed to capture the image again, and identification is performed only when the quality score meets the requirement, which improves the accuracy of pet identity recognition.
The method according to the preceding embodiment is illustrated in further detail below by way of example.
Referring to fig. 2, fig. 2 is a second flow chart of the method for identifying pet identity according to the embodiment of the invention. The method comprises the following steps:
201. Acquiring a face image of the pet to be identified, and obtaining a nose print image from the face image.
Referring to fig. 3, fig. 3 is a schematic view of an application scenario of the method for identifying pet identity according to the embodiment of the present application. When a pet to be identified arrives at a pet hospital, identity recognition is carried out at the registration desk, and a face image of the pet to be identified is acquired through an image acquisition device. After the face image of the pet to be recognized is obtained, the nose print image is cropped from the face image: for example, the face image is input into a pre-trained target detection model for detection, the position information of the pet's nose region is output, and the nose print image of the pet to be recognized is cropped from the face image according to the position information.
202. Inputting the nose print image into the preset quality evaluation model, and calculating a dimension score corresponding to each quality evaluation dimension.
After the nose print image is obtained, it is input into the pre-trained quality evaluation model for calculation to obtain its quality score. In this embodiment, to improve the accuracy of image quality evaluation, the preset quality evaluation model is provided with multiple fully connected layers, each corresponding to one quality evaluation dimension. For example, with four quality evaluation dimensions (sharpness, completeness, exposure and occlusion), four fully connected layers are provided. After the nose print feature map is computed, it is input into each fully connected layer to obtain the dimension score of the corresponding quality evaluation dimension, giving four dimension scores. The quality score of the nose print image is then determined from these dimension scores.
203. Acquiring the weight corresponding to each quality evaluation dimension.
204. Calculating the quality score of the nose print image according to the weight corresponding to each quality evaluation dimension and the dimension scores.
Specifically, after the dimension score for each quality evaluation dimension is obtained, the weight of each quality evaluation dimension is acquired, and a weighted calculation is performed over the dimension scores and their corresponding weights; the resulting weighted score is used as the quality score of the nose print image. The weight of each quality evaluation dimension can be preset as needed: the greater a dimension's influence on nose print recognition accuracy, the higher its weight.
205. When the quality score is smaller than the preset threshold, outputting prompt information indicating that the nose print has not been acquired.
After the quality score is obtained, it is compared with the preset threshold. If the quality score of the nose print image is smaller than the preset threshold, the current nose print image, if fed to the identity recognition algorithm, would make it difficult to accurately determine the identity of the pet to be recognized. Prompt information indicating that the nose print has not been acquired can therefore be output to prompt the user to capture the nose print image again.
206. When the quality score is greater than or equal to the preset threshold, extracting the nose print feature vector of the pet to be identified from the nose print image.
Conversely, if the quality score of the nose print image is greater than or equal to the preset threshold, the currently captured nose print image is of sufficient quality to be used for identity recognition, and the identity of the pet to be recognized is verified based on the nose print image.
For example, the nose print feature vector of the pet to be identified is extracted from the nose print image by a pre-trained feature extraction network.
The feature extraction network comprises a plurality of feature extraction modules connected in sequence and a fully connected layer; each feature extraction module includes at least one convolution layer and at least one pooling layer. The input of the first feature extraction module is the nose print image, the feature map output by each feature extraction module is the input of the next, and the feature map output by the last feature extraction module is input into the fully connected layer for dimension reduction and flattened into a 1 × N vector, i.e. the nose print feature vector.
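A minimal sketch of such an encoder is given below, with three feature extraction modules (each a convolution layer plus a pooling layer) and a fully connected layer producing the 1 × N feature vector; the module count, channel sizes and N = 128 are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NosePrintEncoder(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        def block(c_in: int, c_out: int) -> nn.Sequential:
            # One feature extraction module: convolution layer + pooling layer.
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.ReLU(), nn.MaxPool2d(2))
        self.blocks = nn.Sequential(block(3, 32), block(32, 64), block(64, 128),
                                    nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, embedding_dim)   # flattens the last feature map

    def forward(self, x):                         # x: [B, 3, H, W] nose print images
        feat = self.blocks(x).flatten(1)          # output of the last feature extraction module
        return self.fc(feat)                      # [B, N] nose print feature vectors
```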
207. Respectively calculating Euclidean distances between the nose print feature vector and the reference feature vector of each pet stored in the database.
Taking a pet hospital as an example, for each registered pet the extracted nose print feature data is used as the reference feature vector corresponding to that pet and is stored in the database.
For the pet to be identified, after its nose print feature vector is obtained, the Euclidean distance between this vector and the reference feature vector of each pet stored in the database is calculated.
208. When a calculated Euclidean distance is smaller than a preset threshold, determining the pet corresponding to the reference feature vector whose Euclidean distance to the nose print feature vector is smaller than the preset threshold as the target pet, and judging that the pet to be identified and the target pet are the same pet.
That is, when a calculated Euclidean distance is smaller than the preset threshold, the reference feature vector whose Euclidean distance to the nose print feature vector of the pet to be identified is smaller than the preset threshold is found, and the pet corresponding to that reference feature vector is judged to be the same pet as the current pet to be identified.
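Steps 207 and 208 amount to nearest-neighbour matching under a distance threshold. The following sketch assumes a dict mapping pet id to reference feature vector and an illustrative threshold value; neither detail is fixed by the application.

```python
import numpy as np

def match_pet(query_vec: np.ndarray, database: dict, threshold: float = 0.8):
    """database maps pet_id -> reference feature vector (np.ndarray of the same length)."""
    best_id, best_dist = None, float("inf")
    for pet_id, ref_vec in database.items():
        dist = float(np.linalg.norm(query_vec - ref_vec))   # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = pet_id, dist
    if best_dist < threshold:
        return best_id      # target pet: judged to be the same pet as the one to be identified
    return None             # no registered pet matched; identity unknown
```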
Thus, the method for identifying pet identity provided by this embodiment evaluates the quality of the collected nose print image; when the quality score does not meet the requirement, the user is instructed to capture the nose print image again, and when the quality score meets the requirement, the nose print feature vector is extracted from the nose print image and the identity of the pet to be identified is recognized by computing the Euclidean distances between the nose print feature vector and the reference feature vector of each pet stored in the database. This avoids extracting nose print feature vectors and performing identification from a poor-quality nose print image, and improves the accuracy of pet identity recognition.
In one embodiment, a device for identifying the identity of a pet is also provided. Referring to fig. 4, fig. 4 is a schematic structural diagram of a device 300 for identifying pet identities according to an embodiment of the present application. The device 300 for identifying pet identity is applied to electronic equipment, and the device 300 for identifying pet identity includes an image acquisition module 301, a quality evaluation module 302, an information prompt module 303 and an identity identification module 304, as follows:
the image acquisition module 301 is configured to acquire a face image of a pet to be identified, and acquire a nose print image from the face image;
a quality evaluation module 302, configured to determine a quality score of the nose print image according to a preset quality evaluation model, where the preset quality evaluation model is obtained by training according to a sample nose print image and a corresponding quality score label;
the information prompting module 303 is configured to output prompting information that the nose print is not acquired when the quality score is smaller than a preset threshold;
and the identity recognition module 304 is configured to recognize the identity of the pet to be recognized according to the nose print image when the quality score is greater than or equal to the preset threshold.
In some embodiments, the preset quality evaluation model comprises a feature extraction network and a fully connected layer; the quality evaluation module 302 is configured to perform feature extraction processing on the nose print image according to the feature extraction network to obtain a nose print feature map, and to input the nose print feature map into the fully connected layer for calculation to obtain the quality score of the nose print image.
In some embodiments, the quality score is a quality grade, and the fully connected layer corresponds to a plurality of preset quality grades; the quality evaluation module 302 is configured to input the nose print feature map into the fully connected layer for calculation to obtain a probability value corresponding to each preset quality grade, and to take the preset quality grade with the highest probability value as the quality grade of the nose print image.
In some embodiments, there are multiple fully connected layers, each corresponding to one quality evaluation dimension; the quality evaluation module 302 is configured to input the nose print feature map into the fully connected layer corresponding to each quality evaluation dimension for calculation to obtain a dimension score corresponding to each quality evaluation dimension, and to determine the quality score of the nose print image according to the plurality of dimension scores of the plurality of quality evaluation dimensions.
In some embodiments, the quality evaluation module 302 is configured to obtain a weight corresponding to each quality evaluation dimension; and calculating the quality score of the nose print image according to the weight corresponding to each quality evaluation dimension and the dimension score.
In some embodiments, the quality evaluation module 302 is configured to determine a lowest dimension score of a plurality of dimension scores of the plurality of quality evaluation dimensions as the quality score of the nose print image.
In some embodiments, the identity recognition module 304 is configured to extract a nose print feature vector of the pet to be recognized from the nose print image when the quality score is greater than or equal to the preset threshold;
respectively calculate the Euclidean distances between the nose print feature vector and the reference feature vector of each pet stored in the database; and
when a calculated Euclidean distance is smaller than a preset threshold, determine the pet corresponding to the reference feature vector whose Euclidean distance to the nose print feature vector is smaller than the preset threshold as the target pet, and judge that the pet to be recognized and the target pet are the same pet.
It should be noted that the device for identifying a pet identity provided in the embodiment of the present application and the method for identifying a pet identity in the above embodiments belong to the same concept, and any method provided in the method for identifying a pet identity may be implemented by the device for identifying a pet identity.
Thus, the device for identifying pet identity provided by the embodiment of the application obtains a face image of the pet to be identified and a nose print image from it, determines the quality score of the nose print image according to a preset quality evaluation model trained on sample nose print images and corresponding quality score labels, outputs prompt information indicating that the nose print has not been acquired when the determined quality score is smaller than the preset threshold, and recognizes the identity of the pet to be identified according to the nose print image when the quality score is greater than or equal to the preset threshold. With this scheme, the quality of the collected nose print image is evaluated: when the quality score does not meet the requirement, the user is instructed to capture the image again, and identification is performed only when the quality score meets the requirement, which improves the accuracy of pet identity recognition.
The embodiment of the application also provides the electronic equipment. The electronic device can be a smart phone, a tablet computer and the like. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 comprises a processor 401 and a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the electronic device 400, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or calling a computer program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
Memory 402 may be used to store computer programs and data. The memory 402 stores computer programs containing instructions executable in the processor. The computer program may constitute various functional modules. The processor 401 executes various functional applications and data processing by calling a computer program stored in the memory 402.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions:
acquiring a face image of a pet to be identified, and acquiring a nose print image from the face image;
determining the quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is obtained by training according to a sample nose print image and a corresponding quality score label;
when the quality score is smaller than a preset threshold, outputting prompt information indicating that the nose print has not been acquired;
and when the quality score is greater than or equal to the preset threshold value, identifying the identity of the pet to be identified according to the nose print image.
In some embodiments, please refer to fig. 6, and fig. 6 is a second structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 further comprises: radio frequency circuit 403, display 404, control circuit 405, input unit 406, audio circuit 407, sensor 408, and power supply 409. The processor 401 is electrically connected to the radio frequency circuit 403, the display 404, the control circuit 405, the input unit 406, the audio circuit 407, the sensor 408, and the power source 409.
The radio frequency circuit 403 is used for transceiving radio frequency signals to communicate with a network device or other electronic devices through wireless communication.
The display screen 404 may be used to display information entered by or provided to the user as well as various graphical user interfaces of the electronic device, which may be comprised of images, text, icons, video, and any combination thereof.
The control circuit 405 is electrically connected to the display screen 404, and is configured to control the display screen 404 to display information.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 406 may include a fingerprint recognition module.
The audio circuit 407 may provide an audio interface between the user and the electronic device through a speaker, microphone. Wherein the audio circuit 407 comprises a microphone. The microphone is electrically connected to the processor 401. The microphone is used for receiving voice information input by a user.
The sensor 408 is used to collect external environmental information. The sensors 408 may include one or more of ambient light sensors, acceleration sensors, gyroscopes, etc.
The power supply 409 is used to power the various components of the electronic device 400. In some embodiments, the power source 409 may be logically connected to the processor 401 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
Although not shown in the drawings, the electronic device 400 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
In this embodiment, the processor 401 in the electronic device 400 loads instructions corresponding to one or more processes of the computer program into the memory 402 according to the following steps, and the processor 401 runs the computer program stored in the memory 402, so as to implement various functions:
acquiring a face image of a pet to be identified, and acquiring a nose print image from the face image;
determining the quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is obtained by training according to a sample nose print image and a corresponding quality score label;
when the quality score is smaller than a preset threshold, outputting prompt information indicating that the nose print has not been acquired;
and when the quality score is greater than or equal to the preset threshold value, identifying the identity of the pet to be identified according to the nose print image.
Thus, the electronic device acquires a face image of the pet to be identified, obtains a nose print image from the face image, determines the quality score of the nose print image according to a preset quality evaluation model trained on sample nose print images and corresponding quality score labels, outputs prompt information indicating that the nose print has not been acquired when the determined quality score is smaller than the preset threshold, and recognizes the identity of the pet to be identified according to the nose print image when the quality score is greater than or equal to the preset threshold. With this scheme, the quality of the collected nose print image is evaluated: when the quality score does not meet the requirement, the user is instructed to capture the image again, and identification is performed only when the quality score meets the requirement, which improves the accuracy of pet identity recognition.
The embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer executes the method for identifying the identity of a pet according to any of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Furthermore, the terms "first", "second", and "third", etc. in this application are used to distinguish different objects, and are not used to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
The method, the apparatus, the storage medium, and the electronic device for identifying pet identity provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method for identifying an identity of a pet, comprising:
acquiring a face image of a pet to be identified, and acquiring a nose print image from the face image;
determining the quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is obtained by training according to a sample nose print image and a corresponding quality score label;
when the quality score is smaller than a preset threshold, outputting prompt information indicating that the nose print has not been acquired;
and when the quality score is greater than or equal to the preset threshold value, identifying the identity of the pet to be identified according to the nose print image.
2. The method of claim 1, wherein the preset quality evaluation model comprises a feature extraction network and a fully connected layer; and the determining the quality score of the nose print image according to a preset quality evaluation model comprises:
performing feature extraction processing on the nose print image according to the feature extraction network to obtain a nose print feature map; and
inputting the nose print feature map into the fully connected layer for calculation to obtain the quality score of the nose print image.
3. The method of claim 2, wherein the quality score is a quality grade, and the fully connected layer corresponds to a plurality of preset quality grades; and the inputting the nose print feature map into the fully connected layer for calculation to obtain the quality score of the nose print image comprises:
inputting the nose print feature map into the fully connected layer for calculation to obtain a probability value corresponding to each preset quality grade; and
taking the preset quality grade with the highest probability value as the quality grade of the nose print image.
4. The method of claim 2, wherein there are a plurality of fully connected layers, each fully connected layer corresponding to one quality evaluation dimension; and the inputting the nose print feature map into the fully connected layer for calculation to obtain the quality score of the nose print image comprises:
inputting the nose print feature map into the fully connected layer corresponding to each quality evaluation dimension for calculation to obtain a dimension score corresponding to each quality evaluation dimension; and
determining the quality score of the nose print image according to a plurality of dimension scores of the plurality of quality evaluation dimensions.
5. The method of claim 4, wherein determining the quality score for the nose print image from a plurality of dimensional scores for the plurality of quality assessment dimensions comprises:
acquiring the weight corresponding to each quality evaluation dimension;
and calculating the quality score of the nose print image according to the weight corresponding to each quality evaluation dimension and the dimension score.
6. The method of claim 4, wherein determining the quality score for the nose print image from a plurality of dimensional scores for the plurality of quality assessment dimensions comprises:
determining a lowest dimension score of a plurality of dimension scores of the plurality of quality assessment dimensions as a quality score of the nose print image.
7. The method of any one of claims 1 to 6, wherein when the quality score is greater than or equal to the preset threshold, identifying the identity of the pet to be identified according to the nose print image comprises:
when the quality score is greater than or equal to the preset threshold, extracting a nose print feature vector of the pet to be identified from the nose print image;
respectively calculating Euclidean distances between the nose print feature vector and the reference feature vector of each pet stored in a database; and
when a calculated Euclidean distance is smaller than a preset threshold, determining the pet corresponding to the reference feature vector whose Euclidean distance to the nose print feature vector is smaller than the preset threshold as a target pet, and judging that the pet to be identified and the target pet are the same pet.
8. An apparatus for identifying an identity of a pet, comprising:
an image acquisition module, configured to acquire a face image of a pet to be recognized and acquire a nose print image from the face image;
a quality evaluation module, configured to determine a quality score of the nose print image according to a preset quality evaluation model, wherein the preset quality evaluation model is trained on sample nose print images and corresponding quality score labels;
an information prompting module, configured to output prompt information indicating that the nose print has not been acquired when the quality score is smaller than a preset threshold; and
an identity recognition module, configured to recognize the identity of the pet to be recognized according to the nose print image when the quality score is greater than or equal to the preset threshold.
9. A computer-readable storage medium, on which a computer program is stored, which, when run on a computer, causes the computer to carry out a method of identifying the identity of a pet according to any one of claims 1 to 7.
10. An electronic device comprising a processor and a memory, said memory storing a computer program, wherein said processor is adapted to perform the method of identifying the identity of a pet of any one of claims 1 to 7 by invoking said computer program.
CN202111645814.7A 2021-12-30 2021-12-30 Method and device for identifying pet identity, storage medium and electronic equipment Pending CN114299546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111645814.7A CN114299546A (en) 2021-12-30 2021-12-30 Method and device for identifying pet identity, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111645814.7A CN114299546A (en) 2021-12-30 2021-12-30 Method and device for identifying pet identity, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114299546A (en) 2022-04-08

Family

ID=80971797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111645814.7A Pending CN114299546A (en) 2021-12-30 2021-12-30 Method and device for identifying pet identity, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114299546A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115299366A (en) * 2022-06-21 2022-11-08 新瑞鹏宠物医疗集团有限公司 Intelligent feeding method and device, electronic equipment and storage medium
CN115299366B (en) * 2022-06-21 2024-02-13 新瑞鹏宠物医疗集团有限公司 Smart feeding method, smart feeding device, electronic equipment and storage medium
CN115240230A (en) * 2022-09-19 2022-10-25 星宠王国(北京)科技有限公司 Canine face detection model training method and device, and detection method and device

Similar Documents

Publication Publication Date Title
CN107633204B (en) Face occlusion detection method, apparatus and storage medium
CN107679447A (en) Facial characteristics point detecting method, device and storage medium
CN110163082A (en) A kind of image recognition network model training method, image-recognizing method and device
CN112348117A (en) Scene recognition method and device, computer equipment and storage medium
CN108205684B (en) Image disambiguation method, device, storage medium and electronic equipment
CN107679448A (en) Eyeball action-analysing method, device and storage medium
CN111414946B (en) Artificial intelligence-based medical image noise data identification method and related device
CN114299546A (en) Method and device for identifying pet identity, storage medium and electronic equipment
CN112052186A (en) Target detection method, device, equipment and storage medium
CN107633205A (en) lip motion analysis method, device and storage medium
CN111694954B (en) Image classification method and device and electronic equipment
CN111340213B (en) Neural network training method, electronic device, and storage medium
CN108614987A (en) The method, apparatus and robot of data processing
CN111126515B (en) Model training method based on artificial intelligence and related device
CN113190646A (en) User name sample labeling method and device, electronic equipment and storage medium
CN114495241A (en) Image identification method and device, electronic equipment and storage medium
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN114255321A (en) Method and device for collecting pet nose print, storage medium and electronic equipment
CN111950507A (en) Data processing and model training method, device, equipment and medium
CN111797849A (en) User activity identification method and device, storage medium and electronic equipment
CN116259083A (en) Image quality recognition model determining method and related device
CN111797656B (en) Face key point detection method and device, storage medium and electronic equipment
CN112115751A (en) Training method and device for animal mood recognition model
CN111783519A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111797878A (en) Data processing method, data processing device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination