WO2022088626A1 - Cat nose print recognition method and device based on a cat nose print feature extraction model


Info

Publication number
WO2022088626A1
Authority
WO
WIPO (PCT)
Prior art keywords
nose print
nose
cat
image
print
Application number
PCT/CN2021/089559
Other languages
English (en)
French (fr)
Inventor
徐强
李凌
宋凯旋
喻辉
陈宇桥
Original Assignee
苏州中科先进技术研究院有限公司
苏州中科华影健康科技有限公司
Application filed by 苏州中科先进技术研究院有限公司 and 苏州中科华影健康科技有限公司
Publication of WO2022088626A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Definitions

  • the present invention relates to the technical field of image recognition, and in particular, to a method and device for identifying cat nose prints based on a cat nose print feature extraction model.
  • biometric recognition based on deep learning and image processing is widely used in identity authentication, and biometrics-based identity authentication has become one of the most active frontier fields in China and abroad.
  • identity authentication based on human biometrics is the most widely used, including fingerprint recognition, iris recognition, face recognition and voiceprint recognition, all of which have become part of daily life.
  • animal biometrics can also be used for identity verification.
  • the nose prints of animals have been shown to be usable for identification.
  • nose print recognition has developed slowly and has relatively few applications. As early as 1982, Japanese police used paper rubbings of nose prints to identify stolen cattle and arrest the thieves. Since then, a series of studies on the nose patterns of cattle has been carried out in China and abroad, including identifying cattle breeds by nose pattern, and identifying dairy cattle and the quality of their milk as well as beef cattle and the quality of their beef. As nose print recognition has matured in animal husbandry, attention has turned to small and medium-sized carnivores, including foxes, raccoons and pet cats; among these, pet cats are among humans' closest companions and receive the most attention.
  • Embodiments of the present invention provide a cat nose print recognition method and device based on a cat nose print feature extraction model, so as to at least solve the technical problem of low recognition accuracy in traditional recognition techniques.
  • a cat nose print recognition method based on a cat nose print feature extraction model comprising the following steps:
  • the step of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result includes:
  • the step of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result also includes:
  • the third to-be-recognized nose print image and a group of nose print images are respectively input into the trained nose print feature extraction model for the feature extraction operation, obtaining a third nose print feature vector and N nose print feature vectors, where N is a positive integer greater than 0;
  • the step of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result also includes:
  • the step of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result also includes:
  • output the information that the database does not contain the same cat as in the sixth to-be-recognized nose print image
  • the method also includes:
  • the layers from the input layer to the feature output layer in the nose print classification model are intercepted as the nose print feature extraction model.
  • each nose print image is preprocessed to obtain a processed nose print training image and a nose print test image;
  • the nose print training image and the nose print test image are stored in the database corresponding to the ID identification to obtain the nose print data set.
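The interception step can be illustrated with a toy network in which a classification model is split into feature layers plus a softmax head, and the feature extractor simply omits the head; the tiny architecture and fixed weights below are hypothetical stand-ins for the trained nose print classification model.

```python
import math

def matvec(w, x):
    # Multiply a weight matrix (list of rows) by a vector.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(v):
    return [max(0.0, a) for a in v]

def softmax(v):
    m = max(v)
    e = [math.exp(a - m) for a in v]
    s = sum(e)
    return [a / s for a in e]

# Hypothetical trained weights of the nose print classification model.
W1 = [[0.2, -0.1, 0.4], [0.7, 0.3, -0.5]]       # input (3) -> hidden (2)
W2 = [[0.6, -0.2], [0.1, 0.9], [-0.3, 0.4]]     # hidden (2) -> feature (3)
W_head = [[0.5, 0.1, -0.4], [-0.2, 0.3, 0.8]]   # feature (3) -> 2 classes

def feature_extract(x):
    # "Interception": keep only the layers from the input layer up to the
    # feature output, dropping the classification head.
    return relu(matvec(W2, relu(matvec(W1, x))))

def classify(x):
    # The full classification model = feature layers + softmax head.
    return softmax(matvec(W_head, feature_extract(x)))

features = feature_extract([1.0, 0.5, -0.2])  # a nose print feature vector
```

In a real framework the same idea is usually expressed by reusing the trained network's layers up to the penultimate output rather than re-implementing them.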
  • a cat nose print recognition device based on a cat nose print feature extraction model comprising:
  • the request receiving module is used to receive the cat nose print recognition request, which carries at least the image data to be recognized;
  • a recognition mode selection module used for selecting a nose print image recognition mode that matches the image data to be recognized
  • the recognition result output module is used to recognize the image data to be recognized according to the nose print image recognition mode, and output the nose print recognition result.
  • the device also includes:
  • the network building module is used to build a basic nose print deep learning network;
  • the data set acquisition module is used to label the cat images in the database to obtain a segmentation data set for training
  • the network training module is used to input the nose print data set into the basic nose print deep learning network for iterative training operation to obtain a trained nose print classification model
  • the model interception module is used to intercept the layers from the input layer to the output feature in the noseprint classification model as a noseprint feature extraction model.
  • the device also includes:
  • the nose print image acquisition module is used to collect nose print images of different cats in group A, where A is a positive integer greater than 0;
  • the identification setting module is used to set an ID identification for each cat
  • the data set saving module is used to save the nose print training image and the nose print test image corresponding to the ID identification in the database to obtain the nose print dataset.
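The ID-indexed storage these modules describe can be sketched as a minimal in-memory registry; the class name, structure, and sample vectors below are hypothetical stand-ins for a real database.

```python
# Hypothetical sketch of the ID-indexed nose print database: each cat
# receives an ID, and its (pre-extracted) nose print feature vectors are
# stored under that ID so they can be indexed during recognition.
class NosePrintDatabase:
    def __init__(self):
        self._records = {}  # ID -> list of nose print feature vectors

    def register(self, cat_id, feature_vectors):
        """Save a cat's nose print feature vectors under its ID."""
        self._records.setdefault(cat_id, []).extend(feature_vectors)

    def lookup(self, cat_id):
        """Index the database by ID; returns [] for an unknown ID."""
        return self._records.get(cat_id, [])

db = NosePrintDatabase()
db.register("cat_001", [[0.1, 0.9, 0.3], [0.2, 0.8, 0.4]])  # two stored images
stored = db.lookup("cat_001")
```

During recognition, `lookup` plays the role of the index performed in the database according to the acquired ID identifier.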
  • the method and device for cat nose print recognition based on a cat nose print feature extraction model in the embodiments of the present invention obtain the image data to be recognized from the cat nose print recognition request; select the nose print image recognition mode that matches the image data, so as to suit the specific nose print recognition scene; then identify the image data to be recognized according to the nose print image recognition mode and output the nose print recognition result, ensuring the recognition accuracy of the cat nose print in the specific nose print recognition scene. The cat nose print recognition method and device based on the cat nose print feature extraction model of the present invention can thus improve the accuracy of cat nose print recognition.
  • Fig. 1 is a schematic diagram of a scene of the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
  • Fig. 2 is a flow chart of the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
  • Fig. 3 is a flow chart of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
  • Fig. 5 is another flow chart of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
  • Fig. 6 is another flow chart of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
  • Fig. 9 is a block diagram of a cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 10 is a module diagram of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 11 is another module diagram of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 12 is another module diagram of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 13 is another module diagram of identifying the image data to be recognized according to the nose print image recognition mode in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 14 is a module diagram of obtaining the nose print feature extraction model in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
  • Fig. 15 is a module diagram of obtaining the nose print data set in the cat nose print recognition device based on the cat nose print feature extraction model of the present invention.
  • a cat nose print recognition method based on a cat nose print feature extraction model comprising the following steps:
  • S1 Receive a cat nose print recognition request, which carries at least the image data to be recognized.
  • because the epidermis, dermis and subcutaneous tissue all grow together during the development of the cat's nose skin, and the soft subcutaneous tissue grows relatively faster than the hard epidermis, continuous pressure is exerted on the epidermis from below. This pressure forces the slower-growing epidermis to shrink and collapse toward the inner tissue, gradually becoming curved and wrinkled to relieve the pressure exerted on it by the subcutaneous tissue: pushed upward on one side and forced back on the other, the skin grows tortuous and uneven, forming lines.
  • this bending and wrinkling fluctuates with changes in the pressure generated by the inner layer, forming uneven ridges or folds, until development stops and the nose print takes its final shape, which then remains unchanged for life.
  • the formation of cat nose prints is exactly the same as the formation of human fingerprints.
  • the nose is one of a cat's most distinctive parts; cats pay special attention to keeping it safe because it is essential to their survival, and they respond defensively the moment they smell danger. Moreover, the passage of time does not affect the shape of the nose print texture.
  • the present embodiment performs nose print recognition on the cat nose print image based on the cat nose print feature extraction model to ensure the accuracy of cat nose print recognition, thereby ensuring the accuracy of cat identity authentication.
  • the cat nose print recognition request is an operation request input by the user according to the actual specific nose print recognition scene for which the recognition operation needs to be performed; the cat nose print recognition request carries at least the image data to be recognized, i.e. the actual image data on which recognition is to be performed.
  • receiving a cat nose print recognition request input by a user from a client, carrying at least the image data to be recognized, enables subsequent analysis and recognition of that image data, so that the cat nose print in it can be accurately identified and the cat's identity in the specific nose print recognition scene accurately obtained.
  • S2 Select a nose print image recognition mode matching the image data to be recognized.
  • the nose print image recognition mode is one of several recognition methods, determined through simulation and experiment, that suit cat nose print images in actual specific nose print recognition scenes.
  • the nose print image recognition mode obtained by this index can then be used to recognize and analyze the image data to be recognized, accurately identifying the cat nose print in it and thus the cat's identity in the specific nose print recognition scene.
  • S3 Recognize the image data to be recognized according to the nose print image recognition mode, and output the nose print recognition result.
  • the nose print recognition result indicates whether the nose prints in the image data to be recognized are consistent or inconsistent, i.e. whether they belong to the same cat.
  • the image data to be recognized is recognized according to the nose print image recognition mode. Specifically, a deep learning model constructed and trained for cat nose print feature extraction can be used together with a feature matching algorithm to perform nose print recognition on the image data to be recognized and output whether the cat nose prints are consistent; this ensures the accuracy of the acquired nose print features and thus the accuracy of cat nose print recognition.
  • in this way, the image data to be recognized in the cat nose print recognition request is obtained; the nose print image recognition mode matching that image data is then selected to suit the specific nose print recognition scene; finally, the image data is recognized according to that mode and the nose print recognition result is output, ensuring recognition accuracy in the specific nose print recognition scene. The cat nose print recognition method based on the cat nose print feature extraction model of the present invention can therefore improve the accuracy of cat nose print recognition; the present invention also has low computational complexity, simplicity, practicality and low cost.
  • the nose print recognition technology adopted in this embodiment not only has low cost, requires no additional equipment and achieves high recognition accuracy, but also avoids disadvantages such as harming the pet's body; nose print recognition is simple to operate, which can save pet service agencies a great deal of identity authentication time and improve work efficiency. In addition, the nose print recognition scheme adopted in this embodiment can better support activities for cats such as competitions, insurance and medical care.
  • the image data to be recognized is the first to-be-recognized nose print image and the second to-be-recognized nose print image; referring to Fig. 3, step S3 of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result includes:
  • S31 Input the first to-be-recognized noseprint image and the second to-be-recognized noseprint image into the trained noseprint feature extraction model to perform a feature extraction operation to obtain a first noseprint feature vector and a second noseprint feature vector.
  • the trained nose print feature extraction model is obtained by continuously and iteratively training a pre-built basic nose print deep learning network on a large cat nose print image data set, yielding a model with a high nose print feature recognition rate.
  • the image data to be recognized being the first and second to-be-recognized nose print images can be understood as the specific nose print recognition scene of judging whether the cats in two nose print images are the same; the two nose print images are the first to-be-recognized nose print image and the second to-be-recognized nose print image.
  • the first nose print feature vector is the feature sequence, usually presented as a vector, output after the trained nose print feature extraction model performs nose print feature extraction on the first to-be-recognized nose print image; likewise, the second nose print feature vector is the feature sequence output after the trained nose print feature extraction model performs nose print feature extraction on the second to-be-recognized nose print image.
  • the first and second to-be-recognized nose print images are respectively input into the trained nose print feature extraction model, which outputs the feature sequence corresponding to each image, i.e. the first nose print feature vector and the second nose print feature vector; this accurately captures the nose print features in the image data and, to a certain extent, guarantees the accuracy of cat nose print recognition.
  • S32 Calculate the first similarity between the first nose print feature vector and the second nose print feature vector.
  • the first similarity is used to quantify the similarity between the first nose print feature vector and the second nose print feature vector, that is, the similarity is digitized.
  • the larger the first similarity, the more similar the cat nose prints in the first and second to-be-recognized nose print images corresponding to the two feature vectors, i.e. the more likely the cats in the images are the same.
  • the first similarity between the first and second nose print feature vectors can be computed, for example, as the cosine of the angle between the two feature vectors (a cosine similarity algorithm), or as the dot product of the two feature vectors; other calculation methods may also be used and are not specifically limited here.
  • in this embodiment, the dot product of the first nose print feature vector and the second nose print feature vector is calculated, and the value of the dot product is used as the first similarity.
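The similarity calculation described above can be sketched as follows; both the dot-product form used in this embodiment and the cosine form mentioned as an alternative are shown. The feature vectors and the first threshold are hypothetical values for illustration.

```python
import math

def dot(u, v):
    # Dot product between two nose print feature vectors; its value is
    # used directly as the first similarity in this embodiment.
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # Alternative mentioned in the text: cosine of the angle between
    # the two feature vectors.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

v1 = [0.2, 0.9, 0.4]     # hypothetical first nose print feature vector
v2 = [0.25, 0.85, 0.38]  # hypothetical second nose print feature vector

first_similarity = dot(v1, v2)
threshold = 0.8          # hypothetical preset first threshold
same_cat = first_similarity > threshold  # the first comparison condition
```

The comparison in the final line is the "first comparison condition" of step S33: if it holds, the two images are judged to show the same cat.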
  • the first comparison condition is used to measure whether the first similarity reaches a standard for determining whether the first nose print feature vector and the second nose print feature vector are consistent; it can be set according to actual application requirements and is not specifically limited here.
  • the first comparison condition is whether the first similarity is greater than a preset first threshold;
  • determining whether the first similarity meets the preset first comparison condition means comparing the first similarity obtained in step S32 with the preset first threshold.
  • S331 If the first similarity meets the preset first comparison condition, output information that the first to-be-recognized noseprint image and the second to-be-recognized noseprint image are the same cat.
  • in this case the first and second nose print feature vectors are nose print feature vectors of the same cat, i.e. the cat nose prints in the first and second to-be-recognized nose print images belong to the same cat; it can therefore be determined that the cats in the two images are the same, and the information that the first and second to-be-recognized nose print images show the same cat is output to the client for the user to use or manage.
  • S332 If the first similarity does not meet the preset first comparison condition, output information that the cats in the first to-be-recognized noseprint image and the second to-be-recognized noseprint image are not the same cat.
  • when the comparison of the first similarity with the preset first threshold in step S33 shows that the first similarity is less than or equal to the preset first threshold, i.e. the first similarity does not meet the preset first comparison condition, the first and second nose print feature vectors are not nose print feature vectors of the same cat, i.e. the cat nose prints in the first and second to-be-recognized nose print images do not belong to the same cat; it can therefore be determined that the cats in the two images are not the same, and this information is output to the client for the user to use or manage.
  • in this embodiment, nose print features are obtained by inputting the two nose print images into the trained nose print feature extraction model, the dot product of the two nose print feature vectors is calculated to indicate the similarity, and the similarity is compared with the set first threshold to determine whether the images show the same cat. This ensures the accuracy of the acquired nose print features and hence of cat nose print recognition; at the same time, the operation is simple and the computational complexity low, which improves the efficiency of cat nose print recognition to a certain extent.
  • the image data to be recognized is the third to-be-recognized nose print image and a first ID identifier; step S3 of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result also includes:
  • the image data to be recognized being the third to-be-recognized nose print image and the first ID identifier can be understood as the specific nose print recognition scene of judging whether the cat in a single to-be-recognized nose print image is the same as the cats in a group of nose print images under one ID; the single image is the third to-be-recognized nose print image, and the group of nose print images is the one matching the first ID identifier.
  • an index can be performed in the database according to the acquired first ID identifier to obtain the group of nose print images matching the first ID identifier.
  • S42 Input the third to-be-recognized nose print image and the group of nose print images respectively into the trained nose print feature extraction model for the feature extraction operation, obtaining a third nose print feature vector and N nose print feature vectors, where N is a positive integer greater than 0.
  • the third nose print feature vector is a feature sequence that is output after performing nose print feature extraction on the third to-be-recognized nose print image by using a trained nose print feature extraction model, and is usually presented in the form of a vector;
  • the N nose print feature vectors are N feature sequences that are output after performing nose print feature extraction on a group of nose print images by using a trained nose print feature extraction model.
  • the third to-be-recognized nose print image and the group of nose print images are respectively input into the trained nose print feature extraction model, which outputs the feature sequence corresponding to the third to-be-recognized image, i.e. the third nose print feature vector, and the N feature sequences corresponding to the group of images, i.e. the N nose print feature vectors; this accurately captures the nose print features in the image data and, to a certain extent, guarantees the accuracy of cat nose print recognition.
  • S43 Calculate the second similarity between the third nose print feature vector and each nose print feature vector respectively.
  • the second similarity is used to quantify the similarity between the third nose print feature vector and the N nose print feature vectors.
  • the second similarity between the third nose print feature vector and each nose print feature vector is calculated pairwise in sequence; the specific calculation method is the same as that used for the first similarity in step S32 and is not elaborated here.
  • the second comparison condition is used to measure whether a second similarity reaches a standard for determining whether the third nose print feature vector is consistent with a given one of the N nose print feature vectors; it can be set according to application requirements and is not specifically restricted here.
  • the second comparison condition is whether the second similarity is greater than a preset second threshold
  • determining whether the second similarities meet the preset second comparison condition means comparing each of the N second similarities obtained in step S43 with the preset second threshold.
  • when the comparison of the N second similarities with the preset second threshold in step S44 shows that M second similarities are greater than the preset second threshold, i.e. M second similarities meet the preset second comparison condition, it can be understood that the corresponding M nose print feature vectors and the third nose print feature vector are nose print feature vectors of the same cat, i.e. the cat nose prints in the nose print images corresponding to those M feature vectors and in the third to-be-recognized nose print image belong to the same cat; it can therefore be determined that the cats in those images are the same, and the information that the nose print images corresponding to the M feature vectors and the third to-be-recognized nose print image show the same cat can be output to the client for the user to use or manage.
  • this embodiment extracts the nose print features of a single to-be-recognized nose print image and of a group of nose print images under a certain ID, computes the dot product between the feature vector of the single image and each of the group's feature vectors in turn to obtain similarities, compares each similarity with the preset second threshold, and counts how many exceed it; whether the images show the same cat is then determined from this count.
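The 1-to-N matching flow above (compute dot products in turn, count how many exceed the second threshold, then judge from that count) can be sketched as follows. The vectors, the second threshold, and the `min_matches` decision rule are hypothetical, since the text does not fix how the count maps to a final decision.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def match_against_group(query_vec, group_vecs, second_threshold, min_matches=1):
    """Count how many of the group's similarities exceed the threshold (M)
    and decide from that count whether it is the same cat."""
    m = sum(1 for g in group_vecs if dot(query_vec, g) > second_threshold)
    return m, m >= min_matches

query = [0.2, 0.9, 0.4]            # vector of the third to-be-recognized image
group = [[0.25, 0.85, 0.38],       # N vectors indexed by the first ID
         [0.9, 0.1, 0.2],
         [0.22, 0.88, 0.41]]
m, same = match_against_group(query, group, second_threshold=0.8)
```

Here two of the three group vectors clear the hypothetical threshold, so the query is judged to match the cat registered under that ID.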
  • the image data to be recognized is the fourth to-be-recognized nose print image and a second ID identifier; referring to Fig. 5, step S3 of identifying the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result also includes:
  • the image data to be recognized being the fourth to-be-recognized nose print image and the second ID identifier can be understood as the specific nose print recognition scene of judging whether the cat in a single to-be-recognized nose print image is the same as the cat corresponding to one stored nose print feature vector under one ID; the single image is the fourth to-be-recognized nose print image, and the stored vector is the fifth nose print feature vector matching the second ID identifier.
  • in this embodiment, an index can be performed in the database according to the acquired second ID identifier to obtain the fifth nose print feature vector matching the second ID identifier.
  • the fourth nose print feature vector is a feature sequence output after performing nose print feature extraction on the fourth to-be-recognized nose print image by using a trained nose print feature extraction model, and is usually presented in the form of a vector.
  • the fourth to-be-recognized nose print image is input into the trained nose print feature extraction model, which outputs the corresponding feature sequence, i.e. the fourth nose print feature vector; this accurately captures the nose print features in the image data and, to a certain extent, guarantees the accuracy of cat nose print recognition.
  • S53 Calculate the third similarity between the fourth nose print feature vector and the fifth nose print feature vector.
  • the third degree of similarity is used to quantify the degree of similarity between the fourth nose print feature vector and the fifth nose print feature vector.
  • this embodiment calculates the third similarity between the fourth nose print feature vector and the fifth nose print feature vector, and the specific calculation method is the same as the method for calculating the first similarity in step S32, which is not repeated here.
  • the third comparison condition is used to measure whether the third similarity reaches a standard for determining whether the fourth nose print feature vector and the fifth nose print feature vector are consistent; it can be set according to actual application requirements and is not specifically restricted here.
• if the third similarity meets the preset third comparison condition, the fourth nose print feature vector and the fifth nose print feature vector are nose print feature vectors of the same cat, that is, the cat nose print in the fourth to-be-recognized nose print image and the cat nose print corresponding to the fifth nose print feature vector belong to the same cat. It can therefore be determined that the fourth to-be-recognized nose print image and the fifth nose print feature vector correspond to the same cat, and the information that they correspond to the same cat can be output to the client for the user to use or manage.
• if the third similarity does not meet the preset third comparison condition, the fourth nose print feature vector and the fifth nose print feature vector are not nose print feature vectors of the same cat, that is, the cat nose print in the fourth to-be-recognized nose print image and the cat nose print corresponding to the fifth nose print feature vector do not belong to the same cat. It can therefore be determined that the fourth to-be-recognized nose print image and the fifth nose print feature vector do not correspond to the same cat, and the information that they do not correspond to the same cat can be output to the client for the user to use or manage.
• in this embodiment, a fourth nose print feature vector is extracted from the fourth to-be-recognized nose print image by the nose print feature extraction model; the dot product between the fourth nose print feature vector and the fifth nose print feature vector corresponding to an ID in the database is then calculated to obtain the similarity, and the similarity is compared with a threshold to determine whether the fourth to-be-recognized nose print image and the fifth nose print feature vector correspond to the same cat.
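The 1:1 verification flow summarized above can be sketched in code. This is a minimal illustration under assumptions of our own, not the patented implementation: the 4-dimensional vectors and the threshold value 0.8 are hypothetical stand-ins for the model's real feature output and the preset third threshold, and the vectors are L2-normalised so that the dot product behaves as a cosine similarity.

```python
import numpy as np

def verify_against_stored(query_vec, stored_vec, threshold=0.8):
    """Compare the feature vector of one to-be-recognized nose print
    image with the stored feature vector indexed under one ID.
    Both vectors are L2-normalised, so the dot product equals the
    cosine similarity and lies in [-1, 1]."""
    q = np.asarray(query_vec, dtype=float)
    s = np.asarray(stored_vec, dtype=float)
    q = q / np.linalg.norm(q)
    s = s / np.linalg.norm(s)
    similarity = float(np.dot(q, s))           # the "third similarity"
    return similarity, similarity > threshold  # the "third comparison condition"

# Hypothetical vectors standing in for real model output.
sim, same_cat = verify_against_stored([1.0, 0.0, 0.0, 0.0],
                                      [0.9, 0.1, 0.0, 0.0])
```

Here `same_cat` being true corresponds to outputting to the client that the fourth to-be-recognized nose print image and the fifth nose print feature vector belong to the same cat.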
• when the image data to be recognized is the sixth to-be-recognized nose print image and the ID identification data, referring to FIG. 6, step S3 of recognizing the image data to be recognized according to the nose print image recognition mode and outputting the nose print recognition result further includes:
  • S61 Input the sixth to-be-recognized nose print image into the trained nose print feature extraction model to perform a feature extraction operation to obtain a sixth nose print feature vector.
• the image data to be recognized being the sixth to-be-recognized nose print image and the ID identification data can be understood as the specific nose print recognition scene of judging, from a single nose print image to be recognized and the nose print feature vectors corresponding to all IDs in the database, which, if any, of those feature vectors corresponds to the same cat as the sixth to-be-recognized nose print image; the single nose print image is the sixth to-be-recognized nose print image, and the stored vectors are all the nose print feature vectors in the database matching the ID identification data.
  • the sixth nose print feature vector is a feature sequence output after performing nose print feature extraction on the sixth to-be-recognized nose print image by using the trained nose print feature extraction model, and is usually presented in the form of a vector.
• the nose print feature extraction operation is performed by inputting the sixth to-be-recognized nose print image into the trained nose print feature extraction model, which outputs the feature sequence corresponding to the sixth to-be-recognized nose print image, that is, the sixth nose print feature vector. In this way the nose print features in the image data can be obtained accurately, ensuring the accuracy of cat nose print recognition to a certain extent.
  • the feature similarity is used to quantify the similarity between the sixth nose print feature vector and the K cat nose print feature vectors.
• K feature similarities can be obtained by sequentially calculating the feature similarity between the sixth nose print feature vector and each cat nose print feature vector; the specific calculation method is the same as that used for the first similarity in step S32, and is not repeated here.
  • the feature similarity is sorted in descending order, so as to quickly obtain the feature similarity ranked first as the maximum feature similarity.
• the fourth comparison condition is used to measure whether the maximum feature similarity (and the other feature similarities) reaches a standard that can determine whether the sixth nose print feature vector is consistent with any of the K cat nose print feature vectors; it can be set according to actual application requirements, and is not specifically limited here.
• the fourth comparison condition is whether the feature similarity is greater than the preset fourth threshold.
• determining whether the maximum feature similarity meets the preset fourth comparison condition means comparing the maximum feature similarity obtained in step S63 with the preset fourth threshold.
• when the comparison result of step S64 is that the maximum feature similarity is less than or equal to the preset fourth threshold, that is, the maximum feature similarity does not meet the preset fourth comparison condition, it can be understood that none of the K cat nose print feature vectors is a nose print feature vector of the same cat as the sixth nose print feature vector, that is, no cat corresponding to the K cat nose print feature vectors is the same as the cat in the sixth to-be-recognized nose print image. The information that the sixth to-be-recognized nose print image and the cats corresponding to the K cat nose print feature vectors contain no same cat can then be output to the client for the user to use or manage.
• when the comparison result of step S64 is that the maximum feature similarity is greater than the preset fourth threshold, that is, the maximum feature similarity meets the preset fourth comparison condition, the similarities meeting the condition may specifically be obtained by comparing each of the K feature similarities with the preset fourth threshold and recording the comparison results.
• when the comparison of the K feature similarities with the preset fourth threshold in step S642 shows that J feature similarities are greater than the preset fourth threshold, that is, J feature similarities meet the preset fourth comparison condition, it can be understood that the J cat nose print feature vectors and the sixth nose print feature vector are nose print feature vectors of the same cat, that is, the cat nose prints in the nose print images corresponding to the J cat nose print feature vectors and in the sixth to-be-recognized nose print image belong to the same cat. It can therefore be determined that the nose print images corresponding to the J cat nose print feature vectors and the sixth to-be-recognized nose print image show the same cat, and the information that the cat is the same can be output to the client for the user to use or manage.
• in this embodiment, the sixth nose print feature vector is extracted from the sixth to-be-recognized nose print image by the nose print feature extraction model; the dot-product similarity between the sixth nose print feature vector and the cat nose print feature vectors corresponding to all IDs in the database is then calculated in sequence, and the K feature similarities are sorted from high to low. If the largest feature similarity does not exceed the set fourth threshold, it is determined that no nose print in the database belongs to the same cat as the sixth to-be-recognized nose print image.
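The 1:N retrieval flow above can be sketched as follows. This is a minimal illustration under our own assumptions: the gallery contents, IDs such as "A111", and the threshold 0.8 are hypothetical, standing in for the database of stored feature vectors and the preset fourth threshold.

```python
import numpy as np

def identify(query_vec, gallery, threshold=0.8):
    """gallery maps an ID identifier to its stored nose print
    feature vector.  The dot-product similarity against every
    stored vector is computed (K similarities), sorted from high
    to low, and the IDs whose similarity exceeds the threshold
    (the J matches) are returned; an empty list means the maximum
    similarity failed the fourth comparison condition."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scored = []
    for cat_id, vec in gallery.items():
        v = np.asarray(vec, dtype=float)
        v = v / np.linalg.norm(v)
        scored.append((float(np.dot(q, v)), cat_id))
    scored.sort(reverse=True)          # K feature similarities, descending
    if scored[0][0] <= threshold:      # maximum similarity does not pass
        return []
    return [cat_id for s, cat_id in scored if s > threshold]

gallery = {"A111": [1, 0, 0], "A112": [0, 1, 0], "A113": [0.95, 0.05, 0]}
matches = identify([1, 0, 0], gallery)   # IDs judged to be the same cat
```

Returning the matches already sorted by similarity mirrors the high-to-low ordering of the K similarities in step S63.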
  • the method further includes:
• in order to obtain a model with high nose print recognition accuracy, this embodiment builds the basic nose print deep learning network using Resnet50 as the skeleton network; other networks can also be used according to actual application requirements, which is not specifically limited here.
  • this embodiment uses Resnet50 as the skeleton network, then introduces the attention model to strengthen the network, then uses the loss function to converge the entire network, and finally outputs the classification result through the softmax layer to construct the basic nose print deep learning network
• the basic nose print deep learning network can be understood as including an input layer for preprocessing image data, convolution layers for nose print feature extraction, sampling layers for further nose print feature extraction, pooling layers for nose print feature compression, a fully connected layer for classifying nose prints, and a softmax layer for outputting the nose print classification result, which can ensure the accuracy of nose print recognition to a certain extent.
  • the nose print data set is a pre-processed and prepared nose print training set and a nose print test set.
  • the database can be indexed according to the data type required by the actual application, so as to quickly and accurately obtain the pre-processed and prepared nose print training set and nose print test set for subsequent training.
  • S73 Input the nose print data set into the basic nose print deep learning network to perform an iterative training operation to obtain a trained nose print classification model.
  • the trained nose print classification model is a model used to identify the nose print features of the nose print images, and the extracted nose print features can be used to classify the nose prints.
• a large amount of iterative training is performed on the basic nose print deep learning network using the preprocessed nose print training set and nose print test set, so as to obtain a nose print classification model that can identify the nose print features of nose print images and use the extracted nose print features for nose print classification.
• S74 Intercept the layers from the input layer to the feature output in the nose print classification model as the nose print feature extraction model.
• intercepting the layers from the input layer to the feature output in the nose print classification model can be understood as intercepting the input layer for preprocessing image data, the convolution layers for nose print feature extraction, the sampling layers for further nose print feature extraction, and the pooling layers for nose print feature compression; other layers can also be intercepted according to actual application requirements, which is not specifically limited here.
• specifically, the fully connected layer and the layers after it in the nose print classification model can be cut off, and the layers from the input layer to the feature output are retained as the nose print feature extraction model.
• to sum up, this embodiment uses Resnet50 as the skeleton network, introduces the attention model to strengthen the network, uses the loss function to converge the entire network, and finally obtains the classification result through the softmax layer, thereby obtaining the basic nose print deep learning network from which the nose print feature extraction model is intercepted.
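The idea of training a classifier and then intercepting everything up to the feature output can be illustrated with a toy network. This is not the patented ResNet50-plus-attention model: the layer sizes and random weights are hypothetical, and the sketch only shows the structural point that the feature extractor is the classification network minus its fully connected and softmax layers.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyNosePrintClassifier:
    """Toy stand-in for the nose print classification model: a
    'backbone' producing a feature vector, followed by a fully
    connected layer and a softmax over cat IDs."""

    def __init__(self, in_dim=8, feat_dim=4, n_cats=3):
        self.w_backbone = rng.standard_normal((in_dim, feat_dim))
        self.w_fc = rng.standard_normal((feat_dim, n_cats))

    def features(self, x):
        # Layers from the input up to the feature output: this part
        # alone is kept as the nose print feature extraction model.
        return np.maximum(np.asarray(x, dtype=float) @ self.w_backbone, 0.0)

    def classify(self, x):
        # Full network used during training: features -> fully
        # connected layer -> softmax classification result.
        logits = self.features(x) @ self.w_fc
        e = np.exp(logits - logits.max())
        return e / e.sum()

model = TinyNosePrintClassifier()
probs = model.classify(np.ones(8))   # softmax output over cat IDs
feat = model.features(np.ones(8))    # output of the intercepted model
```

After training, only `features` would be exported; at recognition time its output vectors are compared by dot product as described above.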
  • the method further includes:
• S81 Collect A groups of nose print images of different cats, where A is a positive integer greater than 0.
• this embodiment collects A groups of nose print images of different cats, for example 3000 groups of nose print images of different cats.
  • the ID identifier is used to uniquely identify a cat, wherein each cat is in a one-to-one correspondence with the ID identifier.
• a fixed and distinguishable ID identifier, such as A111, is allocated to each cat.
  • S83 For each ID identifier, perform a preprocessing operation on each nose print image to obtain a processed nose print training image and a nose print test image.
• for each ID identifier, this embodiment performs a preprocessing operation on each nose print image, so as to obtain a nose print training image and a nose print test image whose processed data format is adapted to the basic nose print deep learning network.
  • the preprocessing operation may be image preprocessing means such as rotation, size change, light and shade change, etc. of the nose print image, and other image processing means may also be performed according to actual application requirements, which are not specifically limited here.
• image preprocessing methods such as image rotation, size change, and light and dark changes are performed on each nose print image, so as to obtain a nose print training image and a nose print test image whose processed data format is adapted to the basic nose print deep learning network.
  • S84 Save the nose print training image and the nose print test image corresponding to the ID identifier in the database to obtain a nose print data set.
  • the nose print training image and the nose print test image obtained in step S83 are stored in a database in one-to-one correspondence with their ID identifiers, so as to obtain a nose print data set.
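The preprocessing named above (rotation, size change, light and dark change) can be sketched with plain array operations. A real pipeline would typically use an image library such as OpenCV or torchvision; the specific parameters here are hypothetical.

```python
import numpy as np

def augment(img, rot_quarters=1, brightness=1.2):
    """Apply the three preprocessing steps named in the text to one
    nose print image, represented as a uint8 grayscale array."""
    out = np.rot90(img, k=rot_quarters)                    # rotation
    out = out[::2, ::2]                                    # crude size change
    out = np.clip(out.astype(float) * brightness, 0, 255)  # light/dark change
    return out.astype(np.uint8)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # dummy nose print image
train_img = augment(img)
```

Each augmented copy would then be stored in the database together with the cat's ID identifier, forming the nose print training and test sets of step S84.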
  • a cat nose print recognition device based on a cat nose print feature extraction model is provided, referring to FIG. 9 , including:
  • a request receiving module 901 configured to receive a cat nose print identification request, where the cat nose print identification request at least carries image data to be identified;
• This process of bending and wrinkling fluctuates with the change of the pressure that the inner layer exerts on the upper layer, forming uneven ridges or folds, until the development process stops and the nose print finally takes a fixed shape that remains unchanged until death.
  • the formation of cat nose prints is exactly the same as the formation of human fingerprints.
• the nose is the most peculiar part of a cat. Cats pay special attention to the safety of their nose, because it is the foundation of their survival: once they smell a dangerous scent, they immediately respond to defend themselves. The passage of time does not affect the shape of the nose texture.
• the cat nose print recognition request is an operation request, input by the user according to the actual specific nose print recognition scene, for which a recognition operation needs to be performed; the cat nose print recognition request carries at least the image data to be recognized, that is, the image data on which recognition is actually to be performed.
• receiving a cat nose print recognition request input by a user from the client and obtaining the image data to be recognized that it carries enables the subsequent analysis to accurately identify the cat nose print in the image data to be recognized, so as to accurately obtain the cat identity in the specific nose print recognition scene.
• Recognition mode selection module 902, used for selecting a nose print image recognition mode that matches the image data to be recognized.
• the nose print image recognition modes are several recognition methods suitable for cat nose print images, obtained through simulation and experiment for actual specific nose print recognition scenes.
• the nose print image recognition mode in this embodiment may specifically combine a deep learning model constructed and trained for cat nose print feature extraction with a feature matching algorithm for nose print recognition, which improves the accuracy of cat nose print recognition.
• the recognition mode subsequently obtained according to the index can then be used to recognize and analyze the image data to be recognized, so as to accurately identify the cat nose print in the image data to be recognized and thereby accurately obtain the cat identity in the specific nose print recognition scene.
  • the recognition result output module 903 is configured to recognize the image data to be recognized according to the nose print image recognition mode, and output the nose print recognition result.
• the nose print recognition result indicates whether the nose prints in the image data to be recognized are consistent, and can be used to indicate whether the image data to be recognized belong to the same cat.
• the cat nose print recognition device based on the cat nose print feature extraction model in the embodiment of the present invention first obtains the image data to be recognized from the cat nose print recognition request; it then selects the nose print image recognition mode matching the image data to be recognized, so as to serve the specific nose print recognition scene, recognizes the image data to be recognized according to the nose print image recognition mode, and outputs the nose print recognition result, ensuring the accuracy of nose print recognition in the specific nose print recognition scene. The cat nose print recognition device based on the cat nose print feature extraction model of the present invention can thus improve the accuracy of cat nose print recognition; the present invention also has low computational complexity, is simple and practical, and has low cost.
• the nose print recognition technology adopted in this embodiment not only has low cost, requires no additional equipment, and has high recognition accuracy, but also avoids disadvantages such as harming the pet's body; nose print recognition is simple to operate, which can help pet service agencies save a great deal of identity authentication time and improve work efficiency. In addition, the nose print recognition scheme adopted in this embodiment can help cats better participate in activities such as competitions, insurance, and medical care.
  • the recognition result output module 903 includes:
• the first feature extraction unit 101 is used for inputting the first to-be-recognized nose print image and the second to-be-recognized nose print image into the trained nose print feature extraction model to perform feature extraction operations, obtaining a first nose print feature vector and a second nose print feature vector.
• the trained nose print feature extraction model is obtained by continuously and iteratively training a pre-built basic nose print deep learning network with a large cat nose print image data set, and is a model with a high nose print feature recognition rate.
• the image data to be recognized being the first to-be-recognized nose print image and the second to-be-recognized nose print image can be understood as the specific nose print recognition scene of determining whether the cats in two nose print images are the same; the two nose print images are the first to-be-recognized nose print image and the second to-be-recognized nose print image.
• the first nose print feature vector is a feature sequence output after performing nose print feature extraction on the first to-be-recognized nose print image by the trained nose print feature extraction model, usually presented in the form of a vector; the second nose print feature vector is a feature sequence output after performing nose print feature extraction on the second to-be-recognized nose print image by the trained nose print feature extraction model.
• the first to-be-recognized nose print image and the second to-be-recognized nose print image are respectively input into the trained nose print feature extraction model to perform nose print feature extraction, which outputs the feature sequence corresponding to the first to-be-recognized nose print image, that is, the first nose print feature vector, and the feature sequence corresponding to the second to-be-recognized nose print image, that is, the second nose print feature vector. In this way the nose print features in the image data can be obtained accurately, ensuring the accuracy of cat nose print recognition to a certain extent.
  • the first similarity is used to quantify the similarity between the first nose print feature vector and the second nose print feature vector, that is, the similarity is digitized.
• the greater the first similarity, the more similar the cat nose prints in the first and second to-be-recognized nose print images corresponding to the first and second nose print feature vectors, that is, the more similar the cats in the images.
• calculating the first similarity between the first nose print feature vector and the second nose print feature vector can specifically be done by calculating the cosine of the angle between the two feature vectors (a cosine similarity algorithm), or by using the dot product between the two feature vectors to represent the first similarity; other calculation methods may also be used, which are not specifically limited here.
  • the dot product between the first nose print feature vector and the second nose print feature vector is calculated, and the value of the dot product is used as the first similarity.
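The two similarity measures mentioned (cosine and dot product) coincide when the feature vectors are normalised to unit length. A minimal sketch, with made-up two-dimensional vectors in place of real model output:

```python
import numpy as np

def cosine_similarity(a, b):
    """First similarity between two nose print feature vectors,
    computed as the cosine of the angle between them.  For unit
    vectors this reduces to the plain dot product used in the text."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

same = cosine_similarity([3, 4], [3, 4])        # identical direction
orthogonal = cosine_similarity([1, 0], [0, 1])  # nothing in common
```

The result lies in [-1, 1], which makes a fixed first threshold meaningful regardless of vector magnitude.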
  • the first comparison condition judgment unit 103 is configured to judge whether the first similarity complies with the preset first comparison condition.
  • the first comparison condition is used to measure whether the first similarity reaches a standard that can determine whether the first nose print feature vector and the second nose print feature vector are consistent. Specifically, it can be set according to actual application requirements. There are no specific restrictions.
• the first comparison condition is whether the first similarity is greater than the preset first threshold.
• determining whether the first similarity meets the preset first comparison condition means comparing the first similarity obtained by the first similarity calculation unit 102 with the preset first threshold.
  • the first information output unit 1031 is configured to output information that the cats in the first to-be-recognized noseprint image and the second to-be-recognized noseprint image are the same if the first similarity meets the preset first comparison condition.
• if the first similarity meets the preset first comparison condition, the first nose print feature vector and the second nose print feature vector are nose print feature vectors of the same cat, that is, the cat nose prints in the first and second to-be-recognized nose print images belong to the same cat. It can therefore be determined that the cat in the first to-be-recognized nose print image and in the second to-be-recognized nose print image is the same, and the information that the cat in the two images is the same can be output to the client for the user to use or manage.
  • the second information output unit 1032 is configured to output information that the cat in the first to-be-recognized noseprint image and the second to-be-recognized noseprint image are not the same if the first similarity does not meet the preset first comparison condition.
• if the first similarity does not meet the preset first comparison condition, the first nose print feature vector and the second nose print feature vector are not nose print feature vectors of the same cat, that is, the cat nose prints in the first and second to-be-recognized nose print images do not belong to the same cat. It can therefore be determined that the cats in the first and second to-be-recognized nose print images are not the same, and the information that the cats in the two images are not the same can be output to the client for the user to use or manage.
• this embodiment obtains nose print features by inputting the two nose print images into the trained nose print feature extraction model respectively, then represents the similarity by the dot product of the two nose print feature vectors, and compares the similarity with the set first threshold to determine whether the images show the same cat. This ensures the accuracy of nose print recognition, is easy to operate, and has low computational complexity, which can improve the efficiency of cat nose print recognition to a certain extent.
  • the recognition result output module 903 further includes:
  • the nose print image obtaining unit 111 is configured to obtain a group of nose print images matching the first ID identifier in the database.
• the image data to be identified is the third to-be-recognized nose print image and the first ID identifier. This can be understood as the specific nose print recognition scene of judging, from a single nose print image to be recognized and a group of nose print images stored under one ID, whether the cats are the same; the single nose print image is the third to-be-recognized nose print image, and the group of nose print images is the one matching the first ID identifier.
• an index lookup can be performed in the database according to the acquired first ID identifier, so as to obtain the group of nose print images matching the first ID identifier.
• the second feature extraction unit 112 is configured to input the third to-be-recognized nose print image and the group of nose print images respectively into the trained nose print feature extraction model for feature extraction, obtaining a third nose print feature vector and N nose print feature vectors, where N is a positive integer greater than 0.
  • the third nose print feature vector is a feature sequence that is output after performing nose print feature extraction on the third to-be-recognized nose print image by using a trained nose print feature extraction model, and is usually presented in the form of a vector;
  • the N nose print feature vectors are N feature sequences that are output after performing nose print feature extraction on a group of nose print images by using a trained nose print feature extraction model.
• the third to-be-recognized nose print image and the group of nose print images are respectively input into the trained nose print feature extraction model to perform nose print feature extraction, which outputs the feature sequence corresponding to the third to-be-recognized nose print image, that is, the third nose print feature vector, and the N feature sequences corresponding to the group of nose print images, that is, the N nose print feature vectors.
  • the second similarity calculation unit 113 is configured to calculate the second similarity between the third nose print feature vector and each nose print feature vector respectively.
  • the second similarity is used to quantify the similarity between the third nose print feature vector and the N nose print feature vectors.
• the second similarity between the third nose print feature vector and each nose print feature vector can be calculated pairwise in sequence; the specific calculation method is the same as that used for the first similarity in the first similarity calculation unit 102, and is not repeated here.
  • the second comparison condition determination unit 114 is configured to determine whether the second similarity meets the preset second comparison condition.
• the second comparison condition is used to measure whether the second similarity reaches a standard that can determine which, if any, of the N nose print feature vectors is consistent with the third nose print feature vector; it can be set according to actual application requirements, and is not specifically limited here.
  • the second comparison condition is whether the second similarity is greater than the preset second threshold
• determining whether the second similarity meets the preset second comparison condition means comparing each of the N second similarities obtained by the second similarity calculation unit 113 with the preset second threshold.
• the third information output unit 115 is configured to, if there are M second similarities that meet the preset second comparison condition, output the information that the cats in the nose print images corresponding to the M second similarities and in the third to-be-recognized nose print image are the same, where M is an integer less than or equal to N and greater than or equal to 0.
• when the comparison of the N second similarities with the preset second threshold in the second comparison condition judging unit 114 shows that M second similarities are greater than the preset second threshold, that is, M second similarities meet the preset second comparison condition, it can be understood that the M corresponding nose print feature vectors and the third nose print feature vector are nose print feature vectors of the same cat, that is, the cat nose prints in the nose print images corresponding to the M nose print feature vectors and in the third to-be-recognized nose print image belong to the same cat. It can therefore be determined that the nose print images corresponding to the M nose print feature vectors and the third to-be-recognized nose print image show the same cat, and the information that the cat is the same can be output to the client for the user to use or manage.
• this embodiment extracts the nose print features of the single nose print image to be recognized and of the group of nose print images under a certain ID respectively; the feature vector of the single nose print image is then dotted in turn with the feature vectors of the group of nose print images to calculate the similarities, each calculated similarity is compared with the preset second threshold, and the number meeting the second threshold is recorded, so that whether the cat is the same is finally determined by judging this number of similar matches.
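The 1:group comparison described above can be sketched as counting matches. The group vectors and the threshold 0.8 are hypothetical stand-ins for the stored group under one ID and the preset second threshold.

```python
import numpy as np

def match_count(query_vec, group_vecs, threshold=0.8):
    """Dot the query feature vector against each of the N vectors
    of the group stored under one ID and count how many (M) exceed
    the second threshold; vectors are L2-normalised first."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    m = 0
    for vec in group_vecs:
        v = np.asarray(vec, dtype=float)
        v = v / np.linalg.norm(v)
        if float(np.dot(q, v)) > threshold:
            m += 1
    return m

group = [[1, 0], [0.9, 0.1], [0, 1]]   # N = 3 stored vectors
m = match_count([1, 0], group)          # M vectors pass the threshold
```

A final decision (for example, same cat when M is large enough relative to N) would then follow the judging step described in the text.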
  • the recognition result output module 903 further includes:
  • the nose print feature obtaining unit 121 is configured to obtain, from the database, the fifth nose print feature vector matching the second ID identifier.
  • Here the image data to be recognized consists of the fourth to-be-recognized nose print image and the second ID identifier. The concrete recognition scene is to judge whether a single to-be-recognized nose print image and the single nose print feature vector stored under one ID correspond to the same cat; the single image is the fourth to-be-recognized nose print image, and the stored vector is the fifth nose print feature vector matching the second ID identifier.
  • Since the database stores a fixed, distinguishable ID number for each collected cat, this embodiment can index the database with the acquired second ID identifier to retrieve the matching fifth nose print feature vector.
  • the third feature extraction unit 122 is configured to input the fourth to-be-recognized nose print image into the trained nose print feature extraction model for feature extraction, obtaining the fourth nose print feature vector.
  • The fourth nose print feature vector is the feature sequence, usually presented as a vector, output by the trained nose print feature extraction model after extracting nose print features from the fourth to-be-recognized nose print image.
  • Feeding the fourth to-be-recognized nose print image into the trained model and taking the output feature sequence as the fourth nose print feature vector allows the nose print features in the image data to be captured accurately, which helps ensure the accuracy of cat nose print recognition.
  • the third similarity calculation unit 123 is configured to calculate the third similarity between the fourth and fifth nose print feature vectors.
  • The third similarity quantifies the degree of similarity between the fourth and fifth nose print feature vectors.
  • The calculation method is the same as that of the first similarity in the first similarity calculation unit 102 and is not repeated here.
  • the third comparison condition judging unit 124 is configured to judge whether the third similarity meets the preset third comparison condition.
  • The third comparison condition measures whether the third similarity reaches the standard for deciding that the fourth and fifth nose print feature vectors are consistent; it can be set according to actual application requirements and is not specifically limited here.
  • Assuming the third comparison condition is that the third similarity is greater than a preset third threshold, judging whether the condition is met means comparing the third similarity obtained by the third similarity calculation unit 123 with that preset third threshold.
  • the fourth information output unit 1241 is configured to output, if the third similarity meets the preset third comparison condition, the information that the cat in the fourth to-be-recognized nose print image and the cat corresponding to the fifth nose print feature vector are the same.
  • Meeting the preset third comparison condition can be understood as the fourth and fifth nose print feature vectors being nose print feature vectors of the same cat, that is, the fourth to-be-recognized nose print image and the fifth nose print feature vector correspond to the cat nose print of the same cat. It can therefore be determined that they refer to the same cat, and this information is output to the client for the user to use or manage.
  • the fifth information output unit 1242 is configured to output, if the third similarity does not meet the preset third comparison condition, the information that the cat in the fourth to-be-recognized nose print image and the cat corresponding to the fifth nose print feature vector are not the same.
  • In that case the fourth and fifth nose print feature vectors are not nose print feature vectors of the same cat, that is, the corresponding cat nose prints belong to different cats, so it can be determined that the two do not refer to the same cat, and this information is output to the client for the user to use or manage.
  • In summary, this embodiment passes the fourth to-be-recognized nose print image through the nose print feature extraction model to obtain a feature vector, computes the dot product between that fourth nose print feature vector and the fifth nose print feature vector stored under an ID number in the database to obtain a similarity, and then decides from the similarity and the threshold whether the fourth to-be-recognized nose print image and the fifth nose print feature vector correspond to the same cat.
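The one-image-versus-one-stored-vector decision described above reduces to a dot product and a threshold comparison. A minimal sketch follows; the function names, example vectors, and threshold value are illustrative assumptions, not values from the patent.

```python
def dot(a, b):
    # dot product of two feature vectors, used as the similarity score
    return sum(x * y for x, y in zip(a, b))

def is_same_cat(vec_a, vec_b, threshold):
    """Return True when the dot-product similarity of two nose print
    feature vectors exceeds the preset threshold."""
    return dot(vec_a, vec_b) > threshold

# fourth (query) feature vector vs. fifth (stored) feature vector
fourth = [0.6, 0.8, 0.0]
fifth = [0.6, 0.7, 0.1]
result = is_same_cat(fourth, fifth, threshold=0.85)
```

In practice the stored vector would come from a database lookup by ID and the threshold would be tuned on a validation set.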
  • the recognition result output module 903 further includes:
  • the fourth feature extraction unit 131 is configured to input the sixth to-be-recognized nose print image into the trained nose print feature extraction model for feature extraction, obtaining the sixth nose print feature vector.
  • Here the image data to be recognized consists of the sixth to-be-recognized nose print image and ID identification data. The concrete recognition scene is to judge which, if any, of the nose print feature vectors stored under all IDs in the database correspond to the same cat as the single to-be-recognized image; the single image is the sixth to-be-recognized nose print image, and the candidates are all nose print feature vectors in the database matching the ID identification data.
  • The sixth nose print feature vector is the feature sequence, usually presented as a vector, output by the trained model after extracting nose print features from the sixth to-be-recognized nose print image.
  • Feeding the sixth to-be-recognized nose print image into the trained model and taking the output feature sequence as the sixth nose print feature vector allows the nose print features in the image data to be captured accurately, which helps ensure the accuracy of cat nose print recognition.
  • the feature similarity calculation unit 132 is configured to calculate the similarity between the sixth nose print feature vector and each cat nose print feature vector, obtaining K feature similarities, where K is a positive integer greater than 0.
  • A feature similarity quantifies the degree of similarity between the sixth nose print feature vector and one of the K cat nose print feature vectors.
  • The K feature similarities are obtained by computing the similarity between the sixth vector and each stored vector in turn; the calculation method is the same as that of the first similarity in the first similarity calculation unit 102 and is not repeated here.
  • the feature similarity ranking unit 133 is configured to sort the K feature similarities and obtain the maximum feature similarity.
  • Sorting the feature similarities in descending order makes it possible to quickly take the first-ranked value as the maximum feature similarity.
  • the fourth comparison condition judgment unit 134 is configured to judge whether the maximum feature similarity meets the preset fourth comparison condition.
  • The fourth comparison condition measures whether a feature similarity reaches the standard for deciding that the sixth nose print feature vector is consistent with one of the K cat nose print feature vectors; it can be set according to actual application requirements and is not specifically limited here.
  • Assuming the fourth comparison condition is that the feature similarity is greater than a preset fourth threshold, judging whether the maximum feature similarity meets the condition means comparing the maximum value obtained by the feature similarity ranking unit 133 with that preset fourth threshold.
  • the sixth information output unit 1341 is configured to output, if the maximum feature similarity does not meet the preset fourth comparison condition, the information that the database contains no cat identical to the cat in the sixth to-be-recognized nose print image.
  • In that case none of the K cat nose print feature vectors is a nose print feature vector of the same cat as the sixth nose print feature vector, that is, no cat in the database matches the cat in the sixth to-be-recognized nose print image, so this information is output to the client for the user to use or manage.
  • the similarity judging unit 1342 is configured to judge, if the maximum feature similarity meets the preset fourth comparison condition, which similarities among the K feature similarities meet the condition.
  • When the maximum feature similarity is greater than the preset fourth threshold, that is, it meets the preset fourth comparison condition, this embodiment further determines which of the K cat nose print feature vectors may belong to the same cat as the sixth nose print feature vector, specifically by comparing each of the K feature similarities with the preset fourth threshold and recording the comparison results.
  • the seventh information output unit 1343 is configured to output, if J feature similarities meet the fourth comparison condition, the information that the cats corresponding to the IDs of those J feature similarities and the cat in the sixth to-be-recognized nose print image are the same, where J is a positive integer less than or equal to K and greater than or equal to 0.
  • When the comparison in the similarity judging unit 1342 finds J feature similarities greater than the preset fourth threshold, that is, J feature similarities meet the preset fourth comparison condition, the corresponding J cat nose print feature vectors and the sixth nose print feature vector can be taken as nose print feature vectors of the same cat.
  • The nose print images corresponding to those J vectors and the sixth to-be-recognized nose print image therefore show the cat nose print of the same cat, and the information that the cats corresponding to the J cat nose print feature vectors and the cat in the sixth to-be-recognized nose print image are the same can be output to the client for the user to use or manage.
  • In summary, this embodiment passes the sixth to-be-recognized nose print image through the nose print feature extraction model to obtain the sixth nose print feature vector, computes the dot-product similarity between it and the cat nose print feature vectors corresponding to all IDs in the database in turn, and sorts the K feature similarities from high to low. If the maximum feature similarity does not exceed the set fourth threshold, it is judged that no cat corresponding to the sixth to-be-recognized nose print image exists in the database; if it does exceed the threshold, the ID corresponding to the maximum feature similarity is output, and if more than three IDs exceed the threshold, the three IDs with the highest similarity are output.
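The whole-database search above (compute K similarities, sort descending, reject if the maximum is below the threshold, otherwise return at most the top three IDs) can be sketched as follows. The function name, the toy database, and the threshold are hypothetical illustrations, not values from the patent.

```python
def search_database(query_vec, db, threshold, top=3):
    """db maps cat ID -> stored nose print feature vector. Returns up to
    `top` IDs whose dot-product similarity with the query exceeds the
    threshold, highest first, or an empty list when no cat matches."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # K feature similarities, sorted in descending order
    sims = sorted(((dot(query_vec, v), cat_id) for cat_id, v in db.items()),
                  reverse=True)
    if not sims or sims[0][0] <= threshold:   # maximum similarity too low
        return []
    return [cat_id for sim, cat_id in sims if sim > threshold][:top]

db = {"A111": [1.0, 0.0], "A112": [0.9, 0.3], "A113": [0.0, 1.0]}
ids = search_database([1.0, 0.1], db, threshold=0.8)
```

A real deployment would store many vectors per ID and would likely use an approximate nearest-neighbor index instead of a linear scan, but the decision logic is the same.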
  • the device further includes:
  • the network building module 141 is configured to construct a basic nose print deep learning network.
  • To obtain a model with high nose print recognition accuracy, this embodiment builds the basic nose print deep learning network with Resnet50 as the backbone; other networks can also be used according to actual application requirements, with no limitation here.
  • Concretely, Resnet50 is adopted as the backbone, an attention model is introduced to strengthen the network, a loss function is adopted to converge the whole network, and the classification result is finally output through a softmax layer; together these constitute the basic nose print deep learning network.
  • The basic nose print deep learning network can be understood as including an input layer for preprocessing image data, convolution layers for nose print feature extraction, sampling layers for further feature extraction, pooling layers for feature compression, a fully connected layer for classifying nose prints, and a softmax layer for outputting the classification results, which helps ensure the accuracy of nose print recognition.
  • the data set acquisition module 142 is configured to label the cat images in the database to obtain a segmentation data set for training.
  • The nose print data set consists of a preprocessed and prepared nose print training set and nose print test set.
  • The database can be indexed according to the data type required by the actual application, so that the prepared nose print training set and nose print test set are obtained quickly and accurately for subsequent training.
  • the network training module 143 is configured to input the nose print data set into the basic nose print deep learning network for iterative training, obtaining a trained nose print classification model.
  • The trained nose print classification model identifies the nose print features of nose print images; the extracted features can then be used to classify nose prints.
  • A large amount of iterative training is performed on the basic nose print deep learning network with the prepared training and test sets, yielding a nose print classification model whose extracted nose print features can be used for nose print classification.
  • the model interception module 144 is configured to intercept the layers from the input layer to the feature output in the nose print classification model as the nose print feature extraction model.
  • Intercepting these layers can be understood as keeping the input layer for preprocessing image data, the convolution layers for nose print feature extraction, the sampling layers for further feature extraction, and the pooling layers for feature compression; other layers can also be intercepted according to actual application requirements, with no specific limitation here.
  • Concretely, the network can be cut after the fully connected layer of the nose print classification model, and the layers from the input layer to the feature output are retained as the nose print feature extraction model.
  • In summary, this embodiment uses Resnet50 as the backbone, introduces an attention model to strengthen the network, uses a loss function to converge the whole network, and finally obtains the classification result through the softmax layer, forming the basic nose print network.
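The interception idea (keep everything from the input layer up to the feature output, drop the classification head) can be illustrated framework-agnostically. The toy layer functions below are stand-ins for the patent's actual Resnet50 layers, and the pipeline representation is an assumption made purely for illustration.

```python
import math

# a toy "classification model" as an ordered list of named layer functions
def feature_layers(x):           # stand-in for the input/conv/pool layers
    return [v * 2 for v in x]

def classifier_head(features):   # stand-in for the fully connected + softmax head
    exps = [math.exp(v) for v in features]
    total = sum(exps)
    return [v / total for v in exps]   # softmax over the feature vector

classification_model = [("features", feature_layers), ("head", classifier_head)]

# interception: keep only the layers up to the feature output
feature_extractor = [fn for name, fn in classification_model if name != "head"]

def extract(x):
    for fn in feature_extractor:
        x = fn(x)
    return x                      # the nose print feature vector

vec = extract([0.5, 1.0])
```

In a deep learning framework the same cut is typically made by replacing the final fully connected layer with an identity mapping, so the trained convolutional weights are reused unchanged.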
  • the device further includes:
  • the nose print image collection module 151 is configured to collect A groups of nose print images of different cats, where A is a positive integer greater than 0.
  • For example, 3000 groups of nose print images of different cats may be collected.
  • the identification setting module 152 is configured to set an ID identifier for each cat.
  • The ID identifier uniquely identifies a cat; each cat corresponds one-to-one with its ID identifier.
  • A fixed, distinguishable ID number, such as A111, is allocated to each cat.
  • the image preprocessing module 153 is configured to perform, for each ID identifier, a preprocessing operation on each nose print image to obtain processed nose print training images and nose print test images.
  • Preprocessing each nose print image under each ID identifier yields training and test images whose data format is adapted to the basic nose print deep learning network.
  • The preprocessing operation may include rotation, size change, brightness change and other image preprocessing means; other image processing means may also be applied according to actual application requirements, with no specific limitation here.
  • Concretely, image rotation, size change and brightness change are performed on each nose print image to obtain the nose print training images and nose print test images in a data format compatible with the basic nose print deep learning network.
  • the data set saving module 154 is configured to save the nose print training images, the nose print test images and the ID identifier correspondingly in the database to obtain the nose print data set.
  • The training and test images obtained by the image preprocessing module 153 are stored in the database in one-to-one correspondence with their IDs to obtain the nose print data set.
  • In summary, this embodiment collects A groups of nose print images of different cats and assigns each cat a fixed, distinguishable ID number. Further, to enhance the robustness of the model during training, image preprocessing methods such as rotation, size change and brightness change are used to expand the number of nose print images under each ID, and the resulting nose print training set and nose print test set form the nose print data set.
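The augmentation step above (expanding each cat's images by rotation and brightness changes) can be sketched on a tiny grayscale image represented as nested lists. The pixel values and the fixed +/-40 brightness shift are illustrative assumptions; a real pipeline would operate on full-size images with an image library.

```python
def rotate90(img):
    # rotate a 2-D grayscale image (list of rows) 90 degrees clockwise
    return [list(col) for col in zip(*img[::-1])]

def brighten(img, delta):
    # shift pixel intensities, clamped to the 0-255 range
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def augment(img):
    # expand one nose print image into several variants stored under the same ID
    return [img, rotate90(img), brighten(img, 40), brighten(img, -40)]

img = [[10, 250],
       [30, 40]]
variants = augment(img)
```

Each variant keeps the original image's ID, so the model sees several appearances of the same nose print during training, which is what improves robustness.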
  • Compared with the prior art, the cat nose print recognition method and device based on the cat nose print feature extraction model of the present invention have the following advantages:
  • the image data to be recognized is obtained from the cat nose print recognition request; a nose print image recognition mode matching that data is then selected, realizing the application to a specific nose print recognition scene, and the image data is recognized according to that mode; the operation process is simple, efficient and highly accurate;
  • the nose print recognition technology adopted here is low-cost, requires no additional equipment, achieves high recognition accuracy, and avoids disadvantages such as harming the pet's body; its simple operation can save pet service agencies a great deal of identity authentication time and improve work efficiency;
  • cat nose print recognition based on the cat nose print feature extraction model can be applied to a wide range of scenarios and has good prospects.


Abstract

The present invention relates to the technical field of image recognition, and in particular to a cat nose print recognition method and device based on a cat nose print feature extraction model. The method and device receive a cat nose print recognition request carrying at least image data to be recognized; select a nose print image recognition mode matching the image data to be recognized; recognize the image data according to that mode; and output the nose print recognition result. The cat nose print recognition method and device based on the cat nose print feature extraction model of the present invention can improve the accuracy of cat nose print recognition.

Description

Cat nose print recognition method and device based on a cat nose print feature extraction model. Technical field
The present invention relates to the technical field of image recognition, and in particular to a cat nose print recognition method and device based on a cat nose print feature extraction model.
Background art
With the deepening development of modern computer science and technology, biometric features based on deep learning and image processing are widely used in identity authentication, and biometric identity authentication has become one of the hottest frontier fields at home and abroad. Identity authentication based on human biometrics is the most widely applied: fingerprint recognition, iris recognition, face recognition and voiceprint recognition are already part of daily life. Extending to the animal kingdom, animal biometric features can also be used for identity authentication; in particular, animal nose prints have been shown to be usable for this purpose.
Historically, nose print recognition has developed slowly and seen few applications. As early as 1982, Japanese police used paper rubbings of nose prints to identify stolen cattle and arrest the thieves. Since then, a series of studies on cattle nose prints has been carried out at home and abroad, including identifying cattle breeds through nose prints, judging the quality of dairy milk, and grading beef. As nose print recognition matured in animal husbandry, attention turned to small and medium-sized carnivores, including foxes, raccoons and pet cats; as one of humanity's best friends, the pet cat receives the most attention.
With the continuous increase in the number of pets and people's growing reliance on them, new identity authentication scenarios keep emerging; for safety and hygiene, major cities in China are carrying out identity authentication for pet cats. Many professional pet institutions have appeared, offering two methods of pet identity authentication: subcutaneously injected chips and pet DNA testing. The shortcomings of both are obvious: first, an injected chip causes both physical and psychological harm to the cat; second, criminals can rather easily swap the chip to fake an identity; third, DNA testing is expensive and slow. These shortcomings make cat identity authentication difficult in practice and greatly reduce its accuracy, increasing the difficulty of cat management.
Summary of the invention
Embodiments of the present invention provide a cat nose print recognition method and device based on a cat nose print feature extraction model, so as to at least solve the technical problem that traditional recognition techniques have low recognition accuracy.
According to an embodiment of the present invention, a cat nose print recognition method based on a cat nose print feature extraction model is provided, comprising the following steps:
receiving a cat nose print recognition request, the request carrying at least image data to be recognized;
selecting a nose print image recognition mode matching the image data to be recognized;
recognizing the image data to be recognized according to the nose print image recognition mode, and outputting a nose print recognition result.
Further, the step of recognizing the image data according to the nose print image recognition mode and outputting the nose print recognition result comprises:
inputting a first to-be-recognized nose print image and a second to-be-recognized nose print image into the trained nose print feature extraction model for feature extraction, obtaining a first nose print feature vector and a second nose print feature vector;
calculating a first similarity between the first and second nose print feature vectors;
judging whether the first similarity meets a preset first comparison condition;
if the first similarity meets the preset first comparison condition, outputting the information that the cats in the first and second to-be-recognized nose print images are the same;
if the first similarity does not meet the preset first comparison condition, outputting the information that the cats in the first and second to-be-recognized nose print images are not the same.
Further, the step of recognizing the image data according to the nose print image recognition mode and outputting the nose print recognition result also comprises:
obtaining from the database a group of nose print images matching a first ID identifier;
inputting a third to-be-recognized nose print image and the group of nose print images into the trained nose print feature extraction model for feature extraction, obtaining a third nose print feature vector and N nose print feature vectors, where N is a positive integer greater than 0;
calculating a second similarity between the third nose print feature vector and each nose print feature vector;
judging whether the second similarities meet a preset second comparison condition;
if M second similarities meet the preset second comparison condition, outputting the information that the cats in the nose print images corresponding to those M second similarities and the cat in the third to-be-recognized nose print image are the same, where M is a positive integer less than or equal to N and greater than or equal to 0.
Further, the step of recognizing the image data according to the nose print image recognition mode and outputting the nose print recognition result also comprises:
obtaining from the database a fifth nose print feature vector matching a second ID identifier;
inputting a fourth to-be-recognized nose print image into the trained nose print feature extraction model for feature extraction, obtaining a fourth nose print feature vector;
calculating a third similarity between the fourth and fifth nose print feature vectors;
judging whether the third similarity meets a preset third comparison condition;
if the third similarity meets the preset comparison condition, outputting the information that the cat in the fourth to-be-recognized nose print image and the cat corresponding to the fifth nose print feature vector are the same;
if the third similarity does not meet the preset third comparison condition, outputting the information that the cat in the fourth to-be-recognized nose print image and the cat corresponding to the fifth nose print feature vector are not the same.
Further, the step of recognizing the image data according to the nose print image recognition mode and outputting the nose print recognition result also comprises:
obtaining from the database the cat nose print feature vector corresponding to each ID in the ID identification data;
inputting a sixth to-be-recognized nose print image into the trained nose print feature extraction model for feature extraction, obtaining a sixth nose print feature vector;
calculating the similarity between the sixth nose print feature vector and each cat nose print feature vector, obtaining K feature similarities, where K is a positive integer greater than 0;
sorting the K feature similarities and obtaining the maximum feature similarity;
judging whether the maximum feature similarity meets a preset fourth comparison condition;
if the maximum feature similarity does not meet the preset fourth comparison condition, outputting the information that the database contains no cat identical to the cat in the sixth to-be-recognized nose print image;
if the maximum feature similarity meets the preset fourth comparison condition, judging whether there are similarities among the K feature similarities that meet the preset fourth comparison condition;
if J feature similarities meet the fourth comparison condition, outputting the information that the cats corresponding to the IDs of those J feature similarities and the cat in the sixth to-be-recognized nose print image are the same, where J is a positive integer less than or equal to K and greater than or equal to 0.
Further, the method also comprises:
constructing a basic nose print deep learning network;
labeling the cat images in the database to obtain a segmentation data set for training;
inputting the nose print data set into the basic nose print deep learning network for iterative training, obtaining a trained nose print classification model;
intercepting the layers from the input layer to the feature output in the nose print classification model as the nose print feature extraction model.
Further, the method also comprises:
collecting A groups of nose print images of different cats, where A is a positive integer greater than 0;
setting an ID identifier for each cat;
for each ID identifier, preprocessing each nose print image to obtain processed nose print training images and nose print test images;
saving the nose print training images, nose print test images and ID identifiers correspondingly in the database to obtain the nose print data set.
According to another embodiment of the present invention, a cat nose print recognition device based on a cat nose print feature extraction model is provided, comprising:
a request receiving module, configured to receive a cat nose print recognition request carrying at least image data to be recognized;
a recognition mode selection module, configured to select a nose print image recognition mode matching the image data to be recognized;
a recognition result output module, configured to recognize the image data according to the nose print image recognition mode and output the nose print recognition result.
Further, the device also comprises:
a network building module, configured to construct a basic nose print deep learning network;
a data set acquisition module, configured to label the cat images in the database to obtain a segmentation data set for training;
a network training module, configured to input the nose print data set into the basic nose print deep learning network for iterative training, obtaining a trained nose print classification model;
a model interception module, configured to intercept the layers from the input layer to the feature output in the nose print classification model as the nose print feature extraction model.
Further, the device also comprises:
a nose print image collection module, configured to collect A groups of nose print images of different cats, where A is a positive integer greater than 0;
an identification setting module, configured to set an ID identifier for each cat;
an image preprocessing module, configured to preprocess each nose print image for each ID identifier, obtaining processed nose print training images and nose print test images;
a data set saving module, configured to save the nose print training images, nose print test images and ID identifiers correspondingly in the database to obtain the nose print data set.
The cat nose print recognition method and device based on the cat nose print feature extraction model in the embodiments of the present invention obtain the image data to be recognized from the cat nose print recognition request; select a nose print image recognition mode matching that data, realizing the application to a specific nose print recognition scene; and then recognize the image data according to that mode and output the nose print recognition result, ensuring the accuracy of nose print recognition in the specific scene. The method and device can thereby improve the accuracy of cat nose print recognition.
Brief description of the drawings
Fig. 1 is a scene schematic diagram of the cat nose print recognition method based on the cat nose print feature extraction model of the present invention;
Fig. 2 is a flowchart of the cat nose print recognition method of the present invention;
Fig. 3 is a flowchart of one way of recognizing the image data to be recognized according to the nose print image recognition mode in the method;
Fig. 4 is a flowchart of a further such way;
Fig. 5 is a flowchart of another such way;
Fig. 6 is a flowchart of yet another such way;
Fig. 7 is a flowchart of obtaining the nose print feature extraction model in the method;
Fig. 8 is a flowchart of obtaining the nose print data set in the method;
Fig. 9 is a module diagram of the cat nose print recognition device based on the cat nose print feature extraction model of the present invention;
Fig. 10 is a module diagram of one way of recognizing the image data according to the nose print image recognition mode in the device;
Fig. 11 is a module diagram of a further such way;
Fig. 12 is a module diagram of another such way;
Fig. 13 is a module diagram of yet another such way;
Fig. 14 is a module diagram of obtaining the nose print feature extraction model in the device;
Fig. 15 is a module diagram of obtaining the nose print data set.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present application clearer, the application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
Embodiment 1
According to an embodiment of the present invention, a cat nose print recognition method based on a cat nose print feature extraction model is provided; referring to Figs. 1 and 2, it comprises the following steps:
S1: receiving a cat nose print recognition request carrying at least image data to be recognized.
In this embodiment, during the development of the skin of a cat's nose, although the epidermis, dermis and matrix layer all grow together, the soft subcutaneous tissue grows faster than the hard epidermis and exerts continuous upward pressure on it, forcing the slower-growing epidermis to shrink and collapse toward the inner tissue and gradually bend and wrinkle to relieve the pressure. Pushed upward on one side and forced to retreat on the other, the epidermis grows crooked and uneven, forming a pattern. This bending and wrinkling fluctuates with the pressure from the inner tissue, forming uneven ridges or folds until development stops, finally fixing into a nose print that never changes for the rest of the cat's life; in this respect, the formation of a cat's nose print is exactly like that of a human fingerprint. Moreover, the nose is a cat's most remarkable organ: cats pay particular attention to its safety because their survival depends on it, reacting defensively the moment they smell danger, and the passage of time does not affect the shape of the pattern.
Therefore, this embodiment performs nose print recognition on cat nose print images based on the cat nose print feature extraction model to ensure the accuracy of cat nose print recognition, and thus the accuracy of cat identity authentication.
The cat nose print recognition request is an operation request input by the user for the recognition operation required by the actual nose print recognition scene; it carries at least the image data to be recognized, i.e. the image data provided in that scene, so that the cat identity in the scene can subsequently be determined from it.
Specifically, receiving the request input by the user from the client and obtaining the image data it carries enable subsequent analysis and recognition of that data, so as to accurately recognize the cat nose print in it and obtain the cat identity of the specific scene.
S2: selecting a nose print image recognition mode matching the image data to be recognized.
In this embodiment, the nose print image recognition modes are several recognition methods applicable to cat nose print images, derived from simulation experiments on actual nose print recognition scenes.
Further, since cat nose print recognition based on deep learning image processing is both low-cost and highly accurate in the pet market, the nose print image recognition mode here can specifically be: constructing and training a deep learning model suitable for cat nose print feature extraction, and performing nose print recognition with a feature matching algorithm, to ensure the accuracy of cat nose print recognition.
Specifically, this embodiment parses the image data to be recognized obtained in step S1 and indexes the matching nose print image recognition mode according to the parsing result, so that the data can subsequently be recognized and analyzed in the indexed mode, accurately recognizing the cat nose print and obtaining the cat identity of the specific scene.
S3: recognizing the image data to be recognized according to the nose print image recognition mode, and outputting the nose print recognition result.
In this embodiment, the nose print recognition result indicates whether the nose prints in the image data to be recognized are consistent, i.e. whether the cats in the image data are the same.
Specifically, the recognition can be performed with the constructed and trained deep learning model suitable for cat nose print feature extraction together with a feature matching algorithm, outputting whether the cat nose prints are consistent; this ensures the accuracy of the obtained nose print features and thus the accuracy of cat nose print recognition.
The cat nose print recognition method of this embodiment obtains the image data to be recognized from the request, selects the matching recognition mode to realize the specific scene, recognizes the data accordingly and outputs the result, thereby improving the accuracy of cat nose print recognition; in addition, the method has low computational complexity, is simple and practical, and is low-cost.
It should be noted that the nose print recognition technology adopted here is low-cost, needs no extra equipment, has high accuracy, avoids harming the pet's body, and is simple to operate, helping pet service agencies save much identity authentication time and improve work efficiency; in addition, the nose print recognition scheme adopted here can help cats better take part in competitions, insurance, medical care and other activities.
As a preferred technical solution, the image data to be recognized consists of a first to-be-recognized nose print image and a second to-be-recognized nose print image; referring to Fig. 3, step S3 comprises:
S31: inputting the first and second to-be-recognized nose print images into the trained nose print feature extraction model for feature extraction, obtaining the first and second nose print feature vectors.
In this embodiment, the trained nose print feature extraction model is a model with a high nose print feature recognition rate, obtained by continuously iteratively training the pre-built basic nose print deep learning network on a large data set of cat nose print images.
The concrete recognition scene here is to judge whether the cats in two nose print images are the same; the two images are the first and second to-be-recognized nose print images.
The first nose print feature vector is the feature sequence, usually presented as a vector, output by the trained model after extracting nose print features from the first to-be-recognized image; likewise, the second nose print feature vector is the feature sequence output for the second to-be-recognized image.
Specifically, the two images are input into the trained model respectively, and the output feature sequences, i.e. the first and second nose print feature vectors, allow the nose print features in the image data to be captured accurately, helping ensure the accuracy of cat nose print recognition.
S32: calculating the first similarity between the first and second nose print feature vectors.
In this embodiment, the first similarity quantifies the degree of similarity between the two vectors; the larger its value, the more similar the cat nose prints in the corresponding first and second to-be-recognized images, i.e. the more likely the cats are the same.
Further, the first similarity can be computed as the cosine of the angle between the two feature vectors (the cosine similarity algorithm) or as their dot product; other calculation methods may also be used, with no specific limitation here.
Specifically, this embodiment computes the dot product of the first and second nose print feature vectors and takes its value as the first similarity.
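The two similarity measures mentioned here differ only in normalization: the cosine similarity is the dot product divided by the product of the vector lengths. A minimal sketch follows; the example vectors are illustrative, not data from the patent.

```python
import math

def dot(a, b):
    # dot product of two feature vectors
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # dot product normalised by the vector lengths; ranges over [-1, 1]
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a = [3.0, 4.0]
b = [4.0, 3.0]
d = dot(a, b)                 # raw dot-product similarity
c = cosine_similarity(a, b)   # normalised similarity
```

Using the raw dot product as the similarity, as this embodiment does, implicitly assumes the extracted feature vectors have comparable magnitudes; cosine similarity removes that dependence at the cost of an extra normalization.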
S33: judging whether the first similarity meets the preset first comparison condition.
In this embodiment, the first comparison condition measures whether the first similarity reaches the standard for deciding that the first and second nose print feature vectors are consistent; it can be set according to actual application requirements, with no specific limitation here.
Specifically, assuming the first comparison condition is that the first similarity is greater than a preset first threshold, judging whether it is met means comparing the first similarity obtained in step S32 with that preset first threshold.
S331: if the first similarity meets the preset first comparison condition, outputting the information that the cats in the first and second to-be-recognized nose print images are the same.
Specifically, when the comparison in step S33 shows the first similarity to be greater than the preset first threshold, the first and second nose print feature vectors can be taken as nose print feature vectors of the same cat, i.e. the two images show the nose print of the same cat; the information that the cats in the first and second to-be-recognized images are the same is then output to the client for the user to use or manage.
S332: if the first similarity does not meet the preset first comparison condition, outputting the information that the cats in the first and second to-be-recognized nose print images are not the same.
Specifically, when the first similarity is less than or equal to the preset first threshold, the two vectors are not nose print feature vectors of the same cat, i.e. the two images do not show the nose print of the same cat; the information that the cats in the two images are not the same is then output to the client for the user to use or manage.
In this embodiment, according to steps S31 to S332, the two nose print images are input into the trained nose print feature extraction model to obtain nose print features, the dot product of the two feature vectors is computed as the similarity, and the similarity is compared with the set first threshold to decide whether it is the same cat. This ensures the accuracy of the extracted nose print features and of the recognition, while remaining simple to operate and computationally cheap, which improves the efficiency of cat nose print recognition to a certain extent.
As a preferred technical solution, the image data to be recognized consists of a third to-be-recognized nose print image and a first ID identifier; referring to Fig. 4, step S3 also comprises:
S41: obtaining from the database a group of nose print images matching the first ID identifier.
The concrete recognition scene here is to judge whether the single to-be-recognized nose print image and the group of nose print images stored under one ID show the same cat; the single image is the third to-be-recognized image, and the group is the one matching the first ID identifier.
Specifically, since the database stores a fixed, distinguishable ID number for each collected cat, this embodiment can index the database with the acquired first ID identifier to retrieve the matching group of nose print images.
S42: inputting the third to-be-recognized image and the group of images into the trained nose print feature extraction model for feature extraction, obtaining the third nose print feature vector and N nose print feature vectors, where N is a positive integer greater than 0.
The third nose print feature vector is the feature sequence, usually presented as a vector, output for the third to-be-recognized image; likewise, the N nose print feature vectors are the N feature sequences output for the group of images.
Specifically, the third image and the group of images are input into the trained model respectively, and the output feature sequences allow the nose print features in the image data to be captured accurately, helping ensure the accuracy of cat nose print recognition.
S43: calculating the second similarity between the third nose print feature vector and each of the N nose print feature vectors.
In this embodiment, the second similarity quantifies the degree of similarity between the third nose print feature vector and the N nose print feature vectors.
Specifically, the second similarities are computed pairwise in turn, in the same way as the first similarity in step S32, which is not repeated here.
S44: judging whether the second similarities meet the preset second comparison condition.
The second comparison condition measures whether a second similarity reaches the standard for deciding which of the N nose print feature vectors are consistent with the third; it can be set according to actual application requirements, with no specific limitation here.
Specifically, assuming the second comparison condition is that a second similarity is greater than a preset second threshold, the judgment is made by comparing each of the N second similarities obtained in step S43 with that preset second threshold.
S45: if M second similarities meet the preset second comparison condition, outputting the information that the cats in the nose print images corresponding to those M second similarities and the cat in the third to-be-recognized nose print image are the same, where M is a positive integer less than or equal to N and greater than or equal to 0.
Specifically, when the comparison in step S44 finds M second similarities greater than the preset second threshold, the corresponding M nose print feature vectors and the third nose print feature vector can be taken as nose print feature vectors of the same cat, i.e. the images corresponding to those M vectors and the third to-be-recognized image show the cat nose print of the same cat; the information that these cats are the same is then output to the client for the user to use or manage.
It should be noted that if M second similarities meet the preset second comparison condition, the remaining N-M do not; i.e. the images corresponding to those N-M feature vectors and the third to-be-recognized image do not show the same cat.
In this embodiment, according to steps S41 to S45, the nose print features of the single to-be-recognized image and of the group of images under a given ID are extracted respectively; the feature vector of the single image is dot-multiplied with each of the group's feature vectors in turn to compute similarities; each computed similarity is compared with the preset second threshold and the count above the threshold is recorded; and whether it is the same cat is finally decided from that count.
作为优选的技术方案中,待识别图像数据为第四待识别鼻纹图像以及第二ID标识,参见图5,步骤S3按照鼻纹图像识别模式对待识别图像数据进行识别,输出鼻纹识别结果的步骤还包括:
S51:在数据库中获取与第二ID标识相匹配的第五鼻纹特征向量。
在本实施例中,待识别图像数据为第四待识别鼻纹图像以及第二ID标识,具体可以理解为具体的鼻纹识别场景为判断以待识别的单张鼻纹图像与一个ID下的一个鼻纹特征向量对应的猫是否为同一只,待识别的单张鼻纹图像即第四待识别鼻纹图像,以及与第二ID标识相匹配的第五鼻纹特征向量。
具体地,由于在数据库中保存有采集到每个猫数据相对应的一个固定的可区分的ID号,故本实施例可以根据获取到的第二ID标识在数据库中进行索引,以获取与该第二ID标识相匹配的第五鼻纹特征向量。
S52:将第四待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第四鼻纹特征向量。
在本实施例中,第四鼻纹特征向量是采用训练好的鼻纹特征提取模型对第四待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现。
具体地,本实施例通过将第四待识别鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并输出与第四待识别鼻纹图像对应的特征序列,即第四鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
S53:计算第四鼻纹特征向量与第五鼻纹特征向量之间的第三相似度。
在本实施例中,第三相似度是用于量化第四鼻纹特征向量以及第五鼻纹特征向量之间的相似程度。
具体地,本实施例计算第四鼻纹特征向量与第五鼻纹特征向量之间的第三相似度,具体的计算方式与步骤S32中计算第一相似度的方式相同,此处不作赘述。
S54:判断第三相似度是否符合预设的第三比较条件。
在本实施例中,第三比较条件是用于衡量第三相似度是否达到能够判断第四鼻纹特征向量以及第五鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第三比较条件为第三相似度是否大于预设的第三阈值,则判断第三相似度是否符合预设的第三比较条件,即将在步骤S53中获取到的第三相似度分别与预设的第三阈值进行比较,从而判定第三相似度是否符合预设的第三比较条件。
S541:若第三相似度符合预设的第三比较条件,则输出第四待识别鼻纹图像中的猫以及第五鼻纹特征向量对应的猫为同一只的信息。
具体地,根据步骤S54中第三相似度与预设的第三阈值进行比较的比较结果,当结果为第三相似度大于预设的第三阈值,即第三相似度符合预设的第三比较条件,可以理解为第四鼻纹特征向量以及第五鼻纹特征向量是同一猫的鼻纹特征向量,即该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫鼻纹是同一猫的猫鼻纹,从而可以确定该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫为同一只,则可以将包含有第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫为同一只的信息输出至客户端以供用户进行使用或管理。
S542:若第三相似度不符合预设的第三比较条件,则输出第四待识别鼻纹图像中的猫以及第五鼻纹特征向量对应的猫不是同一只的信息。
具体地,根据步骤S54中第三相似度与预设的第三阈值进行比较的比较结果,当结果为第三相似度小于或等于预设的第三阈值,即第三相似度不符合预设的第三比较条件,可以理解为第四鼻纹特征向量以及第五鼻纹特征向量不是 同一猫的鼻纹特征向量,即该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫鼻纹不是同一猫的猫鼻纹,从而可以确定该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫不是同一只,则可以将包含有第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫不是同一只的信息输出至客户端以供用户进行使用或管理。
在本实施例中,根据步骤S51至步骤S542,本实施例通过将一第四待识别鼻纹图像经过鼻纹特征提取模型提取特征向量,然后将该第四鼻纹特征向量和数据库中一ID号对应的第五鼻纹特征向量计算点积得到相似度,再通过相似度和阈值判定第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫是否为同一只猫。
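步骤S51至S542(按ID索引库中特征向量再一对一比对)的流程可示意如下;feature_db、ID "A111" 与阈值均为假设的示例数据。

```python
import numpy as np

def verify_by_id(query_feat: np.ndarray, cat_id: str, feature_db: dict,
                 threshold: float = 0.8) -> bool:
    """按ID标识在数据库中索引出已保存的鼻纹特征向量(第五鼻纹特征向量),
    与待识别图像提取出的特征向量计算点积相似度并与阈值比较。"""
    stored = feature_db[cat_id]
    q = query_feat / np.linalg.norm(query_feat)
    s = stored / np.linalg.norm(stored)
    return float(np.dot(q, s)) > threshold
```

相比每次对一组图像重新提取特征,预先把特征向量入库能省去重复的模型前向计算。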
作为优选的技术方案中,待识别图像数据为第六待识别鼻纹图像以及ID标识数据,参见图6,步骤S3按照鼻纹图像识别模式对待识别图像数据进行识别,输出鼻纹识别结果的步骤还包括:
S61:将第六待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第六鼻纹特征向量。
在本实施例中,待识别图像数据为第六待识别鼻纹图像以及ID标识数据,具体可以理解为具体的鼻纹识别场景为判断以待识别的单张鼻纹图像与数据库中的全部ID对应的鼻纹特征向量对应的猫是否为同一只,即判断第六待识别鼻纹图像与数据库中的全部鼻纹特征向量中的哪些鼻纹特征向量对应的猫为同一只,待识别的单张鼻纹图像即第六待识别鼻纹图像,以及与ID标识数据相匹配的数据库中的全部鼻纹特征向量。
其中,第六鼻纹特征向量是采用训练好的鼻纹特征提取模型对第六待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现。
具体地,本实施例通过将第六待识别鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并输出与第六待识别鼻纹图像对应的特征序列,即第六鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
S62:分别计算第六鼻纹特征向量与每个猫鼻纹特征向量之间的相似度,得到K个特征相似度,其中K为大于0的正整数。
在本实施例中,特征相似度是用于量化第六鼻纹特征向量以及K个猫鼻纹特征向量之间的相似程度。
具体地,本实施例可以通过依次两两计算第六鼻纹特征向量与每个猫鼻纹特征向量之间的特征相似度,得到K个特征相似度,具体计算方式与步骤S32中计算第一相似度的方式相同,此处不作赘述。
S63:将K个特征相似度进行排序,获取最大特征相似度。
具体地,本实施例通过对特征相似度按照从高到低的顺序进行排序,以快速获取排序第一的特征相似度作为最大特征相似度。
S64:判断最大特征相似度是否符合预设的第四比较条件。
在本实施例中,第四比较条件是用于衡量最大特征相似度以及其他特征相似度是否达到能够判断第六鼻纹特征向量以及K个猫鼻纹特征向量的哪些猫鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第四比较条件为特征相似度是否大于预设的第四阈值,则判断最大特征相似度是否符合预设的第四比较条件,即将在步骤S63中获取到的最大特征相似度分别与预设的第四阈值进行比较,从而判定最大特征相似度是否符合预设的第四比较条件。
S641:若最大特征相似度不符合预设的第四比较条件,则输出数据库中没有与第六待识别鼻纹图像中的猫相同的信息。
具体地,根据步骤S64中最大特征相似度与预设的第四阈值进行比较的比较结果,当结果为最大特征相似度小于或等于预设的第四阈值,即最大特征相似度不符合预设的第四比较条件,可以理解为第六鼻纹特征向量以及K个猫鼻纹特征向量中没有相同的猫的鼻纹特征向量,即该第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫鼻纹中没有同一猫的猫鼻纹,从而可以确定该第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫中没有相同猫,则可以将包含有第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫中没有相同猫的信息输出至客户端以供用户进行使用或管理。
S642:若最大特征相似度符合预设的第四比较条件,则判断K个特征相似度中是否有符合预设的第四比较条件的相似度。
具体地,根据步骤S64中最大特征相似度与预设的第四阈值进行比较的比较结果,当结果为最大特征相似度大于预设的第四阈值,即最大特征相似度符合预设的第四比较条件,为了进一步确定第六鼻纹特征向量以及K个猫鼻纹特征向量的哪些猫鼻纹特征向量可能是相同的猫鼻纹特征,本实施例通过判断K个特征相似度中是否有符合预设的第四比较条件的相似度,具体可以是将K个特征相似度分别与预设的第四阈值进行比较并获取比较结果。
S643:若有J个特征相似度符合第四比较条件,则输出J个特征相似度的ID对应的猫与第六待识别鼻纹图像中的猫为同一只的信息,其中,J为小于等于K且大于等于0的正整数。
具体地,根据步骤S642中K个特征相似度分别与预设的第四阈值进行比较的比较结果,当结果为有J个特征相似度大于预设的第四阈值,即有J个特征相似度符合预设的第四比较条件,可以理解为这J个猫鼻纹特征向量以及第六鼻纹特征向量是同一猫的鼻纹特征向量,即该J个猫鼻纹特征向量对应的鼻纹图像以及第六待识别鼻纹图像中的猫鼻纹是同一猫的猫鼻纹,从而可以确定该J个猫鼻纹特征向量对应的鼻纹图像以及第六待识别鼻纹图像中的猫为同一只,则可以将包含有J个猫鼻纹特征向量对应的猫以及第六待识别鼻纹图像中的猫为同一只的信息输出至客户端以供用户进行使用或管理。
需要说明的是,若有J个特征相似度符合预设的第四比较条件,则可以理解为K-J个特征相似度不符合预设的第四比较条件,即有K-J个特征相似度对应的猫鼻纹特征向量对应的猫以及第六待识别鼻纹图像中的猫不是同一只。
在本实施例中,根据步骤S61至步骤S643,本实施例通过将第六待识别鼻纹图像经过鼻纹特征提取模型提取第六鼻纹特征向量,然后将该第六鼻纹特征向量依次和数据库中的所有ID对应的猫鼻纹特征向量进行点积计算相似度,并对K个特征相似度从高到低进行排序,如果最大特征相似度没有超过设定的第四阈值,则判定数据库中没有该第六待识别鼻纹图像对应的猫;如果超过设定的第四阈值,则输出该最大特征相似度对应的ID;如果超过设定的第四阈值的ID数大于三个,则输出相似度最高的三个ID即可。
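步骤S61至S643的“单张图对全库检索”流程可示意如下;库中K个ID对应的特征向量、阈值与“最多返回三个ID”的上限均按正文思路设定,具体数值为示例假设。

```python
import numpy as np

def identify_in_database(query_feat: np.ndarray, feature_db: dict,
                         threshold: float = 0.8, top: int = 3):
    """将第六鼻纹特征向量与全库K个特征向量逐一计算点积相似度并降序排序:
    最大相似度未超过第四阈值时返回空列表(库中无此猫),
    否则返回超过阈值的前 top 个 (ID, 相似度) 对。"""
    q = query_feat / np.linalg.norm(query_feat)
    sims = [(cid, float(np.dot(q, f / np.linalg.norm(f))))
            for cid, f in feature_db.items()]
    sims.sort(key=lambda item: item[1], reverse=True)
    if not sims or sims[0][1] <= threshold:
        return []
    return [(cid, s) for cid, s in sims if s > threshold][:top]
```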
作为优选的技术方案中,参见图7,在步骤S2之前,该方法还包括:
S71:构建基础鼻纹深度学习网络。
在本实施例中,为了能够获取到鼻纹识别准确率较高的模型,本实施例构建的基础鼻纹深度学习网络采用Resnet50作为骨架网络,也可以根据实际应用需求采用其他网络,此处不作具体限制。
具体地,本实施例通过采用Resnet50作为骨架网络,接着引入注意力模型强化网络,然后采用损失函数来收敛整个网络,最后通过softmax层输出分类结果,以构建该基础鼻纹深度学习网络,其中,该基础鼻纹深度学习网络可以理解为包括预处理图像数据的输入层、用于鼻纹特征提取的卷积层、用于进一步鼻纹特征提取的采样层、用于鼻纹特征压缩的池化层、用于对鼻纹进行分类的全连接层,以及用于输出鼻纹分类结果的softmax层等等,能够在一定程度上保证鼻纹识别的准确率。
S72:对数据库中的猫图像进行标注,得到用于训练的鼻纹数据集。
在本实施例中,鼻纹数据集是预先处理和制作好的鼻纹训练集以及鼻纹测试集。
具体地,本实施例可以根据实际应用需要的数据类型在数据库中进行索引,以快速准确的获取到预先处理和制作好的鼻纹训练集以及鼻纹测试集,以供后续进行训练。
S73:将鼻纹数据集输入基础鼻纹深度学习网络中进行迭代训练操作,得到训练好的鼻纹分类模型。
在本实施例中,训练好的鼻纹分类模型是用于识别鼻纹图像的鼻纹特征,并能够对提取出的鼻纹特征进行鼻纹分类的模型。
具体地,本实施例通过利用预先处理和制作好的鼻纹训练集以及鼻纹测试集对基础鼻纹深度学习网络进行大量的迭代训练,从而获取到能够识别鼻纹图像的鼻纹特征,并能够对提取出的鼻纹特征进行鼻纹分类的鼻纹分类模型。
S74:截取鼻纹分类模型中从输入层到输出特征的层,作为鼻纹特征提取模型。
在本实施例中,截取鼻纹分类模型中从输入层到输出特征的层可以理解为截取鼻纹分类模型中用于预处理图像数据的输入层、用于鼻纹特征提取的卷积层、用于进一步鼻纹特征提取的采样层以及用于鼻纹特征压缩的池化层,具体还可以根据实际应用需求进行截取,此处不作具体限制。
具体地,截取鼻纹分类模型中从输入层到输出特征的层具体可以截去鼻纹分类模型中的全连接层之后的层,保留从输入层到输出特征的层作为鼻纹特征提取模型。
在本实施例中,根据步骤S71至步骤S74,本实施例通过利用Resnet50作为骨架网络,接着引入注意力模型强化网络,然后采用损失函数来收敛整个网络,最后通过softmax层得到分类结果,从而构建出基础鼻纹深度学习网络。利用制作好的鼻纹训练集和测试集对基础鼻纹深度学习网络进行大量的迭代训练得到鼻纹分类模型,然后,截取模型从输入层到输出特征的层,得到最终的鼻纹特征提取模型。
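“截去全连接层之后的层、保留从输入层到输出特征的层”这一步,可用如下PyTorch示意代码说明;为保持示例自洽,此处用一个小型卷积网代替正文所述Resnet50加注意力模块的骨架,层结构与维度均为示例假设。

```python
import torch
import torch.nn as nn

# 分类模型:卷积层提取特征 → 池化压缩 → 全连接分类 → softmax输出类别
classifier = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # 鼻纹特征提取的卷积层
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # 鼻纹特征压缩的池化层
    nn.Flatten(),                               # 至此输出鼻纹特征向量
    nn.Linear(8, 10),                           # 用于鼻纹分类的全连接层
    nn.Softmax(dim=1),                          # 输出鼻纹分类结果
)

# 截取:去掉全连接层及其之后的层,得到鼻纹特征提取模型
feature_extractor = nn.Sequential(*list(classifier.children())[:-2])

x = torch.randn(1, 3, 64, 64)                   # 一张假设的3通道鼻纹图像
feat = feature_extractor(x)                     # 鼻纹特征向量,形状 (1, 8)
```

训练时使用完整的 classifier 做分类;部署识别时只保留 feature_extractor,输出的向量即可直接用于前述各种相似度比对。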
作为优选的技术方案中,参见图8,在步骤S72之前,该方法还包括:
S81:采集A组不同猫的鼻纹图像,其中A为大于0的正整数。
具体地,由于训练深度学习模型需要大量的数据,因此本实施例采集了A组不同猫的鼻纹图像,如3000组不同猫的鼻纹图像。
S82:对每个猫设置一个ID标识。
在本实施例中,ID标识用于唯一标识一个猫,其中,每个猫与ID标识为一一对应的。
具体地,本实施例对每个猫分配一个固定的可区分的ID号,如A111。
S83:针对每个ID标识,将每个鼻纹图像进行预处理操作,得到处理后的鼻纹训练图像以及鼻纹测试图像。
在本实施例中,为了保证基础鼻纹深度学习网络对鼻纹数据集特征提取以及识别的准确率,需要保证输入基础鼻纹深度学习网络的鼻纹数据集的数据格式是与基础鼻纹深度学习网络相适配的,故本实施例针对每个ID标识,通过将每个鼻纹图像进行预处理操作,以得到处理后的数据格式与基础鼻纹深度学习网络相适配的鼻纹训练图像以及鼻纹测试图像。
其中,预处理操作具体可以是鼻纹图像采用旋转,改变尺寸,明暗变化等图像预处理手段,还可以根据实际应用需求进行其他图像处理手段,此处不作具体限制。
具体地,针对每个ID标识,将每个鼻纹图像分别进行图像旋转,改变尺寸,以及明暗变化等图像预处理手段,以获取处理好的数据格式与基础鼻纹深度学习网络相适配的鼻纹训练图像以及鼻纹测试图像。
S84:将鼻纹训练图像以及鼻纹测试图像与ID标识对应保存至数据库中,得到鼻纹数据集。
具体地,将在步骤S83中获取到的鼻纹训练图像以及鼻纹测试图像与其ID标识一一对应保存至数据库中,以得到鼻纹数据集。
在本实施例中,根据步骤S81至步骤S84,由于训练深度学习模型需要大量的数据,因此本实施例采集了A组不同猫的鼻纹图像,并为每个猫分配一个固定的可区分的ID号,进一步地,为了增强训练时模型的鲁棒性,我们对这A组ID标识下的鼻纹图像采用旋转,改变尺寸,明暗变化等图像预处理手段来扩展图像数量得到鼻纹训练集以及鼻纹测试集,作为该鼻纹数据集。
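正文所述旋转、改变尺寸、明暗变化等预处理手段可示意如下;图像以NumPy灰度数组表示,旋转角度、缩放方式与明暗系数均为示例假设。

```python
import numpy as np

def augment(img: np.ndarray):
    """对单张鼻纹图像做旋转、改变尺寸、明暗变化,扩展训练样本数量。
    img 为取值0~255的HxW灰度数组。"""
    samples = [img]
    samples.append(np.rot90(img))                 # 旋转90度
    samples.append(img[::2, ::2])                 # 隔行隔列下采样,改变尺寸
    samples.append(np.clip(img * 1.3, 0, 255))    # 调亮
    samples.append(np.clip(img * 0.7, 0, 255))    # 调暗
    return samples
```

实际训练中通常还会配合随机裁剪、噪声等手段,并保证增广后的图像仍与同一ID标识对应保存。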
实施例2
根据本发明的另一实施例,提供了一种基于猫鼻纹特征提取模型的猫鼻纹识别装置,参见图9,包括:
请求接收模块901,用于接收猫鼻纹识别请求,猫鼻纹识别请求至少携带有待识别图像数据;
在本实施例中,由于在猫鼻头皮肤的发育过程中,虽然表皮、真皮以及基质层都在共同长大,但是柔软的皮下组织长得相对比坚硬的表皮快,因此会对表皮产生源源不断的上顶压力,迫使长得较慢的表皮向内层组织收缩塌陷,并逐渐变弯打皱,以减轻皮下组织施加给它的压力。如此一来,皮下组织不断向上顶,表皮被迫向下收缩,导致表皮长得曲曲弯弯、坑洼不平,从而形成纹路。这种变弯打皱的过程随着内层组织产生的上顶压力变化而波动起伏,形成凹凸不平的脊纹或皱褶,直到发育过程中止,最终定型为终生不变的鼻纹,从这一点上来说,猫鼻纹的形成与人类指纹的形成如出一辙。而且鼻子作为猫最奇特的部位,是它们赖以生存的根基,猫会格外注意鼻子的安危,一旦嗅到危险的气息便立刻做出相应的防御反应,并且时间的推移不会改变鼻纹纹路的形状。
因此,本实施例基于猫鼻纹特征提取模型对猫鼻纹图像进行鼻纹识别以保证对猫鼻纹识别的准确率,从而保证对猫身份认证的准确率。
其中,猫鼻纹识别请求是用户根据实际具体鼻纹识别场景的应用,需要进行的识别操作而输入的操作请求;而猫鼻纹识别请求至少携带有待识别图像数据,该待识别图像数据是实际具体鼻纹识别场景的应用中提供的图像数据,以使后续根据该待识别图像数据确定该具体鼻纹识别场景的猫身份。
具体地,接收用户从客户端输入的猫鼻纹识别请求,并获取猫鼻纹识别请求至少携带有待识别图像数据,能够使后续对该待识别图像数据进行分析识别,以准确识别待识别图像数据中猫鼻纹,从而准确获取该具体鼻纹识别场景的猫身份。
识别模式选择模块902,用于选择与待识别图像数据相匹配的鼻纹图像识别模式;
在本实施例中,鼻纹图像识别模式是根据实际的具体鼻纹识别场景模拟实验得出的几种适用于猫鼻纹图像的识别方法。
进一步地,由于基于深度学习图像处理技术的猫鼻纹识别技术在宠物市场中不仅技术成本低且识别精度高,故本实施例中的鼻纹图像识别模式,具体可以是通过构建以及训练适用于猫鼻纹特征提取的深度学习模型,并配合特征匹配算法进行鼻纹识别,以保证猫鼻纹识别的准确率。
具体地,本实施例通过对在请求接收模块901中获取到的待识别图像数据进行解析,并根据解析结果索引相适配的鼻纹图像识别模式,能够使后续按照索引到的鼻纹图像识别模式对待识别图像数据进行识别分析,以准确识别待识别图像数据中猫鼻纹,从而准确获取该具体鼻纹识别场景的猫身份。
识别结果输出模块903,用于按照鼻纹图像识别模式对待识别图像数据进行识别,输出鼻纹识别结果。
在本实施例中,鼻纹识别结果是待识别图像数据中的鼻纹一致或不一致的结果,能够用于表明待识别图像数据中的猫是否相同。
具体地,按照鼻纹图像识别模式对待识别图像数据进行识别,具体可以是通过采用已构建以及训练好的适用于猫鼻纹特征提取的深度学习模型,并配合特征匹配算法对待识别图像数据进行鼻纹识别,以输出猫鼻纹是否一致或不一致的结果,能够保证获取鼻纹特征的准确性和保证性,从而保证猫鼻纹识别的准确率。
本发明实施例中的基于猫鼻纹特征提取模型的猫鼻纹识别装置,通过获取猫鼻纹识别请求中的待识别图像数据;然后,根据该待识别图像数据选择与该待识别图像数据相匹配的鼻纹图像识别模式,以实现具体鼻纹识别场景的应用,进而按照鼻纹图像识别模式对待识别图像数据进行识别,输出鼻纹识别结果,以保证具体鼻纹识别场景中的鼻纹识别准确率,本发明基于猫鼻纹特征提取模型的猫鼻纹识别装置能够提高猫鼻纹识别的准确率;本发明计算复杂度低,简便实用,成本低。
需要说明的是,本实施例采用的鼻纹识别技术不仅成本低、无需额外的设备费用且识别精度高,同时还可以避免伤害宠物身体等缺点,且鼻纹识别操作简单,可以帮助宠物服务机构节省大量身份认证时间提高工作效率;其次,本实施例采用的鼻纹识别方案能够帮助猫更好的进行比赛、保险、医疗等活动。
作为优选的技术方案中,参见图10,识别结果输出模块903包括:
第一特征提取单元101,用于将第一待识别鼻纹图像以及第二待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第一鼻纹特征向量以及第二鼻纹特征向量。
在本实施例中,训练好的鼻纹特征提取模型是通过采用大量的猫鼻纹图像的数据集对预先构建的基础鼻纹深度学习网络进行不断地迭代训练,而获取到的具有较高准确鼻纹特征识别率的模型。
其中,待识别图像数据为第一待识别鼻纹图像以及第二待识别鼻纹图像,具体可以理解为具体的鼻纹识别场景为判断两张鼻纹图像中的猫是否为同一只,两张鼻纹图像即第一待识别鼻纹图像以及第二待识别鼻纹图像。
其中,第一鼻纹特征向量是采用训练好的鼻纹特征提取模型对第一待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现;同理,第二鼻纹特征向量是采用训练好的鼻纹特征提取模型对第二待识别鼻纹图像进行鼻纹特征提取后输出的特征序列。
具体地,本实施例分别将第一待识别鼻纹图像以及第二待识别鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并分别输出与第一待识别鼻纹图像对应的特征序列,即第一鼻纹特征向量,以及第二待识别鼻纹图像对应的特征序列,即第二鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
第一相似度计算单元102,用于计算第一鼻纹特征向量以及第二鼻纹特征向量之间的第一相似度。
在本实施例中,第一相似度是用于量化第一鼻纹特征向量以及第二鼻纹特征向量之间的相似程度,即将相似程度数据化,相似度的数值越大,可以理解为第一鼻纹特征向量以及第二鼻纹特征向量对应的第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫鼻纹越相似,即图像中的猫越相似。
进一步地,计算第一鼻纹特征向量以及第二鼻纹特征向量之间的第一相似度具体可以通过计算两个特征向量之间的余弦值,即余弦相似度算法,或者是计算两个特征向量之间的点积,来表示该第一相似度,还可以是采用其他计算方式,此处不作具体限制。
具体地,本实施例通过计算第一鼻纹特征向量以及第二鼻纹特征向量之间的点积,并将该点积的值作为第一相似度。
第一比较条件判断单元103,用于判断第一相似度是否符合预设的第一比较条件。
在本实施例中,第一比较条件是用于衡量第一相似度是否达到能够判断第一鼻纹特征向量以及第二鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第一比较条件为第一相似度是否大于预设的第一阈值,则判断第一相似度是否符合预设的第一比较条件,即将在第一相似度计算单元102中获取到的第一相似度与预设的第一阈值进行比较,从而判定第一相似度是否符合预设的第一比较条件。
第一信息输出单元1031,用于若第一相似度符合预设的第一比较条件,则输出第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫为同一只的信息。
具体地,根据第一比较条件判断单元103中第一相似度与预设的第一阈值进行比较的比较结果,当结果为第一相似度大于预设的第一阈值,即第一相似度符合预设的第一比较条件,可以理解为第一鼻纹特征向量以及第二鼻纹特征向量是同一猫的鼻纹特征向量,即该第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫鼻纹是同一猫的猫鼻纹,从而可以确定该第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫为同一只,则可以将包含有第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫为同一只的信息输出至客户端以供用户进行使用或管理。
第二信息输出单元1032,用于若第一相似度不符合预设的第一比较条件,则输出第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫不是同一只的信息。
具体地,根据第一比较条件判断单元103中第一相似度与预设的第一阈值进行比较的比较结果,当结果为第一相似度小于或等于预设的第一阈值,即第一相似度不符合预设的第一比较条件,可以理解为第一鼻纹特征向量以及第二鼻纹特征向量不是同一猫的鼻纹特征向量,即该第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫鼻纹不是同一猫的猫鼻纹,从而可以确定该第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫不是同一只,则可以将包含有第一待识别鼻纹图像以及第二待识别鼻纹图像中的猫不是同一只的信息输出至客户端以供用户进行使用或管理。
在本实施例中,根据第一特征提取单元101至第二信息输出单元1032,本实施例通过将两张鼻纹图像分别输入训练好的鼻纹特征提取模型得到鼻纹特征,然后,计算两个鼻纹特征向量的点积来表示相似度,并将相似度和设定的第一阈值进行比较以判定是否为同一只猫,能够保证获取鼻纹特征的准确性和保证性,从而保证猫鼻纹识别的准确率,同时,操作简便,计算复杂度低,能够在一定程度上提高猫鼻纹识别的效率。
作为优选的技术方案中,参见图11,识别结果输出模块903还包括:
鼻纹图像获取单元111,用于在数据库中获取与第一ID标识相匹配的一组鼻纹图像。
在本实施例中,待识别图像数据为第三待识别鼻纹图像以及第一ID标识,具体可以理解为具体的鼻纹识别场景为判断以待识别的单张鼻纹图像与一个ID下的一组鼻纹图像中的猫是否为同一只,待识别的单张鼻纹图像即第三待识别鼻纹图像,以及与第一ID标识相匹配的一组鼻纹图像。
具体地,由于在数据库中保存有采集到每个猫数据相对应的一个固定的可区分的ID号,故本实施例可以根据获取到的第一ID标识在数据库中进行索引,以获取与该第一ID标识相匹配的一组鼻纹图像。
第二特征提取单元112,用于将第三待识别鼻纹图像以及一组鼻纹图像分别输入训练好的鼻纹特征提取模型进行特征提取操作,得到第三鼻纹特征向量以及N个鼻纹特征向量,其中N为大于0的正整数。
在本实施例中,第三鼻纹特征向量是采用训练好的鼻纹特征提取模型对第三待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现;同理N个鼻纹特征向量是采用训练好的鼻纹特征提取模型对一组鼻纹图像进行鼻纹特征提取后输出的N个特征序列。
具体地,本实施例分别将第三待识别鼻纹图像以及一组鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并分别输出与第三待识别鼻纹图像对应的特征序列,即第三鼻纹特征向量,以及一组鼻纹图像对应的N个特征序列,即N个鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
第二相似度计算单元113,用于分别计算第三鼻纹特征向量与每个鼻纹特征向量之间的第二相似度。
在本实施例中,第二相似度是用于量化第三鼻纹特征向量以及N个鼻纹特征向量之间的相似程度。
具体地,本实施例可以通过依次两两计算第三鼻纹特征向量与每个鼻纹特征向量之间的第二相似度,具体计算方式与第一相似度计算单元102中计算第一相似度的方式相同,此处不作赘述。
第二比较条件判断单元114,用于判断第二相似度是否符合预设的第二比较条件。
在本实施例中,第二比较条件是用于衡量第二相似度是否达到能够判断第三鼻纹特征向量以及N个鼻纹特征向量的哪些鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第二比较条件为第二相似度是否大于预设的第二阈值,则判断第二相似度是否符合预设的第二比较条件,即将在第二相似度计算单元113中获取到的N个第二相似度分别与预设的第二阈值进行比较,从而判定第二相似度是否符合预设的第二比较条件。
第三信息输出单元115,用于若有M个第二相似度符合预设的第二比较条件,则输出M个第二相似度对应的鼻纹图像与第三待识别鼻纹图像中的猫为同一只的信息,其中,M为小于等于N且大于等于0的正整数。
具体地,根据第二比较条件判断单元114中N个第二相似度分别与预设的第二阈值进行比较的比较结果,当结果为有M个第二相似度大于预设的第二阈值,即有M个第二相似度符合预设的第二比较条件,可以理解为这M个第二鼻纹特征向量以及第三鼻纹特征向量是同一猫的鼻纹特征向量,即该M个第二鼻纹特征向量对应的鼻纹图像以及第三待识别鼻纹图像中的猫鼻纹是同一猫的猫鼻纹,从而可以确定该M个第二鼻纹特征向量对应的鼻纹图像以及第三待识别鼻纹图像中的猫为同一只,则可以将包含有M个鼻纹特征向量对应的鼻纹图像以及第三待识别鼻纹图像中的猫为同一只的信息输出至客户端以供用户进行使用或管理。
需要说明的是,若有M个第二相似度符合预设的第二比较条件,则可以理解为N-M个第二相似度不符合预设的第二比较条件,即有N-M个第二相似度对应的鼻纹特征向量对应的鼻纹图像以及第三待识别鼻纹图像中的猫不是同一只。
在本实施例中,根据第二特征提取单元112至第三信息输出单元115,本实施例通过分别提取待识别的单张鼻纹图像以及某一ID下的一组鼻纹图像的鼻纹特征,然后,将该待识别的单张鼻纹图像的特征向量和一组鼻纹图像的多个特征向量依次两两点积计算相似度,进而依次将计算出的相似度值和预设的第二阈值进行比较,并记录符合大于第二阈值的个数,从而通过判断相似的个数来最终判定是否为同一只猫。
作为优选的技术方案中,参见图12,识别结果输出模块903还包括:
鼻纹特征获取单元121,用于在数据库中获取与第二ID标识相匹配的第五鼻纹特征向量。
在本实施例中,待识别图像数据为第四待识别鼻纹图像以及第二ID标识,具体可以理解为具体的鼻纹识别场景为判断以待识别的单张鼻纹图像与一个ID下的一个鼻纹特征向量对应的猫是否为同一只,待识别的单张鼻纹图像即第四待识别鼻纹图像,以及与第二ID标识相匹配的第五鼻纹特征向量。
具体地,由于在数据库中保存有采集到每个猫数据相对应的一个固定的可区分的ID号,故本实施例可以根据获取到的第二ID标识在数据库中进行索引,以获取与该第二ID标识相匹配的第五鼻纹特征向量。
第三特征提取单元122,用于将第四待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第四鼻纹特征向量。
在本实施例中,第四鼻纹特征向量是采用训练好的鼻纹特征提取模型对第四待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现。
具体地,本实施例通过将第四待识别鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并输出与第四待识别鼻纹图像对应的特征序列,即第四鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
第三相似度计算单元123,用于计算第四鼻纹特征向量与第五鼻纹特征向量之间的第三相似度。
在本实施例中,第三相似度是用于量化第四鼻纹特征向量以及第五鼻纹特征向量之间的相似程度。
具体地,本实施例计算第四鼻纹特征向量与第五鼻纹特征向量之间的第三相似度,具体的计算方式与第一相似度计算单元102中计算第一相似度的方式相同,此处不作赘述。
第三比较条件判断单元124,用于判断第三相似度是否符合预设的第三比较条件。
在本实施例中,第三比较条件是用于衡量第三相似度是否达到能够判断第四鼻纹特征向量以及第五鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第三比较条件为第三相似度是否大于预设的第三阈值,则判断第三相似度是否符合预设的第三比较条件,即将在第三相似度计算单元123中获取到的第三相似度分别与预设的第三阈值进行比较,从而判定第三相似度是否符合预设的第三比较条件。
第四信息输出单元1241,用于若第三相似度符合预设的第三比较条件,则输出第四待识别鼻纹图像中的猫以及第五鼻纹特征向量对应的猫为同一只的信息。
具体地,根据第三比较条件判断单元124中第三相似度与预设的第三阈值进行比较的比较结果,当结果为第三相似度大于预设的第三阈值,即第三相似度符合预设的第三比较条件,可以理解为第四鼻纹特征向量以及第五鼻纹特征向量是同一猫的鼻纹特征向量,即该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫鼻纹是同一猫的猫鼻纹,从而可以确定该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫为同一只,则可以将包含有第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫为同一只的信息输出至客户端以供用户进行使用或管理。
第五信息输出单元1242,用于若第三相似度不符合预设的第三比较条件,则输出第四待识别鼻纹图像中的猫以及第五鼻纹特征向量对应的猫不是同一只的信息。
具体地,根据第三比较条件判断单元124中第三相似度与预设的第三阈值 进行比较的比较结果,当结果为第三相似度小于或等于预设的第三阈值,即第三相似度不符合预设的第三比较条件,可以理解为第四鼻纹特征向量以及第五鼻纹特征向量不是同一猫的鼻纹特征向量,即该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫鼻纹不是同一猫的猫鼻纹,从而可以确定该第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫不是同一只,则可以将包含有第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫不是同一只的信息输出至客户端以供用户进行使用或管理。
在本实施例中,根据鼻纹特征获取单元121至第五信息输出单元1242,本实施例通过将一第四待识别鼻纹图像经过鼻纹特征提取模型提取特征向量,然后将该第四鼻纹特征向量和数据库中一ID号对应的第五鼻纹特征向量计算点积得到相似度,再通过相似度和阈值判定第四待识别鼻纹图像以及第五鼻纹特征向量对应的猫是否为同一只猫。
作为优选的技术方案中,参见图13,识别结果输出模块903还包括:
第四特征提取单元131,用于将第六待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第六鼻纹特征向量。
在本实施例中,待识别图像数据为第六待识别鼻纹图像以及ID标识数据,具体可以理解为具体的鼻纹识别场景为判断以待识别的单张鼻纹图像与数据库中的全部ID对应的鼻纹特征向量对应的猫是否为同一只,即判断第六待识别鼻纹图像与数据库中的全部鼻纹特征向量中的哪些鼻纹特征向量对应的猫为同一只,待识别的单张鼻纹图像即第六待识别鼻纹图像,以及与ID标识数据相匹配的数据库中的全部鼻纹特征向量。
其中,第六鼻纹特征向量是采用训练好的鼻纹特征提取模型对第六待识别鼻纹图像进行鼻纹特征提取后输出的特征序列,通常以向量的形式呈现。
具体地,本实施例通过将第六待识别鼻纹图像输入训练好的鼻纹特征提取模型进行鼻纹特征提取操作,并输出与第六待识别鼻纹图像对应的特征序列,即第六鼻纹特征向量,能够实现对图像数据中的鼻纹特征的准确获取,从而在一定程度上保证对猫鼻纹识别的准确率。
特征相似度计算单元132,用于分别计算第六鼻纹特征向量与每个猫鼻纹特征向量之间的相似度,得到K个特征相似度,其中K为大于0的正整数。
在本实施例中,特征相似度是用于量化第六鼻纹特征向量以及K个猫鼻纹特征向量之间的相似程度。
具体地,本实施例可以通过依次两两计算第六鼻纹特征向量与每个猫鼻纹特征向量之间的特征相似度,得到K个特征相似度,具体计算方式与第一相似度计算单元102中计算第一相似度的方式相同,此处不作赘述。
特征相似度排序单元133,用于将K个特征相似度进行排序,获取最大特征相似度。
具体地,本实施例通过对特征相似度按照从高到低的顺序进行排序,以快速获取排序第一的特征相似度作为最大特征相似度。
第四比较条件判断单元134,用于判断最大特征相似度是否符合预设的第四比较条件。
在本实施例中,第四比较条件是用于衡量最大特征相似度以及其他特征相似度是否达到能够判断第六鼻纹特征向量以及K个猫鼻纹特征向量的哪些猫鼻纹特征向量是一致的标准,具体可以根据实际应用需求进行设置,此处不作具体限制。
具体地,假设第四比较条件为特征相似度是否大于预设的第四阈值,则判断最大特征相似度是否符合预设的第四比较条件,即将在特征相似度排序单元133中获取到的最大特征相似度分别与预设的第四阈值进行比较,从而判定最大特征相似度是否符合预设的第四比较条件。
第六信息输出单元1341,用于若最大特征相似度不符合预设的第四比较条件,则输出数据库中没有与第六待识别鼻纹图像中的猫相同的信息。
具体地,根据第四比较条件判断单元134中最大特征相似度与预设的第四阈值进行比较的比较结果,当结果为最大特征相似度小于或等于预设的第四阈值,即最大特征相似度不符合预设的第四比较条件,可以理解为第六鼻纹特征向量以及K个猫鼻纹特征向量中没有相同的猫的鼻纹特征向量,即该第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫鼻纹中没有同一猫的猫鼻纹,从而可以确定该第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫中没有相同猫,则可以将包含有第六待识别鼻纹图像以及K个猫鼻纹特征向量对应的猫中没有相同猫的信息输出至客户端以供用户进行使用或管理。
相似度判断单元1342,用于若最大特征相似度符合预设的第四比较条件,则判断K个特征相似度中是否有符合预设的第四比较条件的相似度。
具体地,根据第四比较条件判断单元134中最大特征相似度与预设的第四阈值进行比较的比较结果,当结果为最大特征相似度大于预设的第四阈值,即最大特征相似度符合预设的第四比较条件,为了进一步确定第六鼻纹特征向量以及K个猫鼻纹特征向量的哪些猫鼻纹特征向量可能是相同的猫鼻纹特征,本实施例通过判断K个特征相似度中是否有符合预设的第四比较条件的相似度,具体可以是将K个特征相似度分别与预设的第四阈值进行比较并获取比较结果。
第七信息输出单元1343,用于若有J个特征相似度符合第四比较条件,则输出J个特征相似度的ID对应的猫与第六待识别鼻纹图像中的猫为同一只的信息,其中,J为小于等于K且大于等于0的正整数。
具体地,根据相似度判断单元1342中K个特征相似度分别与预设的第四阈值进行比较的比较结果,当结果为有J个特征相似度大于预设的第四阈值,即有J个特征相似度符合预设的第四比较条件,可以理解为这J个猫鼻纹特征向量以及第六鼻纹特征向量是同一猫的鼻纹特征向量,即该J个猫鼻纹特征向量对应的鼻纹图像以及第六待识别鼻纹图像中的猫鼻纹是同一猫的猫鼻纹,从而可以确定该J个猫鼻纹特征向量对应的鼻纹图像以及第六待识别鼻纹图像中的猫为同一只,则可以将包含有J个猫鼻纹特征向量对应的猫以及第六待识别鼻纹图像中的猫为同一只的信息输出至客户端以供用户进行使用或管理。
需要说明的是,若有J个特征相似度符合预设的第四比较条件,则可以理解为K-J个特征相似度不符合预设的第四比较条件,即有K-J个特征相似度对应的猫鼻纹特征向量对应的猫以及第六待识别鼻纹图像中的猫不是同一只。
在本实施例中,根据第四特征提取单元131至第七信息输出单元1343,本实施例通过将第六待识别鼻纹图像经过鼻纹特征提取模型提取第六鼻纹特征向量,然后将该第六鼻纹特征向量依次和数据库中的所有ID对应的猫鼻纹特征向量进行点积计算相似度,并对K个特征相似度从高到低进行排序,如果最大特征相似度没有超过设定的第四阈值,则判定数据库中没有该第六待识别鼻纹图像对应的猫;如果超过设定的第四阈值,则输出该最大特征相似度对应的ID;如果超过设定的第四阈值的ID数大于三个,则输出相似度最高的三个ID即可。
作为优选的技术方案中,参见图14,该装置还包括:
网络构建模块141,用于构建基础鼻纹深度学习网络;
在本实施例中,为了能够获取到鼻纹识别准确率较高的模型,本实施例构建的基础鼻纹深度学习网络采用Resnet50作为骨架网络,也可以根据实际应用需求采用其他网络,此处不作具体限制。
具体地,本实施例通过采用Resnet50作为骨架网络,接着引入注意力模型强化网络,然后采用损失函数来收敛整个网络,最后通过softmax层输出分类结果,以构建该基础鼻纹深度学习网络,其中,该基础鼻纹深度学习网络可以理解为包括预处理图像数据的输入层、用于鼻纹特征提取的卷积层、用于进一步鼻纹特征提取的采样层、用于鼻纹特征压缩的池化层、用于对鼻纹进行分类的全连接层,以及用于输出鼻纹分类结果的softmax层等等,能够在一定程度上保证鼻纹识别的准确率。
数据集获取模块142,用于对数据库中的猫图像进行标注,得到用于训练的鼻纹数据集;
在本实施例中,鼻纹数据集是预先处理和制作好的鼻纹训练集以及鼻纹测试集。
具体地,本实施例可以根据实际应用需要的数据类型在数据库中进行索引,以快速准确的获取到预先处理和制作好的鼻纹训练集以及鼻纹测试集,以供后续进行训练。
网络训练模块143,用于将鼻纹数据集输入基础鼻纹深度学习网络中进行迭代训练操作,得到训练好的鼻纹分类模型;
在本实施例中,训练好的鼻纹分类模型是用于识别鼻纹图像的鼻纹特征,并能够对提取出的鼻纹特征进行鼻纹分类的模型。
具体地,本实施例通过利用预先处理和制作好的鼻纹训练集以及鼻纹测试集对基础鼻纹深度学习网络进行大量的迭代训练,从而获取到能够识别鼻纹图像的鼻纹特征,并能够对提取出的鼻纹特征进行鼻纹分类的鼻纹分类模型。
模型截取模块144,用于截取鼻纹分类模型中从输入层到输出特征的层,作为鼻纹特征提取模型。
在本实施例中,截取鼻纹分类模型中从输入层到输出特征的层可以理解为截取鼻纹分类模型中用于预处理图像数据的输入层、用于鼻纹特征提取的卷积层、用于进一步鼻纹特征提取的采样层以及用于鼻纹特征压缩的池化层,具体还可以根据实际应用需求进行截取,此处不作具体限制。
具体地,截取鼻纹分类模型中从输入层到输出特征的层具体可以截去鼻纹分类模型中的全连接层之后的层,保留从输入层到输出特征的层作为鼻纹特征提取模型。
在本实施例中,根据网络构建模块141至模型截取模块144,本实施例通过利用Resnet50作为骨架网络,接着引入注意力模型强化网络,然后采用损失函数来收敛整个网络,最后通过softmax层得到分类结果,从而构建出基础鼻纹深度学习网络。利用制作好的鼻纹训练集和测试集对基础鼻纹深度学习网络进行大量的迭代训练得到鼻纹分类模型,然后,截取模型从输入层到输出特征的层,得到最终的鼻纹特征提取模型。
作为优选的技术方案中,参见图15,该装置还包括:
鼻纹图像采集模块151,用于采集A组不同猫的鼻纹图像,其中A为大于0的正整数;
具体地,由于训练深度学习模型需要大量的数据,因此本实施例采集了A组不同猫的鼻纹图像,如3000组不同猫的鼻纹图像。
标识设置模块152,用于对每个猫设置一个ID标识;
在本实施例中,ID标识用于唯一标识一个猫,其中,每个猫与ID标识为一一对应的。
具体地,本实施例对每个猫分配一个固定的可区分的ID号,如A111。
图像预处理模块153,用于针对每个ID标识,将每个鼻纹图像进行预处理操作,得到处理后的鼻纹训练图像以及鼻纹测试图像;
在本实施例中,为了保证基础鼻纹深度学习网络对鼻纹数据集特征提取以及识别的准确率,需要保证输入基础鼻纹深度学习网络的鼻纹数据集的数据格式是与基础鼻纹深度学习网络相适配的,故本实施例针对每个ID标识,通过将每个鼻纹图像进行预处理操作,以得到处理后的数据格式与基础鼻纹深度学习网络相适配的鼻纹训练图像以及鼻纹测试图像。
其中,预处理操作具体可以是鼻纹图像采用旋转,改变尺寸,明暗变化等图像预处理手段,还可以根据实际应用需求进行其他图像处理手段,此处不作具体限制。
具体地,针对每个ID标识,将每个鼻纹图像分别进行图像旋转,改变尺寸,以及明暗变化等图像预处理手段,以获取处理好的数据格式与基础鼻纹深度学习网络相适配的鼻纹训练图像以及鼻纹测试图像。
数据集保存模块154,用于将鼻纹训练图像以及鼻纹测试图像与ID标识对应保存至数据库中,得到鼻纹数据集。
具体地,将在图像预处理模块153中获取到的鼻纹训练图像以及鼻纹测试图像与其ID标识一一对应保存至数据库中,以得到鼻纹数据集。
在本实施例中,根据鼻纹图像采集模块151至数据集保存模块154,由于训练深度学习模型需要大量的数据,因此本实施例采集了A组不同猫的鼻纹图像,并为每个猫分配一个固定的可区分的ID号,进一步地,为了增强训练时模型的鲁棒性,我们对这A组ID标识下的鼻纹图像采用旋转,改变尺寸,明暗变化等图像预处理手段来扩展图像数量得到鼻纹训练集以及鼻纹测试集,作为该鼻纹数据集。
与现有的猫识别方法相比,本发明基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置的优点在于:
1.本实施例通过获取猫鼻纹识别请求中的待识别图像数据;然后,根据该待识别图像数据选择与该待识别图像数据相匹配的鼻纹图像识别模式,以实现具体鼻纹识别场景的应用,进而按照鼻纹图像识别模式对待识别图像数据进行识别,操作流程简单,效率高以及准确率高;
2.本实施例采用的鼻纹识别技术不仅成本低、无需额外的设备费用且识别精度高,同时还可以避免伤害宠物身体等缺点,且鼻纹识别操作简单,可以帮助宠物服务机构节省大量身份认证时间提高工作效率;
3.本实施例基于猫鼻纹特征提取模型的猫鼻纹识别能够适用于广泛的场景,前景好。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本发明。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本发明中所定义的一般原理可以在不脱离本发明的精神或范围的情况下,在其它实施例中实现。因此,本发明将不会被限制于本发明所示的这些实施例,而是要符合与本发明所公开的原理和新颖特点相一致的最宽的范围。

Claims (10)

  1. 一种基于猫鼻纹特征提取模型的猫鼻纹识别方法,其特征在于,包括以下步骤:
    接收猫鼻纹识别请求,所述猫鼻纹识别请求至少携带有待识别图像数据;
    选择与所述待识别图像数据相匹配的鼻纹图像识别模式;
    按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果。
  2. 根据权利要求1所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,所述待识别图像数据为第一待识别鼻纹图像以及第二待识别鼻纹图像,其特征在于,所述按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果的步骤包括:
    将所述第一待识别鼻纹图像以及所述第二待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第一鼻纹特征向量以及第二鼻纹特征向量;
    计算所述第一鼻纹特征向量以及所述第二鼻纹特征向量之间的第一相似度;
    判断所述第一相似度是否符合预设的第一比较条件;
    若所述第一相似度符合预设的第一比较条件,则输出所述第一待识别鼻纹图像以及所述第二待识别鼻纹图像中的猫为同一只的信息;
    若所述第一相似度不符合预设的第一比较条件,则输出所述第一待识别鼻纹图像以及所述第二待识别鼻纹图像中的猫不是同一只的信息。
  3. 根据权利要求2所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,所述待识别图像数据为第三待识别鼻纹图像以及第一ID标识,其特征在于,所述按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果的步骤还包括:
    在数据库中获取与所述第一ID标识相匹配的一组鼻纹图像;
    将所述第三待识别鼻纹图像以及一组所述鼻纹图像分别输入训练好的鼻纹特征提取模型进行特征提取操作,得到第三鼻纹特征向量以及N个鼻纹特征向量,其中N为大于0的正整数;
    分别计算所述第三鼻纹特征向量与每个所述鼻纹特征向量之间的第二相似度;
    判断所述第二相似度是否符合预设的第二比较条件;
    若有M个所述第二相似度符合预设的第二比较条件,则输出M个所述第二相似度对应的鼻纹图像与所述第三待识别鼻纹图像中的猫为同一只的信息,其中,M为小于等于N且大于等于0的正整数。
  4. 根据权利要求2所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,所述待识别图像数据为第四待识别鼻纹图像以及第二ID标识,其特征在于,所述按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果的步骤还包括:
    在数据库中获取与所述第二ID标识相匹配的第五鼻纹特征向量;
    将所述第四待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第四鼻纹特征向量;
    计算所述第四鼻纹特征向量与所述第五鼻纹特征向量之间的第三相似度;
    判断所述第三相似度是否符合预设的第三比较条件;
    若所述第三相似度符合预设的第三比较条件,则输出所述第四待识别鼻纹图像中的猫以及所述第五鼻纹特征向量对应的猫为同一只的信息;
    若所述第三相似度不符合预设的第三比较条件,则输出所述第四待识别鼻纹图像中的猫以及所述第五鼻纹特征向量对应的猫不是同一只的信息。
  5. 根据权利要求2所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,所述待识别图像数据为第六待识别鼻纹图像以及ID标识数据,其特征在于,所述按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果的步骤还包括:
    在数据库中获取与所述ID标识数据中的每个ID对应的猫鼻纹特征向量;
    将所述第六待识别鼻纹图像输入训练好的鼻纹特征提取模型进行特征提取操作,得到第六鼻纹特征向量;
    分别计算所述第六鼻纹特征向量与每个所述猫鼻纹特征向量之间的相似度,得到K个特征相似度,其中K为大于0的正整数;
    将K个所述特征相似度进行排序,获取最大特征相似度;
    判断所述最大特征相似度是否符合预设的第四比较条件;
    若所述最大特征相似度不符合预设的第四比较条件,则输出数据库中没有与所述第六待识别鼻纹图像中的猫相同的信息;
    若所述最大特征相似度符合预设的第四比较条件,则判断K个特征相似度中是否有符合预设的所述第四比较条件的相似度;
    若有J个特征相似度符合所述第四比较条件,则输出J个所述特征相似度的ID对应的猫与所述第六待识别鼻纹图像中的猫为同一只的信息,其中,J为小于等于K且大于等于0的正整数。
  6. 根据权利要求2所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,其特征在于,在所述选择与所述待识别图像数据相匹配的鼻纹图像识别模式的步骤之前,所述方法还包括:
    构建基础鼻纹深度学习网络;
    对数据库中的猫图像进行标注,得到用于训练的鼻纹数据集;
    将所述鼻纹数据集输入基础鼻纹深度学习网络中进行迭代训练操作,得到训练好的鼻纹分类模型;
    截取所述鼻纹分类模型中从输入层到输出特征的层,作为所述鼻纹特征提取模型。
  7. 根据权利要求6所述的基于猫鼻纹特征提取模型的猫鼻纹识别方法,其特征在于,在所述对数据库中的猫图像进行标注,得到用于训练的分割数据集的步骤之前,所述方法还包括:
    采集A组不同猫的鼻纹图像,其中A为大于0的正整数;
    对每个猫设置一个ID标识;
    针对每个ID标识,将每个鼻纹图像进行预处理操作,得到处理后的鼻纹训练图像以及鼻纹测试图像;
    将所述鼻纹训练图像以及所述鼻纹测试图像与所述ID标识对应保存至数据库中,得到所述鼻纹数据集。
  8. 一种基于猫鼻纹特征提取模型的猫鼻纹识别装置,其特征在于,包括:
    请求接收模块,用于接收猫鼻纹识别请求,所述猫鼻纹识别请求至少携带有待识别图像数据;
    识别模式选择模块,用于选择与所述待识别图像数据相匹配的鼻纹图像识别模式;
    识别结果输出模块,用于按照所述鼻纹图像识别模式对所述待识别图像数据进行识别,输出鼻纹识别结果。
  9. 根据权利要求8所述的基于猫鼻纹特征提取模型的猫鼻纹识别装置,其特征在于,所述装置还包括:
    网络构建模块,用于构建基础鼻纹深度学习网络;
    数据集获取模块,用于对数据库中的猫图像进行标注,得到用于训练的鼻纹数据集;
    网络训练模块,用于将所述鼻纹数据集输入基础鼻纹深度学习网络中进行迭代训练操作,得到训练好的鼻纹分类模型;
    模型截取模块,用于截取所述鼻纹分类模型中从输入层到输出特征的层,作为所述鼻纹特征提取模型。
  10. 根据权利要求9所述的基于猫鼻纹特征提取模型的猫鼻纹识别装置,其特征在于,所述装置还包括:
    鼻纹图像采集模块,用于采集A组不同猫的鼻纹图像,其中A为大于0的正整数;
    标识设置模块,用于对每个猫设置一个ID标识;
    图像预处理模块,用于针对每个ID标识,将每个鼻纹图像进行预处理操作,得到处理后的鼻纹训练图像以及鼻纹测试图像;
    数据集保存模块,用于将所述鼻纹训练图像以及所述鼻纹测试图像与所述ID标识对应保存至数据库中,得到所述鼻纹数据集。
PCT/CN2021/089559 2020-10-27 2021-04-25 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置 WO2022088626A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011162210.2 2020-10-27
CN202011162210.2A CN112329573A (zh) 2020-10-27 2020-10-27 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置

Publications (1)

Publication Number Publication Date
WO2022088626A1 true WO2022088626A1 (zh) 2022-05-05

Family

ID=74312335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089559 WO2022088626A1 (zh) 2020-10-27 2021-04-25 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置

Country Status (2)

Country Link
CN (1) CN112329573A (zh)
WO (1) WO2022088626A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329573A (zh) * 2020-10-27 2021-02-05 苏州中科先进技术研究院有限公司 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置
CN113076886A (zh) * 2021-04-09 2021-07-06 深圳市悦保科技有限公司 一种猫的面部个体识别装置和方法
CN113722522A (zh) * 2021-09-02 2021-11-30 四川楠水农牧科技有限公司 一种牛只唯一性识别方法、终端、可读存储介质
CN114511886A (zh) * 2022-02-22 2022-05-17 华南师范大学 一种基于深度残差收缩网络的犬鼻纹识别方法和模型

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150136719A (ko) * 2014-05-27 2015-12-08 (주)링크옵틱스 애완견 관리를 위한 비문 인식처리장치
CN109543663A (zh) * 2018-12-28 2019-03-29 北京旷视科技有限公司 一种犬只身份识别方法、装置、系统及存储介质
CN109829381A (zh) * 2018-12-28 2019-05-31 北京旷视科技有限公司 一种犬只识别管理方法、装置、系统及存储介质
CN110737885A (zh) * 2019-10-16 2020-01-31 支付宝(杭州)信息技术有限公司 豢养物的身份认证方法及装置
CN110765323A (zh) * 2019-10-24 2020-02-07 武汉菲旺软件技术有限责任公司 小区宠物狗的识别方法、装置、设备及介质
CN112329573A (zh) * 2020-10-27 2021-02-05 苏州中科先进技术研究院有限公司 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929650B (zh) * 2019-11-25 2022-04-12 蚂蚁胜信(上海)信息技术有限公司 豢养物身份识别方法、装置、计算设备、可读存储介质


Also Published As

Publication number Publication date
CN112329573A (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
WO2022088626A1 (zh) 基于猫鼻纹特征提取模型的猫鼻纹识别方法及装置
CN104102915B (zh) 一种心电异常状态下基于ecg多模板匹配的身份识别方法
CN101226591A (zh) 基于手机摄像头结合人脸识别技术的身份识别方法
CN101874738B (zh) 基于压力累积足印图像的人体生理分析与身份识别的方法
CN107944356B (zh) 综合多类型特征的层次主题模型掌纹图像识别的身份认证方法
Kumar et al. An improved biometric fusion system of fingerprint and face using whale optimization
Kuzu et al. Loss functions for CNN-based biometric vein recognition
CN109497990A (zh) 一种基于典型相关分析的心电信号身份识别方法及系统
CN101089874A (zh) 一种远程人脸图像的身份识别方法
CN1304114A (zh) 基于多生物特征的身份鉴定融合方法
CN106203497A (zh) 一种基于图像质量评价的手指静脉感兴趣区域图像筛选方法
Omoyiola Overview of biometric and facial recognition techniques
CN110008674A (zh) 一种高泛化性的心电信号身份认证方法
CN101819629A (zh) 一种基于监督张量流形学习的掌纹识别系统及识别方法
Garg et al. Biometric authentication using finger nail surface
CN110163123A (zh) 一种基于单幅近红外手指图像指纹指静脉融合识别方法
CN114239649B (zh) 面向可穿戴设备光电容积脉搏波信号发现和识别新用户的身份识别方法
Arora et al. Palmhashnet: Palmprint hashing network for indexing large databases to boost identification
CN113468988B (zh) 一种基于ecg信号的多压力状态下身份识别方法
Arora et al. FKPIndexNet: An efficient learning framework for finger-knuckle-print database indexing to boost identification
Zhou et al. A novel approach for code match in iris recognition
Cherifi et al. Fusion of face recognition methods at score level
CN113705443A (zh) 综合利用知识图谱和深度残差网络的掌纹图像识别方法
CN108491802A (zh) 基于联合加权差分激励和双Gabor方向的掌纹交叉匹配识别方法
Hashemi et al. Biometric identification through hand geometry

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884369

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884369

Country of ref document: EP

Kind code of ref document: A1