WO2022082401A1 - 宠物鼻纹识别方法、装置、计算机设备及存储介质 - Google Patents

宠物鼻纹识别方法、装置、计算机设备及存储介质 (Pet nose print identification method and apparatus, computer device, and storage medium)

Info

Publication number
WO2022082401A1
WO2022082401A1 · PCT/CN2020/122025
Authority
WO
WIPO (PCT)
Prior art keywords
nose print
pet
sample
nose
recognition model
Prior art date
Application number
PCT/CN2020/122025
Other languages
English (en)
French (fr)
Inventor
虞爱
肖唐财
谭升
刘周可
Original Assignee
深圳市悦保科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市悦保科技有限公司 filed Critical 深圳市悦保科技有限公司
Priority to PCT/CN2020/122025 priority Critical patent/WO2022082401A1/zh
Publication of WO2022082401A1 publication Critical patent/WO2022082401A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular, to a method, device, computer equipment and storage medium for identifying pet nose prints.
  • Non-biometric identification technology is also known as RFID (Radio Frequency Identification) pet identification technology: an implanted identification chip generates an induced current from the signal emitted by a scanner and uses this energy to transmit the information stored in the chip, which the scanner then receives and recognizes.
  • The inventor has realized that this animal identification method requires a chip to be implanted in the pet's body and professional equipment to transmit signals for identification, which may harm the pet and is inconvenient to use.
  • biometric technologies include facial recognition, iris recognition, voiceprint recognition, and gait recognition.
  • Facial recognition achieves high accuracy on human faces, but its accuracy is low for the faces of furry pets.
  • Although iris recognition is unique and highly precise, it requires professional equipment to collect images, which makes it costly to develop and inconvenient to use.
  • Voiceprint and gait are also unique and theoretically feasible, but the technology is not mature enough.
  • the embodiments of the present application provide a method, device, computer equipment and storage medium for identifying pet nose prints, so as to solve the technical problems of low identification accuracy and high identification cost when identifying hairy pets in the prior art.
  • a method for identifying a pet nose print comprising:
  • When the calculated Euclidean distance is smaller than the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet.
  • a pet nose print identification device comprising:
  • the image recognition module is used to obtain the original picture with the pet's nose print, and identify the nose print image in the original image through the pre-trained nose print image recognition model;
  • the feature extraction module is used for extracting the nose print feature of the nose print image through the pre-trained nose print recognition model to obtain the feature vector to be verified of the nose print image;
  • a calculation module, used to respectively calculate the Euclidean distance between the feature vector to be verified and the feature vector of each pet stored in the archive;
  • the pet determination module is configured to determine that the pet in the original image corresponding to the feature vector to be verified is the same pet as the correspondingly calculated pet in the archive when the calculated Euclidean distance is smaller than the first preset value.
  • a computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
  • When the calculated Euclidean distance is less than the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet.
  • One or more readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • When the calculated Euclidean distance is less than the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet.
  • With the pet nose print identification method, device, computer equipment and storage medium proposed in the present application, the entire animal identification process only requires a camera to capture images of the pet's nose print, which reduces the cost of pet identification. Because nose prints are unique, the nose print in the nose print image is converted into a feature vector by the pre-trained nose print image recognition model and nose print recognition model and compared with the feature vector of each pet stored in the archive, so the pet can be accurately and uniquely identified; the identification process is accurate and the identification efficiency is high.
  • FIG. 1 is a schematic diagram of an application environment of a pet nose print identification method in an embodiment of the present application
  • FIG. 2 is a flowchart of a method for identifying a pet nose print in an embodiment of the present application
  • FIG. 3 is a flow chart of an application scenario of the pet nose print recognition method in an embodiment of the present application.
  • FIG. 4 is a flowchart of training a nose print image recognition model in an embodiment of the present application.
  • FIG. 5 is a flowchart of training a nose print recognition model in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a pet nose print recognition device in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a computer device in an embodiment of the present application.
  • the pet nose print identification method provided in this application can be applied in the application environment as shown in FIG. 1 , where the computer equipment includes but is not limited to various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • a method for identifying a pet nose print is provided, which is described by taking the method applied to the computer device in FIG. 1 as an example, including the following steps S101 to S104 .
  • The original picture with the pet's nose print is the pet nose print picture to be identified; it can be crawled from the Internet or captured with the user's mobile phone camera.
  • Further, the method also includes: cropping the original picture according to the identified nose print, and identifying the nose print image in the cropped original picture through the pre-trained nose print image recognition model.
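  • The following is a minimal sketch of the cropping step, assuming the detection model returns a bounding box as (left, top, right, bottom) pixel coordinates; the box format, function names and padding margin are illustrative assumptions rather than the patent's actual implementation.

```python
# Hypothetical cropping helper; Pillow is assumed to be available.
from PIL import Image

def crop_nose_region(image_path, box, margin=10):
    """Crop the detected nose print region from the original picture."""
    img = Image.open(image_path)
    left, top, right, bottom = box
    # Expand the box slightly and clamp it to the image borders.
    left = max(left - margin, 0)
    top = max(top - margin, 0)
    right = min(right + margin, img.width)
    bottom = min(bottom + margin, img.height)
    return img.crop((left, top, right, bottom))
```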
  • FIG. 4 is a flowchart of training a nose print image recognition model in an embodiment of the present application.
  • As shown in FIG. 4, sample images are first collected and manually annotated, mainly by marking the nose print region in each sample image. The annotated sample images are then divided into a training sample set and a test sample set; the nose print image recognition model is trained on the training sample set and tested on the test sample set, and the training results and test results are combined to determine whether the nose print image recognition model meets the training standard.
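  • A minimal sketch of splitting the annotated samples into training and test sets, assuming each annotation is an (image_path, (x1, y1, x2, y2)) pair; the 80/20 ratio and the data layout are illustrative assumptions.

```python
import random

def split_samples(annotated_samples, train_ratio=0.8, seed=42):
    """Randomly divide annotated sample images into a training set and a test set."""
    samples = list(annotated_samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# Example usage with hypothetical annotations:
# train_set, test_set = split_samples([("dog_001.jpg", (120, 80, 260, 210)), ...])
```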
  • S102 Perform nose print feature extraction on the nose print image by using a pre-trained nose print recognition model to obtain a feature vector of the nose print image to be verified.
  • FIG. 5 is a flowchart of training a nose print recognition model in an embodiment of the present application.
  • As shown in FIG. 5, nose print sample images are first collected and then sorted, and the sorted nose print sample images are divided into a nose print training sample set and a nose print test sample set. The nose print recognition model is trained on the nose print training sample set and tested on the nose print test sample set, and the training results and test results are combined to determine whether the nose print recognition model meets the training standard.
  • Euclidean distance is a commonly used definition of distance, which refers to the true distance between two points in m-dimensional space, or the natural length of a vector, that is, the distance from the point to the origin. Euclidean distance in 2D and 3D space is the actual distance between two points.
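  • The comparison step can be sketched in a few lines; the 128-dimensional embeddings and the threshold value below are illustrative assumptions, since the patent does not give a concrete first preset value.

```python
import numpy as np

def euclidean_distance(vec_a, vec_b):
    """True distance between two feature vectors in m-dimensional space."""
    return float(np.linalg.norm(np.asarray(vec_a) - np.asarray(vec_b)))

# Example: decide whether two nose print embeddings belong to the same pet.
query = np.random.rand(128)        # feature vector to be verified (assumed 128-d)
archived = np.random.rand(128)     # feature vector stored in the archive
FIRST_PRESET_VALUE = 0.8           # assumed threshold
same_pet = euclidean_distance(query, archived) < FIRST_PRESET_VALUE
```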
  • the pet nose print identification method further includes:
  • a pet archive is established according to the pre-obtained pet feature vector, so as to verify the pet to be identified.
  • FIG. 3 is a flow chart of an application scenario of the pet nose print recognition method in an embodiment of the present application. As shown in FIG. 3, original pictures are collected, nose print detection and nose print recognition are performed on the collected images, and a pet archive is built from the recognized nose prints. When there is a pet nose print image to be identified, the recognized nose print is compared one by one with the nose prints of the archived pets to determine whether the pet to be identified is one of the archived pets.
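  • A sketch of the archive-and-verify flow of FIG. 3, assuming the two models are wrapped as detect_nose(image) returning a cropped nose image and embed(nose) returning a 128-d vector; these wrappers, the archive layout and the threshold are assumptions, not the patent's concrete implementation.

```python
import numpy as np

archive = {}  # pet_id -> stored feature vector

def enroll(pet_id, image, detect_nose, embed):
    """Build the pet archive from a recognized nose print."""
    archive[pet_id] = np.asarray(embed(detect_nose(image)))

def identify(image, detect_nose, embed, threshold=0.8):
    """Compare the nose print to be verified with each archived pet one by one."""
    query = np.asarray(embed(detect_nose(image)))
    best_id, best_dist = None, float("inf")
    for pet_id, stored in archive.items():
        dist = float(np.linalg.norm(query - stored))
        if dist < best_dist:
            best_id, best_dist = pet_id, dist
    # Only accept the match when the Euclidean distance is below the preset value.
    return (best_id, best_dist) if best_dist < threshold else (None, best_dist)
```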
  • the calculated Euclidean distance is greater than or equal to the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are different pets.
  • The pet nose print recognition method proposed in this embodiment first obtains an original picture with a pet nose print and identifies the nose print image in the original picture through the pre-trained nose print image recognition model; it then performs nose print feature extraction on the nose print image through the pre-trained nose print recognition model to obtain the feature vector of the nose print image to be verified, and calculates the Euclidean distance between the feature vector to be verified and the feature vector of each pet stored in the archive. When the calculated Euclidean distance is smaller than the first preset value, the pet in the original picture corresponding to the feature vector to be verified is determined to be the same pet as the correspondingly calculated pet in the archive.
  • The entire identification process only requires a camera to capture images of the pet's nose print, which reduces the cost of pet identification. Because nose prints are unique, the nose print in the nose print image is converted into a feature vector by the pre-trained nose print image recognition model and nose print recognition model and compared with the feature vector of each pet stored in the archive, so the pet can be accurately and uniquely identified; the identification process is accurate and the identification efficiency is high.
  • the step of training this nose print image recognition model includes:
  • obtaining original sample pictures annotated with the real coordinates of the pet nose print region;
  • randomly dividing the original sample pictures into a sample picture training set and a sample picture test set;
  • inputting the sample pictures in the sample picture training set into the nose print image recognition model to be trained to obtain the predicted coordinates of the pet nose print region of each sample picture in the training set, and adjusting the current network parameters of the loss function in the nose print image recognition model according to a first loss value between the predicted coordinates of the pet nose print region in the sample picture and the real coordinates of the pet nose print region;
  • inputting the sample pictures in the sample picture test set into the nose print image recognition model under the current network parameters to obtain the predicted coordinates of the pet nose print region of each test picture, calculating a second loss value between the predicted coordinates and the real coordinates of the pet nose print region, and cyclically adjusting the current network parameters and repeating this step until the first loss value decreases, the second loss value decreases, the fluctuation range of the first loss value is smaller than a preset first coordinate value, the fluctuation range of the second loss value is smaller than a preset second coordinate value, and the difference between the first loss value and the second loss value is smaller than a second preset value, at which point the current network parameters are taken as the network parameters of the loss function in the nose print image recognition model.
  • The loss function selected for the nose print image recognition model is the mean square error loss function, whose formula is:
  • $Loss(\omega,\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(p_i(\omega,\theta)-y_i\right)^2$
  • where $p_i(\omega,\theta)$ are the predicted coordinates, $y_i$ are the real coordinates, $N$ is the number of sampled examples, and $\omega$ and $\theta$ are network parameters.
  • In the process of adjusting the network parameters, forward propagation feeds the training set data into the network in batches to obtain the predicted coordinates, and the loss is computed against the real coordinates; back propagation then updates the network parameters along the direction of the loss gradient according to gradient descent. This step is repeated until the loss value gradually decreases to a stable level.
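  • The loop above can be illustrated with a toy numpy sketch in which a linear predictor stands in for the detection network; the model, data shapes and learning rate are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((64, 32))            # batch of 64 image feature vectors (assumed 32-d)
Y = rng.random((64, 4))             # real nose-region coordinates (x1, y1, x2, y2), normalized
W = np.zeros((32, 4))               # network parameters (the "omega/theta" of the loss)

def mse_loss(pred, target):
    """Mean square error between predicted and real coordinates."""
    return float(np.mean((pred - target) ** 2))

lr, prev_loss = 0.1, float("inf")
for step in range(1000):
    pred = X @ W                            # forward propagation: predicted coordinates
    loss = mse_loss(pred, Y)
    grad = 2.0 * X.T @ (pred - Y) / Y.size  # gradient of the MSE loss w.r.t. W
    W -= lr * grad                          # gradient-descent update of the parameters
    if abs(prev_loss - loss) < 1e-8:        # stop once the loss has stabilised
        break
    prev_loss = loss
```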
  • The sample picture test set is used to verify the training effect: the trained network and parameters are used to predict on the test set, the test loss is computed from the test set labels, and it is compared with the training loss to judge whether training is going well. Verification is generally carried out at the same time as training, so that the training effect can be judged dynamically and training terminated at the right time.
  • The judgment is based on the following: when the losses (errors) of both the training set and the test set are still decreasing, the model is in the underfitting stage, which corresponds to the early period of training; when the losses of the training set and the test set have gradually decreased to a stable level and the difference between them is small, the model is trained appropriately and training can be terminated; if training continues, the training loss will keep decreasing and fall far below the test loss, the model overfits, and the reliability of the prediction results becomes low.
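  • A small helper can encode these judgment rules (underfitting, moderate fit, overfitting) from the recorded loss curves; the window size and tolerances below are illustrative assumptions rather than values taken from the patent.

```python
def judge_training(train_losses, test_losses, window=5, stable_tol=1e-3, gap_tol=0.1):
    """Classify the current training stage from train/test loss histories."""
    if len(train_losses) < 2 * window or len(test_losses) < 2 * window:
        return "underfitting"  # too early to tell; keep training

    def still_dropping(losses):
        # Loss is "still dropping" if the recent window average is clearly lower.
        return (sum(losses[-2 * window:-window]) / window
                - sum(losses[-window:]) / window) > stable_tol

    train_drop, test_drop = still_dropping(train_losses), still_dropping(test_losses)
    gap = test_losses[-1] - train_losses[-1]
    if train_drop and test_drop:
        return "underfitting"            # both losses still falling: early stage
    if not train_drop and not test_drop and abs(gap) < gap_tol:
        return "stop"                    # both stable and close: training is moderate
    if gap > gap_tol:
        return "overfitting"             # training loss far below test loss
    return "keep training"
```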
  • The steps of training the nose print recognition model include: obtaining nose print sample pictures of each pet carrying nose print information, wherein multiple nose print sample pictures are collected for the same pet; processing the nose print sample pictures into positive sample pairs and negative sample pairs according to the pet object to which each nose print sample picture belongs; and inputting the positive sample pairs and negative sample pairs into the nose print recognition model to be trained, calculating a third loss value of the loss function in the model to be trained, and adjusting the network parameters of the loss function. This last step is repeated until the third loss value continuously decreases and its fluctuation range is smaller than a third preset value, at which point the nose print recognition model is judged to be trained.
  • Nose print recognition obtains the characteristic information of the nose print. Similar to the face recognition model FaceNet, which represents a face as a 128-dimensional feature vector used to distinguish different individuals, the present application uses Inception modules to build a deep learning network whose output is a 128-dimensional vector representing the nose print feature information. The Inception module follows a distributed idea: it splits a single-path network into several branches with different structures, so that the network can learn richer features.
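  • A compact PyTorch sketch of an Inception-style branch followed by a 128-dimensional embedding head, in the spirit of the description above; the branch widths, depth and input size are illustrative assumptions, not the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InceptionBlock(nn.Module):
    """Split a single path into parallel branches with different structures."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 32, kernel_size=1),
                                nn.Conv2d(32, 48, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                nn.Conv2d(16, 24, kernel_size=5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 24, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

class NosePrintEmbedder(nn.Module):
    """Map a cropped nose print image to a 128-dimensional feature vector."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
                                  nn.ReLU(inplace=True),
                                  nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
        self.inception = InceptionBlock(64)   # 32 + 48 + 24 + 24 = 128 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(128, 128)         # 128-d nose print feature vector

    def forward(self, x):
        x = self.pool(self.inception(self.stem(x))).flatten(1)
        return F.normalize(self.fc(x), p=2, dim=1)  # L2-normalised, as in FaceNet

# embedding = NosePrintEmbedder()(torch.randn(1, 3, 160, 160))  # -> shape (1, 128)
```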
  • Before the step of processing the nose print sample pictures into positive sample pairs and negative sample pairs, the step of training the nose print recognition model also includes: sorting the nose print sample pictures, specifically, recognizing the collected nose print pictures with the above nose print image recognition model, cropping the nose print region, and classifying the nose prints by individual, so that each folder contains the nose prints of only one individual, different individuals are placed in different folders, and all individual folders are placed under the same directory.
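  • From such a folder layout (one folder per individual, all folders under one directory), training triplets of anchor, positive and negative samples can be drawn as sketched below; the directory names and sampling strategy are assumptions, and at least two individuals are assumed to exist.

```python
import os
import random

def sample_triplets(root_dir, num_triplets=1000, seed=0):
    """Build triplets: anchor/positive from the same pet, negative from another pet."""
    rng = random.Random(seed)
    pets = {}
    for pet_id in os.listdir(root_dir):
        folder = os.path.join(root_dir, pet_id)
        if os.path.isdir(folder):
            images = [os.path.join(folder, f) for f in os.listdir(folder)]
            if len(images) >= 2:            # need at least two pictures per individual
                pets[pet_id] = images
    triplets = []
    pet_ids = list(pets)
    for _ in range(num_triplets):
        anchor_id = rng.choice(pet_ids)
        negative_id = rng.choice([p for p in pet_ids if p != anchor_id])
        anchor, positive = rng.sample(pets[anchor_id], 2)
        triplets.append((anchor, positive, rng.choice(pets[negative_id])))
    return triplets
```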
  • The loss function selected for the nose print recognition model is the triplet loss function, whose formula is:
  • $L=\sum_{i=1}^{N}\max\left(\left\|f(x_i^a)-f(x_i^p)\right\|_2^2-\left\|f(x_i^a)-f(x_i^n)\right\|_2^2+\alpha,\;0\right)$
  • where $f(x_i^p)$ is the output for the positive sample, $f(x_i^n)$ is the output for the negative sample, $f(x_i^a)$ is the output for the target (anchor) sample, $\left\|f(x_i^a)-f(x_i^p)\right\|_2^2$ is the distance between the target sample and the positive sample, $\left\|f(x_i^a)-f(x_i^n)\right\|_2^2$ is the distance between the target sample and the negative sample, and $\alpha$ is a supplementary (margin) parameter.
  • Similar to the training process of the detection model, the training data is processed into positive sample pairs and negative sample pairs and input into the network to obtain feature vectors; the triplet loss is then calculated and the network parameters are updated by gradient descent, in the direction that reduces the loss. The ultimate goal is to make the loss gradually decrease to a stable level.
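  • A minimal PyTorch version of this loss computation is sketched below; the margin value is an assumption, and torch.nn.TripletMarginLoss provides a similar built-in (using non-squared distances by default).

```python
import torch

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Sum of max(d(a,p) - d(a,n) + alpha, 0) over the batch, with squared L2 distances."""
    d_pos = ((anchor - positive) ** 2).sum(dim=1)   # distance to the positive sample
    d_neg = ((anchor - negative) ** 2).sum(dim=1)   # distance to the negative sample
    return torch.clamp(d_pos - d_neg + alpha, min=0).sum()

# Example usage with a batch of embeddings from the 128-d network sketched earlier:
# loss = triplet_loss(emb_a, emb_p, emb_n); loss.backward()  # then a gradient-descent step
```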
  • The testing process of the nose print recognition model is similar to that of the nose print image recognition model described above, and the testing and training processes can be performed simultaneously, so the details are not repeated here.
  • a pet nose print identification device is provided, and the pet nose print identification device is in one-to-one correspondence with the pet nose print identification method in the above embodiment.
  • the pet nose print recognition device includes an image recognition module 11 , a feature extraction module 12 , a calculation module 13 and a pet determination module 14 .
  • the detailed description of each functional module is as follows:
  • the image recognition module 11 is used to obtain the original picture with the pet's nose print, and identify the nose print image in the original picture through the pre-trained nose print image recognition model;
  • the feature extraction module 12 is used for performing nose print feature extraction on the nose print image through a pre-trained nose print recognition model to obtain the feature vector to be verified of the nose print image;
  • the calculation module 13 is used to respectively calculate the Euclidean distance between the feature vector to be verified and the feature vector of each pet stored in the archive;
  • the pet determination module 14 is configured to determine that the pet in the original picture corresponding to the feature vector to be verified is the same pet as the correspondingly calculated pet in the archive when the calculated Euclidean distance is smaller than the first preset value.
  • the device also includes:
  • the sample image acquisition module is used to acquire the original sample image marked with the real coordinates of the pet nose print area
  • the classification module is used to randomly divide the original sample image into a sample image training set and a sample image testing set;
  • the parameter adjustment module is used to input the sample pictures in the sample picture training set into the nose print image recognition model to be trained, obtain the predicted coordinates of the pet nose print region of each sample picture in the training set, and adjust the current network parameters of the loss function in the nose print image recognition model according to a first loss value between the predicted coordinates of the pet nose print region in the sample picture and the real coordinates of the pet nose print region;
  • the loop module is used to input the sample pictures in the sample picture test set into the nose print image recognition model corresponding to the current network parameters, obtain the predicted coordinates of the pet nose print region of each test picture in the test set, calculate a second loss value between the predicted coordinates of the pet nose print region in the test picture and the real coordinates of the pet nose print region, and cyclically adjust the current network parameters and repeat this step until the first loss value decreases, the second loss value decreases, the fluctuation range of the first loss value is smaller than the preset first coordinate value, the fluctuation range of the second loss value is smaller than the preset second coordinate value, and the difference between the first loss value and the second loss value is smaller than the second preset value, at which point the current network parameters are taken as the network parameters of the loss function in the nose print image recognition model.
  • the loss function selected by the nose print image recognition model is the mean square error loss function.
  • the device also includes:
  • the nose print sample acquisition module is used to obtain the nose print sample pictures of each pet carrying the nose print information, wherein the nose print sample pictures of the same pet include multiple pieces;
  • a positive and negative sample processing module configured to process the nose print sample image into a positive sample pair and a negative sample pair according to the pet object to which the nose print sample image belongs;
  • the training module is used to input the positive sample pairs and the negative sample pairs into the nose print recognition model to be trained, calculate the third loss value of the loss function in the model to be trained, and adjust the network parameters of the loss function in the nose print recognition model; this step is repeated until the third loss value continuously decreases and its fluctuation range is smaller than the third preset value, at which point the training of the nose print recognition model is judged to be completed.
  • The loss function selected for the nose print recognition model is a triplet loss function.
  • each module in the above-mentioned pet nose print recognition device can be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided, and the computer device may be a terminal, and its internal structure diagram may be as shown in FIG. 7 .
  • the computer equipment includes a processor, memory, a network interface, a display screen, and an input device connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer-readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with an external server over a network connection.
  • the computer-readable instructions when executed by a processor, implement a method for identifying a pet's nose print.
  • A computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the steps of the pet nose print identification method in the above embodiment are implemented, for example, steps S101 to S104 shown in FIG. 2 and other extensions of the method and of the related steps. Alternatively, when the processor executes the computer-readable instructions, the functions of each module/unit of the pet nose print identification device in the above embodiment are implemented, such as the functions of modules 11 to 14 shown in FIG. 6. To avoid repetition, details are not repeated here.
  • the processor implements the following steps when executing the computer-readable instructions:
  • When the calculated Euclidean distance is less than the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet.
  • The processor also implements the following steps when executing the computer-readable instructions:
  • the original sample pictures are randomly divided into a sample picture training set and a sample picture test set;
  • when the fluctuation range of the first loss value is smaller than the preset first coordinate value, the fluctuation range of the second loss value is smaller than the preset second coordinate value, and the difference between the first loss value and the second loss value is smaller than the second preset value, the current network parameters are used as the network parameters of the loss function in the nose print image recognition model.
  • the loss function selected by the nose print image recognition model is a mean square error loss function.
  • The processor also implements the following steps when executing the computer-readable instructions:
  • adjusting the network parameters of the loss function in the nose print recognition model, and repeating this step until the third loss value continuously decreases and its fluctuation range is smaller than the third preset value, at which point the training of the nose print recognition model is judged to be completed.
  • The loss function selected for the nose print recognition model is a triplet loss function.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • the processor is the control center of the computer device, and uses various interfaces and lines to connect various parts of the entire computer device.
  • The memory may be used to store the computer-readable instructions and/or modules; the processor implements the various functions of the computer device by running or executing the computer-readable instructions and/or modules stored in the memory and by calling the data stored in the memory.
  • The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the mobile phone (such as audio data, video data, etc.).
  • the memory may be integrated in the processor, or may be provided separately from the processor.
  • One or more readable storage media storing computer-readable instructions are provided; the readable storage media may be non-volatile or volatile. When the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the pet nose print recognition method in the above embodiment, for example, steps S101 to S104 shown in FIG. 2 and other extensions of the method and of the related steps.
  • Alternatively, when the computer-readable instructions are executed by the processor, the functions of each module/unit of the pet nose print identification device in the above embodiments, such as the functions of modules 11 to 14 shown in FIG. 6, are implemented.
  • the computer-readable instructions when executed by one or more processors, cause the one or more processors to perform the following steps:
  • When the calculated Euclidean distance is less than the first preset value, it is determined that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet.
  • the one or more processors further perform the following steps:
  • the original sample pictures are randomly divided into a sample picture training set and a sample picture test set;
  • when the fluctuation range of the first loss value is smaller than the preset first coordinate value, the fluctuation range of the second loss value is smaller than the preset second coordinate value, and the difference between the first loss value and the second loss value is smaller than the second preset value, the current network parameters are used as the network parameters of the loss function in the nose print image recognition model.
  • the loss function selected by the nose print image recognition model is a mean square error loss function.
  • the one or more processors further perform the following steps:
  • adjusting the network parameters of the loss function in the nose print recognition model, and repeating this step until the third loss value continuously decreases and its fluctuation range is smaller than the third preset value, at which point the training of the nose print recognition model is judged to be completed.
  • The loss function selected for the nose print recognition model is a triplet loss function.
  • The pet nose print identification method, device, computer equipment and storage medium proposed in this embodiment first obtain an original picture with a pet nose print and identify the nose print image in the original picture through a pre-trained nose print image recognition model; nose print feature extraction is then performed on the nose print image through the pre-trained nose print recognition model to obtain the feature vector of the nose print image to be verified, and the Euclidean distance between the feature vector to be verified and the feature vector of each pet stored in the archive is calculated. When the calculated Euclidean distance is smaller than the first preset value, the pet in the original picture corresponding to the feature vector to be verified is determined to be the same pet as the correspondingly calculated pet in the archive.
  • The entire identification process only requires a camera to capture images of the pet's nose print, which reduces the cost of pet identification. Because nose prints are unique, the nose print in the nose print image is converted into a feature vector by the pre-trained nose print image recognition model and nose print recognition model and compared with the feature vector of each pet stored in the archive, so the pet can be accurately and uniquely identified; the identification process is accurate and the identification efficiency is high.
  • Nonvolatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A pet nose print identification method and device, a computer device, and a storage medium, applied to the field of artificial intelligence technology and used to solve the technical problems of low identification accuracy and high identification cost when identifying furry pets in the prior art. The method comprises: obtaining an original picture with a pet nose print, and identifying the nose print image in the original picture by means of a pre-trained nose print image recognition model (S101); performing nose print feature extraction on the nose print image by means of a pre-trained nose print recognition model to obtain a feature vector of the nose print image to be verified (S102); respectively calculating the Euclidean distance between the feature vector to be verified and the feature vector of each pet stored in an archive (S103); and when the calculated Euclidean distance is smaller than a first preset value, determining that the pet in the original picture corresponding to the feature vector to be verified and the correspondingly calculated pet in the archive are the same pet (S104).

Description

宠物鼻纹识别方法、装置、计算机设备及存储介质 技术领域
本申请涉及人工智能技术领域,尤其涉及一种宠物鼻纹识别方法、装置、计算机设备及存储介质。
背景技术
随着人们生活质量的提升,宠物在人们生活中扮演着越来越重要的角色,由此衍生出宠物管理、宠物保险等应用场景。目前对于宠物进行识别主要有两种识别方式,分别为生物识别技术和非生物识别技术。其中,非生物识别技术又称RFID(Radio Frequency Identification,射频识别技术)宠物识别技术,识别芯片靠扫描仪发出的信号产生感应电流,靠这些能量,将存储在芯片中的信息发送出去,并被扫描仪接收和识别。发明人意识到目前这种动物识别的方法需要在宠物体内进行植入,且需要专业设备发射信号才能进行识别,对宠物可能造成伤害,且使用不方便。
其中,生物识别技术包括面部识别、虹膜识别、声纹识别和步态识别等等。面部识别在人脸识别中精度较高,但是对于多毛发的宠物面部识别精度较低。虹膜识别虽然具备唯一性,且精度高,但是需要专业设备采集图像,开发成本高,使用不方便。声纹和步态也具备唯一性,理论上可行,但技术不够成熟。
目前还没有一种比较好的对宠物进行识别的技术,使得在对宠物进行识别时既不对宠物造成伤害,又在不投入过高成本的基础上对宠物进行精准识别。
发明内容
本申请实施例提供一种宠物鼻纹识别方法、装置、计算机设备及存储介质,以解决现有技术对多毛的宠物进行识别时识别精度低、识别成本高的技术问题。
一种宠物鼻纹识别方法,该方法包括:
获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像;
通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量;
分别计算该待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。
一种宠物鼻纹识别装置,包括:
图像识别模块,用于获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像;
特征提取模块,用于通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量;
计算模块,用于分别计算该待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
宠物确定模块,用于当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。
一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,所述处理器执行所述计算机可读指令时实现如下步骤:
获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
一个或多个存储有计算机可读指令的可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤:
获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
本申请提出的宠物鼻纹识别方法、装置、计算机设备及存储介质,对动物的整个识别过程只需要通过摄像头采集宠物带有鼻纹的图像即可,降低了宠物识别的成本,且由于鼻纹具有唯一性,通过预先训练好的鼻纹图像识别模型和鼻纹识别模型对该鼻纹图像中的鼻纹进行向量识别,并通过与档案库中存储的各宠物的特征向量进行比对,即可以准确的识别出该宠物的唯一性,识别过程精准,且识别效率高。
附图说明
为了更清楚地说明本申请实施例的技术方案,下面将对本申请实施例的描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施 例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一实施例中宠物鼻纹识别方法的一应用环境示意图;
图2是本申请一实施例中宠物鼻纹识别方法的一流程图;
图3是本申请一实施例中宠物鼻纹识别方法的一应用场景流程图;
图4是本申请一实施例中训练鼻纹图像识别模型的流程图;
图5是本申请一实施例中训练鼻纹识别模型的流程图;
图6是本申请一实施例中宠物鼻纹识别装置的结构示意图;
图7是本申请一实施例中计算机设备的一示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请提供的宠物鼻纹识别方法,可应用在如图1的应用环境中,其中,该计算机设备包括但不限于各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备。
在一实施例中,如图2所示,提供一种宠物鼻纹识别方法,以该方法应用在图1中的计算机设备为例进行说明,包括如下步骤S101至S104。
S101、获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像。
其中,该带有宠物鼻纹的原始图片为需要识别的宠物鼻纹图片,可以从网上爬取,也可以通过用户的手机进行拍摄得到。
进一步地,该方法还包括:
根据识别到的所述鼻纹对所述原始图片进行裁剪;
通过预先训练好的鼻纹图像识别模型识别该裁剪后的原始图片中的鼻纹图像。
其中,图4是本申请一实施例中训练鼻纹图像识别模型的流程图,如图4所示,首先对样本图像进行搜集,在对搜集的样本图像进行人工标注,主要是标注样本图像中的鼻纹区域,然后将该携带有标注信息的样本图像分为训练样本集和测试样本集,通过训练样本集对该鼻纹图像识别模型进行训练,通过该测试样本集对该鼻纹图像识别模型进行测试,通过训练结果和测试结果综合判定该鼻纹图像识别模型是否训练达标。
S102、通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量。
图5是本申请一实施例中训练鼻纹识别模型的流程图,如图5所示,首先对鼻纹样本 图像进行采集,在对搜集的鼻纹样本图像进行整理,然后将整理后的鼻纹样本图像分为鼻纹训练样本集和鼻纹测试样本集,通过鼻纹训练样本集对该鼻纹识别模型进行训练,通过该鼻纹测试样本集对该鼻纹识别模型进行测试,通过训练结果和测试结果综合判定该鼻纹识别模型是否训练达标。
S103、分别计算该待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离。
欧氏距离是一个通常采用的距离定义,指在m维空间中两个点之间的真实距离,或者向量的自然长度,即该点到原点的距离。在二维和三维空间中的欧氏距离就是两点之间的实际距离。
S104、当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。
在其中一个实施例中,该宠物鼻纹识别方法还包括:
根据预先得到的宠物的特征向量建立宠物档案库,以便对待识别的宠物进行验证。
图3是本申请一实施例中宠物鼻纹识别方法的一应用场景流程图,如图3所示,通过对原始图片进行采集,然后对采集的图像进行鼻纹检测和鼻纹识别,根据识别的宠物鼻纹对宠物进行建档,在有需要识别的宠物鼻纹图像时,将识别的鼻纹与建档宠物的鼻纹进行一一比对,判断待识别的宠物是否为建档宠物中的某一只。
可以理解地,当计算的该欧氏距离大于或者等于该第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为不同宠物。
本实施例提出的宠物鼻纹识别方法,首先获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像,然后通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量,分别计算该待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离,当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。整个识别过程只需要通过摄像头采集宠物带有鼻纹的图像即可,降低了宠物识别的成本,且由于鼻纹具有唯一性,通过预先训练好的鼻纹图像识别模型和鼻纹识别模型对该鼻纹图像中的鼻纹进行向量识别,并通过与档案库中存储的各宠物的特征向量进行比对,即可以准确的识别出该宠物的唯一性,识别过程精准,且识别效率高。
具体地,训练该鼻纹图像识别模型的步骤包括:
获取标注有宠物鼻纹区域真实坐标的原始样本图片;
将该原始样本图片随机分为样本图片训练集和样本图片测试集;
将该样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到该样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据该样本图片中宠物鼻纹区域预测坐标与该宠物鼻纹区域真实坐标的第一损失值调整该鼻纹图像识别模型中损失函数的当前网络参数;
将该样本图片测试集中的样本图片输入至该当前网络参数对应的鼻纹图像识别模型,得到该样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算该测试图片中宠物鼻纹区域预测坐标与该宠物鼻纹区域真实坐标的第二损失值,循环调整该当前网络参数至本步骤,直到该第一损失值减小、该第二损失值减小、该第一损失值的波动区间小于预设第一坐标值、该第二损失值的波动区间小于预设第二坐标值且该第一损失值和该第二损失值的差值小于第二预设值时,将当前的网络参数作为该鼻纹图像识别模型中损失函数的网络参数。
其中,该鼻纹图像识别模型选用的损失函数为均方差损失函数,该均方差损失函数的公式为:
$$Loss(\omega,\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(p_i(\omega,\theta)-y_i\right)^2$$
其中,p i(ω,θ)为预测的坐标,y i为真实坐标,N为抽取数量,ω和θ均为网络参数。
在调整该网络参数的过程中,正向传播即将训练集数据按批次输入网络得到预测坐标,然后与真实坐标求损失,反向传播即根据梯度下降法,沿着损失的梯度方向反向更新网络参数。不断循环本步骤,直至损失值逐步减小至稳定。
进一步地,样本图片测试集用于测试集验证训练效果,用训练过的网络及参数预测测试集,并结合测试集标签求出验证集的损失,然后与训练集损失对比,判断训练效果的好坏。验证过程一般和训练过程同时进行,动态判断训练效果及终止训练。判断依据为:当训练集和测试集的损失(误差)均处于下降阶段时,属于欠拟合阶段,这个过程处于训练的前期;当训练集和测试集的损失(误差)逐步下降至趋于稳定,且两者相差不大,模型训练适度,可终止训练;若继续训练,训练集损失(误差)将逐步下降,且远低于测试集损失(误差),模型过拟合,预测结果可信度低。
当训练集和测试集的损失(误差)都很大时,属于欠拟合阶段,这个过程处于训练的前期;当训练集和测试集的损失(误差)趋于稳定,且都很小,两者相差不大,模型训练适度,可终止训练;当训练集损失(误差)很小,但验证集损失(误差)很大,模型过拟合,预测结果可信度低。
具体地,训练该鼻纹识别模型的步骤包括:
获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
根据该鼻纹样本图片所属的宠物对象将该鼻纹样本图片处理为正样本对和负样本对;
将该正样本对和该负样本对输入至该待训练鼻纹识别模型,计算该待训练该鼻纹识别模型中损失函数的第三损失值,调整该鼻纹识别模型中损失函数的网络参数,循环本步骤直到该第三损失值不断减小且该第三损失值的波动区间小于第三预设值时,判断该鼻纹识别模型训练完成。
鼻纹识别是获取鼻纹特征信息。类似人脸识别模型FaceNet,将人脸表征为128维特 征向量,用于区别不同个体的人。本专利采用Inception模块,创建深度学习网络,输出维度为128维,用于表征鼻纹特征信息。Inception模块采用分布式的思想,将单向网络分割为多个不同结构的网络,使得网络能够学习到更丰富的特征。
其中,可以通过人工采集携带有鼻纹信息的各个宠物的大量鼻纹样本图片,鼻纹拍摄需要足够清晰,单个个体采集图片数足够多。
进一步地,在根据该鼻纹样本图片所属的宠物对象将该鼻纹样本图片处理为正样本对和负样本对的步骤之前,训练该鼻纹识别模型的步骤还包括:
对该鼻纹样本图片进行图像整理,具体为用上述鼻纹图像识别模型识别采集到的鼻纹图片,裁剪鼻纹区域,并按个体对鼻纹进行归类,单个文件夹内仅放同一个个体鼻纹,不同个体鼻纹归属不同文件夹,且各个个体放入同一目录下。
进一步地,该鼻纹识别模型选用的损失函数为三元组损失函数,该三元组损失函数的公式为:
$$L=\sum_{i=1}^{N}\max\left(\left\|f(x_i^a)-f(x_i^p)\right\|_2^2-\left\|f(x_i^a)-f(x_i^n)\right\|_2^2+\alpha,\;0\right)$$
其中，$f(x_i^p)$为正样本输出值，$f(x_i^n)$为负样本输出值，$f(x_i^a)$为目标样本预测值，$\left\|f(x_i^a)-f(x_i^p)\right\|_2^2$为目标样本与正样本距离，$\left\|f(x_i^a)-f(x_i^n)\right\|_2^2$为目标样本与负样本距离，α为补充参数。
类似检测的训练过程,将训练数据处理成正样本对、负样本对的形式,然后输入网络得到特征向量,并计算三元组损失,采用梯度下降的方式更新网络参数,最终目标是使损失逐步减小至稳定。
将训练数据处理成正样本对、负样本对的形式输入,得到损失值后,按损失减小的方向更新参数,最终目标是使损失逐步减小至稳定。
其中,对该鼻纹识别模型的测试过程与对上述鼻纹图像识别模型类似,检测与训练过程均可同步进行,在此不再赘述。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
在一实施例中,提供一种宠物鼻纹识别装置,该宠物鼻纹识别装置与上述实施例中宠物鼻纹识别方法一一对应。如图6所示,该宠物鼻纹识别装置包括图像识别模块11、特征提取模块12、计算模块13和宠物确定模块14。各功能模块详细说明如下:
图像识别模块11,用于获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像;
特征提取模块12,用于通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量;
计算模块13,用于分别计算该待验证的特征向量与档案库中存储的各宠物的特征向 量的欧氏距离;
宠物确定模块14,用于当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。
进一步地,该装置还包括:
样本图片获取模块,用于获取标注有宠物鼻纹区域真实坐标的原始样本图片;
分类模块,用于将该原始样本图片随机分为样本图片训练集和样本图片测试集;
参数调整模块,用于将该样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到该样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据该样本图片中宠物鼻纹区域预测坐标与该宠物鼻纹区域真实坐标的第一损失值调整该鼻纹图像识别模型中损失函数的当前网络参数;
循环模块,用于将该样本图片测试集中的样本图片输入至该当前网络参数对应的鼻纹图像识别模型,得到该样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算该测试图片中宠物鼻纹区域预测坐标与该宠物鼻纹区域真实坐标的第二损失值,循环调整该当前网络参数至本步骤,直到该第一损失值减小、该第二损失值减小、该第一损失值的波动区间小于预设第一坐标值、该第二损失值的波动区间小于预设第二坐标值且该第一损失值和该第二损失值的差值小于第二预设值时,将当前的网络参数作为该鼻纹图像识别模型中损失函数的网络参数。
其中,该鼻纹图像识别模型选用的损失函数为均方差损失函数。
进一步地,该装置还包括:
鼻纹样本获取模块,用于获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
正负样本处理模块,用于根据该鼻纹样本图片所属的宠物对象将该鼻纹样本图片处理为正样本对和负样本对;
训练模块,用于将该正样本对和该负样本对输入至该待训练鼻纹识别模型,计算该待训练该鼻纹识别模型中损失函数的第三损失值,调整该鼻纹识别模型中损失函数的网络参数,循环本步骤直到该第三损失值不断减小且该第三损失值的波动区间小于第三预设值时,判断该鼻纹识别模型训练完成。
其中,该鼻纹识别模型选用的损失函数为三元组损失函数。
其中上述模块/单元中的“第一”和“第二”的意义仅在于将不同的模块/单元加以区分,并不用于限定哪个模块/单元的优先级更高或者其它的限定意义。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或模块的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或模块,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或模块,本申请中所出现的模块的划分,仅仅是一种逻辑上的划分,实际应用中实现时可以有另外的划分方式。
关于宠物鼻纹识别装置的具体限定可以参见上文中对于宠物鼻纹识别方法的限定,在此不再赘述。上述宠物鼻纹识别装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,该计算机设备可以是终端,其内部结构图可以如图7所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、显示屏和输入装置。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机可读指令。该内存储器为非易失性存储介质中的操作系统和计算机可读指令的运行提供环境。该计算机设备的网络接口用于与外部服务器通过网络连接通信。该计算机可读指令被处理器执行时以实现一种宠物鼻纹识别方法。
在一个实施例中,提供了一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机可读指令,处理器执行计算机可读指令时实现上述实施例中宠物鼻纹识别方法的步骤,例如图2所示的步骤101至步骤104及该方法的其它扩展和相关步骤的延伸。或者,处理器执行计算机可读指令时实现上述实施例中宠物鼻纹识别装置的各模块/单元的功能,例如图6所示模块11至模块14的功能。为避免重复,这里不再赘述。具体地,所述处理器执行所述计算机可读指令时实现如下步骤:
获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
进一步地,所述处理器执行所述计算机可读指令时还实现如下步骤:
获取标注有宠物鼻纹区域真实坐标的原始样本图片;
将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当 前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
进一步地,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
进一步地,所述处理器执行所述计算机可读指令时还实现如下步骤:
获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片处理为正样本对和负样本对;
将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
进一步地,所述鼻纹识别模型选用的损失函数为三元组损失函数。
所述处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等,所述处理器是所述计算机装置的控制中心,利用各种接口和线路连接整个计算机装置的各个部分。
所述存储器可用于存储所述计算机可读指令和/或模块,所述处理器通过运行或执行存储在所述存储器内的计算机可读指令和/或模块,以及调用存储在存储器内的数据,实现所述计算机装置的各种功能。所述存储器可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、视频数据等)等。
所述存储器可以集成在所述处理器中,也可以与所述处理器分开设置。
在一个实施例中,提供了一个或多个存储有计算机可读指令的可读存储介质,所述计算机可读存储介质可以是非易失性,也可以是易失性,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行时实现上述实施例中宠物鼻纹识别方法的步骤,例如图2所示的步骤101至步骤104及该方法的其它扩展和相关步骤的延伸。或者,计算机可读指令被处理器执行时实现上述实施例中宠物鼻纹识别装置的各模块/单元的功能,例如图6所示模块11至模块14的功能。具体地,该计算机可读指令被一个或多个处 理器执行时,使得所述一个或多个处理器执行如下步骤其中:
获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
进一步地,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
获取标注有宠物鼻纹区域真实坐标的原始样本图片;
将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
进一步地,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
进一步地,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片处理为正样本对和负样本对;
将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
进一步地,所述鼻纹识别模型选用的损失函数为三元组损失函数。
本实施例提出的宠物鼻纹识别方法、装置、计算机设备及存储介质,首先获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别该原始图片中的鼻纹图像,然后通过预先训练好的鼻纹识别模型对该鼻纹图像进行鼻纹特征提取,得到该鼻纹图像的待验证的特征向量,分别计算该待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离,当计算的该欧氏距离小于第一预设值时,确定该待验证特征向量对应原始图片中的宠物与该档案库中对应计算的宠物为同一宠物。整个识别过程只需要通过摄像头采集宠物带有鼻纹的图像即可,降低了宠物识别的成本,且由于鼻纹具有唯一性,通过预先训练好的鼻纹图像识别模型和鼻纹识别模型对该鼻纹图像中的鼻纹进行向量识别,并通过与档案库中存储的各宠物的特征向量进行比对,即可以准确的识别出该宠物的唯一性,识别过程精准,且识别效率高。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (20)

  1. 一种宠物鼻纹识别方法,其中,所述方法包括:
    获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
    通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
    分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
    当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
  2. 根据权利要求1所述的宠物鼻纹识别方法,其中,训练所述鼻纹图像识别模型的步骤包括:
    获取标注有宠物鼻纹区域真实坐标的原始样本图片;
    将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
    将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
    将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
  3. 根据权利要求2所述的宠物鼻纹识别方法,其中,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
  4. 根据权利要求1所述的宠物鼻纹识别方法,其中,训练所述鼻纹识别模型的步骤包括:
    获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
    根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片处理为正样本对和负样本对;
    将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参 数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
  5. 根据权利要求4所述的宠物鼻纹识别方法,其中,所述鼻纹识别模型选用的损失函数为三元组损失函数。
  6. 一种宠物鼻纹识别装置,其中,包括:
    图像识别模块,用于获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
    特征提取模块,用于通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
    计算模块,用于分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
    宠物确定模块,用于当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
  7. 根据权利要求6所述的宠物鼻纹识别装置,其中,所述装置还包括:
    样本图片获取模块,用于获取标注有宠物鼻纹区域真实坐标的原始样本图片;
    分类模块,用于将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
    参数调整模块,用于将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
    循环模块,用于将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
  8. 根据权利要求7所述的宠物鼻纹识别装置,其中,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
  9. 根据权利要求6所述的宠物鼻纹识别装置,其中,所述装置还包括:
    鼻纹样本获取模块,用于获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
    正负样本处理模块,用于根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片 处理为正样本对和负样本对;
    训练模块,用于将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
  10. 根据权利要求9所述的宠物鼻纹识别装置,其中,所述鼻纹识别模型选用的损失函数为三元组损失函数。
  11. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现如下步骤:
    获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
    通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
    分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
    当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
  12. 根据权利要求11所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取标注有宠物鼻纹区域真实坐标的原始样本图片;
    将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
    将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
    将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
  13. 根据权利要求12所述的计算机设备,其中,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
  14. 根据权利要求11所述的计算机设备,其中,所述处理器执行所述计算机可读指令时还实现如下步骤:
    获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
    根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片处理为正样本对和负样本对;
    将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
  15. 根据权利要求14所述的计算机设备,其中,所述鼻纹识别模型选用的损失函数为三元组损失函数。
  16. 一个或多个存储有计算机可读指令的可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤其中:
    获取带有宠物鼻纹的原始图片,通过预先训练好的鼻纹图像识别模型识别所述原始图片中的鼻纹图像;
    通过预先训练好的鼻纹识别模型对所述鼻纹图像进行鼻纹特征提取,得到所述鼻纹图像的待验证的特征向量;
    分别计算所述待验证的特征向量与档案库中存储的各宠物的特征向量的欧氏距离;
    当计算的所述欧氏距离小于第一预设值时,确定所述待验证特征向量对应原始图片中的宠物与所述档案库中对应计算的宠物为同一宠物。
  17. 根据权利要求16所述的一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    获取标注有宠物鼻纹区域真实坐标的原始样本图片;
    将所述原始样本图片随机分为样本图片训练集和样本图片测试集;
    将所述样本图片训练集中的样本图片输入待训练的鼻纹图像识别模型,得到所述样本图片训练集中各样本图片的宠物鼻纹区域预测坐标,根据所述样本图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第一损失值调整所述鼻纹图像识别模型中损失函数的当前网络参数;
    将所述样本图片测试集中的样本图片输入至所述当前网络参数对应的鼻纹图像识别模型,得到所述样本图片测试集中各测试图片的宠物鼻纹区域预测坐标,计算所述测试图片中宠物鼻纹区域预测坐标与所述宠物鼻纹区域真实坐标的第二损失值,循环调整所述当前网络参数至本步骤,直到所述第一损失值减小、所述第二损失值减小、所述第一损失值 的波动区间小于预设第一坐标值、所述第二损失值的波动区间小于预设第二坐标值且所述第一损失值和所述第二损失值的差值小于第二预设值时,将当前的网络参数作为所述鼻纹图像识别模型中损失函数的网络参数。
  18. 根据权利要求17所述的一个或多个存储有计算机可读指令的可读存储介质,其中,所述鼻纹图像识别模型选用的损失函数为均方差损失函数。
  19. 根据权利要求16所述的一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器还执行如下步骤:
    获取携带有鼻纹信息的各个宠物的鼻纹样本图片,其中,同一宠物的鼻纹样本图片包括多张;
    根据所述鼻纹样本图片所属的宠物对象将所述鼻纹样本图片处理为正样本对和负样本对;
    将所述正样本对和所述负样本对输入至所述待训练鼻纹识别模型,计算所述待训练所述鼻纹识别模型中损失函数的第三损失值,调整所述鼻纹识别模型中损失函数的网络参数,循环本步骤直到所述第三损失值不断减小且所述第三损失值的波动区间小于第三预设值时,判断所述鼻纹识别模型训练完成。
  20. 根据权利要求19所述的一个或多个存储有计算机可读指令的可读存储介质,其中,所述鼻纹识别模型选用的损失函数为三元组损失函数。
PCT/CN2020/122025 2020-10-20 2020-10-20 宠物鼻纹识别方法、装置、计算机设备及存储介质 WO2022082401A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/122025 WO2022082401A1 (zh) 2020-10-20 2020-10-20 宠物鼻纹识别方法、装置、计算机设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/122025 WO2022082401A1 (zh) 2020-10-20 2020-10-20 宠物鼻纹识别方法、装置、计算机设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022082401A1 true WO2022082401A1 (zh) 2022-04-28

Family

ID=81289542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122025 WO2022082401A1 (zh) 2020-10-20 2020-10-20 宠物鼻纹识别方法、装置、计算机设备及存储介质

Country Status (1)

Country Link
WO (1) WO2022082401A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115226046A (zh) * 2022-06-02 2022-10-21 新瑞鹏宠物医疗集团有限公司 注意事项推送方法及相关设备
CN115684316A (zh) * 2022-11-08 2023-02-03 广州市中耕信息技术有限公司 基于mos气敏传感器阵列的仿生电子鼻的气味识别方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190446A (zh) * 2018-07-06 2019-01-11 西北工业大学 基于三元组聚焦损失函数的行人再识别方法
CN109784166A (zh) * 2018-12-13 2019-05-21 北京飞搜科技有限公司 行人重识别的方法及装置
CN110532920A (zh) * 2019-08-21 2019-12-03 长江大学 基于FaceNet方法的小数量数据集人脸识别方法
CN111695462A (zh) * 2020-05-29 2020-09-22 平安科技(深圳)有限公司 一种人脸识别方法、装置、存储介质和服务器
CN112434556A (zh) * 2020-10-20 2021-03-02 深圳市悦保科技有限公司 宠物鼻纹识别方法、装置、计算机设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190446A (zh) * 2018-07-06 2019-01-11 西北工业大学 基于三元组聚焦损失函数的行人再识别方法
CN109784166A (zh) * 2018-12-13 2019-05-21 北京飞搜科技有限公司 行人重识别的方法及装置
CN110532920A (zh) * 2019-08-21 2019-12-03 长江大学 基于FaceNet方法的小数量数据集人脸识别方法
CN111695462A (zh) * 2020-05-29 2020-09-22 平安科技(深圳)有限公司 一种人脸识别方法、装置、存储介质和服务器
CN112434556A (zh) * 2020-10-20 2021-03-02 深圳市悦保科技有限公司 宠物鼻纹识别方法、装置、计算机设备及存储介质

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115226046A (zh) * 2022-06-02 2022-10-21 新瑞鹏宠物医疗集团有限公司 注意事项推送方法及相关设备
CN115226046B (zh) * 2022-06-02 2024-02-13 新瑞鹏宠物医疗集团有限公司 注意事项推送方法及相关设备
CN115684316A (zh) * 2022-11-08 2023-02-03 广州市中耕信息技术有限公司 基于mos气敏传感器阵列的仿生电子鼻的气味识别方法
CN115684316B (zh) * 2022-11-08 2023-12-22 中耕生态环境科技(广州)有限公司 基于mos气敏传感器阵列的仿生电子鼻的气味识别方法

Similar Documents

Publication Publication Date Title
CN112434556A (zh) 宠物鼻纹识别方法、装置、计算机设备及存储介质
CN110443110B (zh) 基于多路摄像的人脸识别方法、装置、终端及存储介质
WO2020252917A1 (zh) 一种模糊人脸图像识别方法、装置、终端设备及介质
CN109271870B (zh) 行人重识别方法、装置、计算机设备及存储介质
WO2020037898A1 (zh) 人脸特征点检测方法、装置、计算机设备及存储介质
CN111191568B (zh) 翻拍图像识别方法、装置、设备及介质
WO2020119419A1 (zh) 基于图像识别的测试方法、装置、计算机设备及存储介质
CN111476268A (zh) 翻拍识别模型训练、图像识别方法、装置、设备及介质
CN107463865B (zh) 人脸检测模型训练方法、人脸检测方法及装置
WO2021114612A1 (zh) 目标重识别方法、装置、计算机设备和存储介质
WO2022082401A1 (zh) 宠物鼻纹识别方法、装置、计算机设备及存储介质
US11875599B2 (en) Method and device for detecting blurriness of human face in image and computer-readable storage medium
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
CN106056083B (zh) 一种信息处理方法及终端
CN110969045B (zh) 一种行为检测方法、装置、电子设备和存储介质
WO2021082045A1 (zh) 微笑表情检测方法、装置、计算机设备及存储介质
CN111582027B (zh) 身份认证方法、装置、计算机设备和存储介质
CN110008943B (zh) 一种图像处理方法及装置、一种计算设备及存储介质
CN111291773A (zh) 特征识别的方法及装置
CN110766077A (zh) 证据链图像中特写图筛选方法、装置和设备
WO2021189770A1 (zh) 基于人工智能的图像增强处理方法、装置、设备及介质
CN111353364A (zh) 一种人脸动态识别方法及装置、电子设备
CN111144285A (zh) 胖瘦程度识别方法、装置、设备及介质
CN111881740A (zh) 人脸识别方法、装置、电子设备及介质
CN113283388B (zh) 活体人脸检测模型的训练方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20957984

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20957984

Country of ref document: EP

Kind code of ref document: A1