WO2022213396A1 - Cat facial individual recognition apparatus, method, computer device, and storage medium - Google Patents

Cat facial individual recognition apparatus, method, computer device, and storage medium

Info

Publication number
WO2022213396A1
WO2022213396A1 · PCT/CN2021/086547
Authority
WO
WIPO (PCT)
Prior art keywords
facial
cat
training
target
recognition
Prior art date
Application number
PCT/CN2021/086547
Other languages
English (en)
French (fr)
Inventor
虞爱
周涛
刘周可
谭升
肖唐财
Original Assignee
深圳市悦保科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市悦保科技有限公司
Publication of WO2022213396A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • the present application relates to the technical field of facial recognition, in particular to a cat facial individual recognition device, method, computer equipment and storage medium.
  • Non-biometric identification includes technologies such as RFID pet identification.
  • in RFID identification, the chip generates an induced current from the signal emitted by a scanner; this energy powers transmission of the information stored in the chip, which the scanner receives and decodes.
  • however, the chip must be implanted into the pet, and professional equipment is required to transmit the signal for identification, so the pet may be harmed and use is inconvenient.
  • there are many types of biometric identification.
  • nose-print recognition is unique to each animal, but the nose prints of some cat breeds are shallow and difficult to capture with existing technology, so the technical difficulty is high.
  • in iris recognition, the iris is unique and highly accurate, but professional equipment is needed to capture images, so development is expensive and use is inconvenient.
  • voiceprints and gait are also unique and theoretically feasible, but the technology is not mature enough. Facial recognition extracts the cat's facial features and image acquisition is convenient, but existing accuracy is insufficient.
  • in view of these problems, the present application proposes a cat facial individual recognition device, method, computer equipment and storage medium, so as to accomplish individual recognition of cats efficiently and accurately.
  • the present application provides a cat facial individual recognition device, comprising: a cat face acquisition module, used to acquire a facial image of a target; a cat face detection module, used to extract the cat's facial coordinate points from the facial image and crop the facial image according to those coordinate points to obtain the cat's facial region; a cat face recognition module, used to obtain the cat's facial feature vector from the facial region;
  • a cat face archiving module, used to build a pet-cat archive from pre-acquired facial feature vectors for pet identification; and a cat face discrimination module, used to compare the facial feature vector of the target to be identified with the facial feature vectors of the pet cats in the archive and determine the individual identity of the target.
  • the cat face detection module includes a cat face detection model
  • the training process of the detection model includes: detection image collection, detection image labeling, detection network training, and detection model testing.
  • the detection images are collected by crawling pictures by pet name from pet adoption websites and then synthesizing training pictures with varied backgrounds through manual Photoshop (PS) editing; the pictures are divided into a training set used for network training and a test set used for model testing, to verify the effect of network training.
  • the detection images are labeled by marking the cat's facial coordinate point information and the circumscribed rectangle of the facial contour on the training pictures.
  • the detection network is trained on the training-set pictures, and the loss function used for training is L(ω, θ) = (1/N) Σ_{i=1}^{N} (p_i(ω, θ) - y_i)^2, where p_i(ω, θ) is the predicted coordinate, y_i is the ground-truth coordinate, N is the number of samples drawn, and ω and θ are the network parameters.
  • the detection model testing verifies the training effect with the test-set pictures: the trained detection network and parameters predict on the test set, the validation loss is computed with the test-set labels and then compared with the training loss, to judge whether the detection model has trained well.
  • the cat face recognition module includes a cat face recognition model
  • the training process of the recognition model includes: recognition image acquisition, recognition image sorting, recognition network training, and recognition model testing.
  • the identifying image collection includes manually collecting a large amount of pet cat facial information.
  • the recognition image sorting uses the collected cat face pictures, crops the cats' facial regions, and classifies the facial-region pictures by individual.
  • the recognition network training adopts joint training with Softmax loss and center loss; the loss function used is L = L_S + λL_C, where L_S = -Σ_{i=1}^{m} log( e^{W_{y_i}^T X_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T X_i + b_j} ), L_C = (1/2) Σ_{i=1}^{m} ||X_i - C_{y_i}||^2, and λ is the weight of the center loss
  • L_S is the Softmax loss function
  • L_C is the center loss function
  • m is the number of samples taken in each batch
  • n is the total number of training categories
  • the weight W^T is equivalent to a matrix composed of multiple vectors
  • b_{y_i} and b_j are the bias parameters of categories y_i and j, respectively
  • X_i is the feature vector extracted by the network each time
  • C_{y_i} is the center of each category.
  • the training process of the recognition network training is: the training data are input into the recognition network to obtain facial feature vectors and the center loss is computed; the category centers are randomly initialized, then within each batch the distance between the current data and its category center is computed, and this distance, in gradient form, is added onto the category center.
  • the application also provides a cat facial individual recognition method, which uses the aforementioned facial recognition device to perform individual recognition on the cat's face; the specific steps include:
  • acquiring a facial image of the target to be recognized;
  • extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
  • obtaining the facial feature vector of the target to be recognized from the facial region;
  • comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
  • the present application also provides a computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
  • acquiring a facial image of the target to be recognized;
  • extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
  • obtaining the facial feature vector of the target to be recognized from the facial region;
  • comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
  • the present application also provides one or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • acquiring a facial image of the target to be recognized;
  • extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
  • obtaining the facial feature vector of the target to be recognized from the facial region;
  • comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
  • This application combines biometric technology and deep learning algorithms to realize facial individual recognition of cats; the recognition is accurate and efficient, and the effect is remarkable.
  • Fig. 1 is a schematic diagram of the cat facial individual recognition device of the present application;
  • Fig. 2 is the training flow chart of the cat face detection model of the present application;
  • Fig. 3 is the training flow chart of the cat face recognition model of the present application;
  • Fig. 4 is a schematic diagram of the cat facial individual recognition of the present application;
  • FIG. 5 is a schematic diagram of the computer equipment of the present application.
  • Biometric technology: through the close combination of computers with high-tech means such as optics, acoustics, biosensors and biostatistics, inherent physiological characteristics of the body (such as fingerprints, facial images and irises) and behavioral characteristics (such as handwriting, voice and gait) are used to verify individual identity.
  • Deep learning: learning the inherent laws and representation levels of sample data; the information obtained during such learning greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines human-like analytical and learning ability, able to recognize data such as text, images and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of earlier related technologies.
  • An embodiment of the present application provides a cat facial individual recognition device, as shown in FIG. 1, including: a cat face acquisition module 10, used to acquire a facial image of a target; a cat face detection module 20, used to extract the cat's facial coordinate points from the facial image and crop the facial image according to those points to obtain the cat's facial region; a cat face recognition module 30, used to obtain the cat's facial feature vector from the facial region;
  • a cat face archiving module 40, used to build a pet-cat archive from pre-acquired facial feature vectors for pet identification; and a cat face discrimination module 50, used to compare the facial feature vector of the target to be identified with the facial feature vectors of the pet cats in the archive
  • and determine the individual identity of the target. If the Euclidean distance between the facial feature vectors is small enough, i.e. below a set threshold, the target to be identified and the archived pet cat it is compared with can be judged to be the same cat; otherwise they are different cats.
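The threshold comparison above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the archive layout (a dict of ID to feature vector), the function names and the threshold value are all assumptions.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query_vec, archive, threshold):
    """Return the archived cat ID whose feature vector is nearest to the
    query vector, provided the distance is below the threshold; otherwise
    return None (the target is treated as an unregistered cat)."""
    best_id, best_dist = None, float("inf")
    for cat_id, vec in archive.items():
        d = euclidean_distance(query_vec, vec)
        if d < best_dist:
            best_id, best_dist = cat_id, d
    return best_id if best_dist < threshold else None
```

In practice the threshold would be tuned on held-out pairs of same-cat and different-cat images.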
  • As shown in FIG. 2, the cat face detection module 20 provided by the present application includes a cat face detection model 100, whose training process includes: detection image collection 110, detection image labeling 120, detection network training 130 and detection model test 140.
  • Cat facial features are unique and can be used as a basis for pet identification.
  • cat face detection obtains the facial region and 7 facial key points: the left ear, right ear, left eye, right eye, left nostril, right nostril, and the center of the mouth. Because the faces of different cat breeds are broadly similar, these 7 points generalize well and better reflect the cat's outline; they also make training-data annotation more convenient and exclude interference from other factors such as the torso.
  • Traditional image processing methods can also obtain facial regions, but they are heavily dependent on the scene and have poor accuracy and generalization ability. Deep learning can learn useful information from a large amount of data and is widely used in image processing. Compared with traditional methods, deep learning has significant advantages in accuracy and generalization ability. This application adopts the method of deep neural network to detect the position information of the cat's face in the image.
  • the detection image collection 110 crawls pictures by pet name from pet adoption websites. Such websites host cats of many breeds and store multiple pictures of the same cat, from which unsuitable pictures are removed manually. Training pictures with varied backgrounds are then synthesized through manual Photoshop (PS) editing: the cat face is annotated as a positive sample, while the other backgrounds in the pictures, made as rich as possible, serve as negative samples, so that a model trained on the generated training set can accurately detect cat faces.
  • after data augmentation (rotation, illumination changes, denoising, flipping), the detection image collection 110 divides the collected pictures into two parts, 90% for the training set and 10% for the test set; the training set is used for network training and the test set for model testing, to verify the effect of detection network training.
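A minimal sketch of the 90%/10% split step; the shuffling, the fixed seed and the function name are illustrative assumptions, and the augmentation itself is omitted.

```python
import random

def split_dataset(images, train_frac=0.9, seed=42):
    """Shuffle the (already augmented) image list and split it into a
    training set and a test set, 90%/10% by default."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```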
  • the detection image labeling 120 is to label the facial coordinate point information of the cat and the circumscribed rectangle of the facial contour on the training picture.
  • the pre-annotated coordinate point information is ultimately learned by the detection network, so coordinate points are also obtained for a cat image to be recognized; the circumscribed rectangle of the facial contour can then be generated from these points
  • and the face is cropped to obtain a suitable facial region of the cat that can be correctly recognized.
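The circumscribed-rectangle step can be illustrated as computing the bounding box of the 7 key points; the relative margin parameter is an assumption added so the crop keeps the full facial outline, not something specified in the text.

```python
def facial_bounding_box(keypoints, margin=0.1):
    """Circumscribed rectangle (x0, y0, x1, y1) of the 7 facial key
    points (ears, eyes, nostrils, mouth centre), expanded by a relative
    margin on each side."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    mx, my = (x1 - x0) * margin, (y1 - y0) * margin
    return (x0 - mx, y0 - my, x1 + mx, y1 + my)
```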
  • the detection network training 130 uses the training-set pictures for training, and the mean square error loss function used in training is:
    L(ω, θ) = (1/N) Σ_{i=1}^{N} (p_i(ω, θ) - y_i)^2
  • p_i(ω, θ) is the predicted coordinate
  • y_i is the ground-truth coordinate
  • N is the number of samples drawn
  • ω and θ are the network parameters.
  • according to gradient descent, the network parameters are updated backwards along the gradient direction of the loss, and this is repeated batch by batch until the loss gradually decreases to a stable value.
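A toy illustration of the mean square error loss and one gradient-descent update. The one-parameter model p_i = w * x_i merely stands in for the detection network's parameters (ω, θ); the learning rate is an assumption.

```python
def mse_loss(pred, true):
    """Mean square error over N predicted/ground-truth coordinates:
    L = (1/N) * sum_i (p_i - y_i)**2."""
    n = len(pred)
    return sum((p - y) ** 2 for p, y in zip(pred, true)) / n

def sgd_step(w, xs, ys, lr=0.1):
    """One gradient-descent update for the toy model p_i = w * x_i:
    move w against the gradient of the MSE loss."""
    n = len(xs)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
    return w - lr * grad
```

Repeating the update drives the loss toward a stable minimum, mirroring the "repeat until the loss stabilizes" loop described above.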
  • the detection model test 140 verifies the training effect with the test-set pictures: the trained detection network and parameters predict on the test set, the validation loss is computed with the test-set labels and then compared with the training loss, to judge whether the detection model has trained well.
  • verification generally runs alongside training, dynamically judging the training effect and deciding when to terminate training.
  • the judgment is as follows: while the losses (errors) of both the training set and the test set are still falling, the model is underfitting, which occurs early in training. When both losses have gradually fallen to a stable level and differ little, the model is moderately trained and training can be terminated; if training continues, the training loss keeps falling far below the test loss, the model overfits, and the prediction results are unreliable.
  • equally: when the losses (errors) of the training set and the test set are both large, the model is underfitting, early in training; when both losses are stable, small and close, training is moderate and can be stopped; when the training loss is small but the validation loss is large, the model is overfitted and the prediction results are unreliable. There is no fixed boundary; it is usually determined by watching how the loss curves change. A common reference point is to stop training when the training loss is still falling rapidly but the test loss begins to level off at an inflection point.
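The stopping heuristic above can be sketched as a plateau check on the test loss. Reducing the rule to the test loss alone, and the patience and min_delta values, are simplifying assumptions; the patent's description is a visual judgment of the loss curves.

```python
def should_stop(test_losses, patience=3, min_delta=1e-4):
    """Stop once the test loss has failed to improve by at least
    min_delta for `patience` consecutive epochs: an approximation of the
    inflection point where the test loss levels off."""
    if len(test_losses) <= patience:
        return False
    recent = test_losses[-(patience + 1):]
    improvements = [recent[i] - recent[i + 1] for i in range(patience)]
    return all(imp < min_delta for imp in improvements)
```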
  • the cat face recognition module includes a cat face recognition model 200
  • the training process of the recognition model 200 includes: recognition image collection 210, recognition image sorting 220, recognition network training 230, and recognition Model Test 240.
  • Cat face recognition is to obtain facial feature information of pet cats.
  • This application uses the Inception V4 module, essentially the model with the highest accuracy to date on the image classification task of ImageNet, the large-scale visual database used for visual object recognition research, to build a deep learning network whose 128-dimensional output represents the facial feature information of pet cats.
  • the Inception V4 module adopts a distributed idea, splitting a single-path network into multiple networks of different structures so that the network can learn richer features.
  • the recognition image collection 210 includes manually collecting a large amount of pet-cat facial information; the cat faces must be photographed clearly enough, and sufficiently many pictures must be collected per individual.
  • the recognition image sorting 220 uses the collected cat face pictures, crops the cats' facial regions, and classifies the facial-region pictures by individual: a single folder holds only the faces of one individual, different individuals' faces go into different folders, and all individuals are placed under the same directory.
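The per-individual sorting can be mirrored in code as grouping cropped images under one key per cat; the file-system folders are replaced by a dictionary here purely for illustration.

```python
from collections import defaultdict

def group_by_individual(labelled_crops):
    """Group (individual_id, image) pairs so that each individual's
    cropped faces end up together, mirroring the one-folder-per-cat
    directory layout described above."""
    folders = defaultdict(list)
    for individual_id, image in labelled_crops:
        folders[individual_id].append(image)
    return dict(folders)
```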
  • the recognition network training 230 adopts joint training with Softmax loss and center loss, and the loss function used is:
    L = L_S + λL_C
    L_S = -Σ_{i=1}^{m} log( e^{W_{y_i}^T X_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T X_i + b_j} )
    L_C = (1/2) Σ_{i=1}^{m} ||X_i - C_{y_i}||^2
  • L_S is the Softmax loss function
  • L_C is the center loss function
  • λ is the weight of the center loss
  • m is the number of samples taken in each batch
  • n is the total number of training categories
  • the weight W^T is equivalent to a matrix composed of multiple vectors
  • b_{y_i} and b_j are the bias parameters of categories y_i and j, respectively
  • X_i is the feature-vector prediction extracted by the network each time
  • C_{y_i} is the center of each category.
  • during training, the extracted cat facial feature vector X is assigned to a category according to its label, and an inner product is taken with the corresponding vector in W. If the inner product of the feature vector X and the weight vector W is large, the two vectors are very similar, and the probability output through the Softmax loss that this cat face feature vector belongs to that class will also be large.
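The inner-product-to-probability step can be sketched as a plain Softmax over the logits W_j^T x + b_j. The max-subtraction is a standard numerical-stability trick, not part of the patent text, and the function name is an assumption.

```python
import math

def softmax_probability(x, weights, biases, target):
    """Probability assigned to class `target`: exp(W_t.x + b_t) divided
    by the sum of exp(W_j.x + b_j) over all classes. A larger inner
    product with the target's weight vector yields a larger probability."""
    logits = [sum(wi * xi for wi, xi in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    mx = max(logits)                      # subtract max for stability
    exps = [math.exp(v - mx) for v in logits]
    return exps[target] / sum(exps)
```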
  • center loss sets a center point for each class. During training, the squared norm of the difference between each extracted feature vector and its corresponding class center serves as the loss: the larger the loss, the farther the feature vector is from its class center. Reducing this loss draws the cat-face features of each class closer to the center point, i.e. the intra-class distance becomes smaller.
  • the training process of the recognition network training 230 is: the training data are input into the recognition network to obtain facial feature vectors and the center loss is computed; the category centers are randomly initialized, then within each batch the distance between the current data and its category center is computed, and this distance, in gradient form, is added onto the category center. Like a parameter correction, and similar to gradient descent, this keeps the class centers from jittering; the ultimate goal is for the loss to decrease gradually to a stable value. On each computation, some data are drawn at random from the training data and the mean of their feature vectors serves as the class center.
  • the advantage of using the center loss function is that the inter-class distance becomes larger while the intra-class distance decreases, so different individuals with high similarity can be distinguished better.
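A minimal sketch of the center loss and the gradient-form centre update described above. The update factor alpha and the dictionary-of-centres layout are assumptions; a real implementation would run inside the network's training loop.

```python
def center_loss(features, labels, centers):
    """Center loss: half the summed squared distance between each
    feature vector and its class centre, 0.5 * sum ||x_i - c_{y_i}||^2."""
    total = 0.0
    for x, y in zip(features, labels):
        total += 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x, centers[y]))
    return total

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class centre a small step (alpha) towards the mean of
    the batch features of that class, so the centres do not jitter."""
    new = {k: list(v) for k, v in centers.items()}
    for y in set(labels):
        batch = [x for x, lbl in zip(features, labels) if lbl == y]
        mean = [sum(col) / len(batch) for col in zip(*batch)]
        new[y] = [c + alpha * (m - c) for c, m in zip(new[y], mean)]
    return new
```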
  • the recognition model test 240 follows the same procedure as the detection model test: the test set is used to verify the training effect of the recognition model.
  • as shown in FIG. 4, another embodiment of the present application provides a cat facial individual recognition method, which uses the aforementioned facial recognition device to perform individual recognition on the cat's face; the specific steps include:
  • S10: acquiring a facial image of the target to be recognized;
  • S20: extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the circumscribed rectangle of the facial contour to obtain the cat's facial region;
  • S30: obtaining the facial feature vector of the target to be recognized from the facial region;
  • S40: comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
  • This application combines biometric technology and deep learning algorithms to realize facial individual recognition of cats; the recognition is accurate and efficient, and the effect is remarkable.
  • another embodiment of the present application provides a computer device, as shown in FIG. 5, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
  • FIG. 5 shows a schematic block diagram of a computer device provided by an embodiment of the present invention. For convenience of description, only parts related to the embodiment of the present invention are shown.
  • the computer device 5 may be a mobile phone, a tablet computer, a desktop computer, a notebook, a palmtop computer, a cloud server and other computing devices.
  • the computer device 5 may comprise a processor 50, a memory 51, and computer-readable instructions 52 stored in the memory 51 and executable on the processor 50, such as computer-readable instructions for performing the cat facial individual recognition described above.
  • when the processor 50 executes the computer-readable instructions 52,
  • the steps in each of the above embodiments of cat facial individual recognition are implemented, for example, steps S10 to S40 shown in FIG. 4.
  • alternatively, when the processor 50 executes the computer-readable instructions 52,
  • the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of modules 10 to 50 shown in FIG. 1.
  • the computer-readable instructions 52 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 51 and executed by the processor 50, to complete the present invention.
  • the one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 52 in the computer device 5 .
  • the processor 50 may be a central processing unit (Central Processing Unit, CPU), or other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 51 may be an internal storage unit of the computer device 5 , such as a hard disk or a memory of the computer device 5 .
  • the memory 51 may also be an external storage device of the computer device 5, such as a plug-in hard disk equipped on the computer device 5, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, Flash Card, etc.
  • the memory 51 may also include both an internal storage unit of the computer device 5 and an external storage device.
  • the memory 51 is used to store the computer readable instructions and other instructions and data required by the computer device 5 .
  • the memory 51 can also be used to temporarily store data that has been output or will be output.
  • Each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage medium includes various media that can store computer-readable instructions: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.
  • another embodiment of the present application provides one or more readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a cat facial individual recognition device, comprising: a cat face acquisition module, used to acquire a facial image of a target; a cat face detection module, used to extract the cat's facial coordinate points from the facial image and crop the facial image according to the coordinate points to obtain the cat's facial region; a cat face recognition module, used to obtain the cat's facial feature vector from the facial region; a cat face archiving module, used to build a pet-cat archive from pre-acquired facial feature vectors for pet identification; and a cat face discrimination module, used to compare the facial feature vector of the target to be identified with the facial feature vectors of the pet cats in the archive and determine the individual identity of the target.

Description

Cat facial individual recognition apparatus, method, computer device and storage medium
Technical Field
The present application relates to the technical field of facial recognition, and in particular to a cat facial individual recognition apparatus and method, a computer device, and a storage medium.
Background
With the improvement of people's quality of life, pet cats play an increasingly important role in daily life, giving rise to application scenarios such as pet-cat management and pet-cat insurance; pet-cat management platforms, insurance claim handling and similar settings therefore face the problem of individual identification of pet cats. At present, individual identification of pet cats divides mainly into biometric and non-biometric technologies. In non-biometric technology such as RFID pet identification, the identification chip generates an induced current from the signal emitted by a scanner; this energy powers transmission of the information stored in the chip, which the scanner receives and decodes. However, the chip must be implanted into the pet, and professional equipment is required to transmit the signal for identification, so the pet may be harmed and use is inconvenient. Biometric technologies come in many kinds. Nose-print recognition is unique, but the nose prints of some cat breeds are shallow and difficult to capture with existing technology, so the technical difficulty is high. In iris recognition, the iris is unique and highly accurate, but professional equipment is needed to capture images, so development is expensive and use is inconvenient. Voiceprints and gait are also unique and theoretically feasible, but the technology is not mature enough. Facial recognition extracts the cat's facial features and image acquisition is convenient, but its accuracy has been insufficient.
Summary of the Invention
In view of the above technical problems, and to overcome the deficiencies of the prior art, the present application proposes a cat facial individual recognition apparatus and method, a computer device, and a storage medium, so as to accomplish individual recognition of cats efficiently and accurately.
The present application provides a cat facial individual recognition apparatus, comprising: a cat face acquisition module, used to acquire a facial image of a target; a cat face detection module, used to extract the cat's facial coordinate points from the facial image and crop the facial image according to the coordinate points to obtain the cat's facial region; a cat face recognition module, used to obtain the cat's facial feature vector from the facial region; a cat face archiving module, used to build a pet-cat archive from pre-acquired facial feature vectors for pet identification; and a cat face discrimination module, used to compare the facial feature vector of the target to be identified with the facial feature vectors of the pet cats in the archive and determine the individual identity of the target.
In some embodiments, the cat face detection module includes a cat face detection model, whose training process includes: detection image collection, detection image labeling, detection network training, and detection model testing.
In some embodiments, the detection images are collected by crawling pictures by pet name from pet adoption websites and then synthesizing training pictures with varied backgrounds through manual Photoshop (PS) editing; the pictures are divided into a training set used for network training and a test set used for model testing, to verify the effect of network training.
In some embodiments, the detection image labeling marks the cat's facial coordinate point information and the circumscribed rectangle of the facial contour on the training pictures.
In some embodiments, the detection network is trained on the training-set pictures, and the loss function used for training is
L(ω, θ) = (1/N) Σ_{i=1}^{N} (p_i(ω, θ) - y_i)^2
where p_i(ω, θ) is the predicted coordinate, y_i is the ground-truth coordinate, N is the number of samples drawn, and ω and θ are the network parameters.
In some embodiments, the detection model testing verifies the training effect with the test-set pictures: the trained detection network and parameters predict on the test set, the validation loss is computed with the test-set labels and then compared with the training loss, to judge whether the detection model has trained well.
In some embodiments, the cat face recognition module includes a cat face recognition model, whose training process includes: recognition image collection, recognition image sorting, recognition network training, and recognition model testing.
In some embodiments, the recognition image collection includes manually collecting a large amount of pet-cat facial information.
In some embodiments, the recognition image sorting uses the collected cat face pictures, crops the cats' facial regions, and classifies the facial-region pictures by individual.
In some embodiments, the recognition network training adopts joint training with Softmax loss and center loss, and the loss function used is
L = L_S + λL_C
L_S = -Σ_{i=1}^{m} log( e^{W_{y_i}^T X_i + b_{y_i}} / Σ_{j=1}^{n} e^{W_j^T X_i + b_j} )
L_C = (1/2) Σ_{i=1}^{m} ||X_i - C_{y_i}||^2
where L_S is the Softmax loss function, L_C is the center loss function, λ is the weight of the center loss, m is the number of samples taken in each batch, n is the total number of training categories, the weight W^T is equivalent to a matrix composed of multiple vectors, b_{y_i} and b_j are the bias parameters of categories y_i and j respectively, X_i is the feature vector extracted by the network each time, and C_{y_i} is the center of each category.
In some embodiments, the training process of the recognition network training is: the training data are input into the recognition network to obtain facial feature vectors and the center loss is computed; the category centers are randomly initialized, then within each batch the distance between the current data and its category center is computed, and this distance, in gradient form, is added onto the category center.
The present application also provides a cat facial individual recognition method, which uses the aforementioned facial recognition apparatus to perform individual recognition on the cat's face; the specific steps include:
acquiring a facial image of the target to be recognized;
extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
obtaining the facial feature vector of the target to be recognized from the facial region;
comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
The present application also provides a computer device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
acquiring a facial image of the target to be recognized;
extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
obtaining the facial feature vector of the target to be recognized from the facial region;
comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
The present application also provides one or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to execute the following steps:
acquiring a facial image of the target to be recognized;
extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
obtaining the facial feature vector of the target to be recognized from the facial region;
comparing the facial feature vector of the target to be recognized with the facial feature vectors of the pet cats in the archive to determine the individual identity of the target.
The present application combines biometric technology and deep learning algorithms to realize facial individual recognition of cats; the recognition is accurate and efficient, and the effect is remarkable.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the cat facial individual recognition apparatus of the present application;
Fig. 2 is the training flow chart of the cat face detection model of the present application;
Fig. 3 is the training flow chart of the cat face recognition model of the present application;
Fig. 4 is a schematic diagram of the cat facial individual recognition of the present application;
Fig. 5 is a schematic diagram of the computer device of the present application.
Detailed Description
Certain embodiments of the present application are hereinafter described more fully with reference to the accompanying drawings, in which some but not all embodiments are shown. Indeed, the various embodiments of the application may be realized in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the application satisfies applicable legal requirements.
To make the purpose, technical solution and advantages of the present application clearer, the application is described in further detail below in conjunction with specific embodiments and with reference to the accompanying drawings.
Biometric technology: through the close combination of computers with high-tech means such as optics, acoustics, biosensors and biostatistics, inherent physiological characteristics of the body (such as fingerprints, facial images and irises) and behavioral characteristics (such as handwriting, voice and gait) are used to verify individual identity.
Deep learning: learning the inherent laws and representation levels of sample data; the information obtained during such learning greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines human-like analytical and learning ability, able to recognize data such as text, images and sound. Deep learning is a complex machine learning approach whose results in speech and image recognition far exceed those of earlier related technologies.
An embodiment of the present application provides a cat facial individual recognition apparatus, as shown in Fig. 1, including: a cat face acquisition module 10, used to acquire a facial image of a target; a cat face detection module 20, used to extract the cat's facial coordinate points from the facial image and crop the facial image according to those points to obtain the cat's facial region; a cat face recognition module 30, used to obtain the cat's facial feature vector from the facial region; a cat face archiving module 40, used to build a pet-cat archive from pre-acquired facial feature vectors for pet identification; and a cat face discrimination module 50, used to compare the facial feature vector of the target to be identified with the facial feature vectors of the pet cats in the archive and determine the individual identity of the target. If the Euclidean distance between the facial feature vectors is small enough, i.e. below a set threshold, the target to be identified and the archived pet cat it is compared with can be judged to be the same cat; otherwise they are different cats.
As shown in Fig. 2, the cat face detection module 20 provided by the present application includes a cat face detection model 100, whose training process includes: detection image collection 110, detection image labeling 120, detection network training 130 and detection model testing 140.
Cat facial features are unique and can serve as a basis for pet identification. Cat face detection obtains the facial region and 7 facial key points: the left ear, right ear, left eye, right eye, left nostril, right nostril, and the center of the mouth. Because the faces of different cat breeds are broadly similar, these 7 points generalize well and better reflect the cat's outline; they also make training-data annotation more convenient and exclude interference from other factors such as the torso. Traditional image processing can also obtain the facial region, but it depends heavily on the scene and its accuracy and generalization are poor. Deep learning can learn useful information from large amounts of data and is widely used in image processing; its accuracy and generalization ability are significantly better than those of traditional methods. The present application uses a deep neural network to detect the position of the cat's face in the image.
In this embodiment, the detection image collection 110 crawls pictures by pet name from pet adoption websites; such websites host cats of many breeds and store multiple pictures of the same cat, from which unqualified pictures are removed manually. Training pictures with varied backgrounds are then synthesized through manual Photoshop (PS) editing: the cat face is annotated as a positive sample while the remaining backgrounds, made as rich as possible, serve as negative samples, so that a model trained on the generated training set can accurately detect cat faces. After data augmentation (rotation, illumination changes, denoising, flipping), the detection image collection 110 divides the collected pictures into two parts, 90% for the training set and 10% for the test set; the training set is used for network training and the test set for model testing, to verify the effect of detection network training.
In this embodiment, the detection image labeling 120 marks the cat's facial coordinate point information and the circumscribed rectangle of the facial contour on the training pictures. The pre-annotated coordinate point information is ultimately learned by the detection network, so coordinate points are also obtained for a cat image to be recognized; the circumscribed rectangle of the facial contour can then be generated from these points and the face cropped, yielding a suitable facial region of the cat that can be correctly recognized.
In this embodiment, the detection network training 130 uses the training-set pictures, and the mean square error loss function used in training is:
L(ω, θ) = (1/N) Σ_{i=1}^{N} (p_i(ω, θ) - y_i)^2
where p_i(ω, θ) is the predicted coordinate, y_i is the ground-truth coordinate, N is the number of samples drawn, and ω and θ are the network parameters.
In this embodiment, the training process consists of the following steps:
a. Forward propagation: feed the training-set data into the detection network in batches to obtain predicted coordinates, then compute the loss against the true coordinates.
b. Backward propagation: following gradient descent, update the network parameters backward along the gradient of the loss.
c. Repeated training: loop over steps a and b until the loss gradually decreases to a stable value.
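Steps a to c amount to a plain gradient-descent loop. The sketch below shows that loop on an arbitrary loss whose gradient the caller supplies; actual detection-network training would use a deep learning framework's automatic differentiation rather than a hand-written `grad_fn`.

```python
def train(params, grad_fn, loss_fn, lr=0.1, steps=200):
    """Minimal gradient-descent loop: each iteration performs one
    forward pass (loss) and one backward pass (update along the
    negative gradient), repeated until the loss settles."""
    for _ in range(steps):
        grads = grad_fn(params)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params, loss_fn(params)
```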
In this embodiment, detection model testing 140 verifies the training effect with the test-set pictures: the trained detection network and its parameters predict on the test set, the test-set labels are used to compute the validation loss, and this loss is compared with the training-set loss to judge how well the detection model has been trained.
Validation is generally performed alongside training, so that the training effect can be judged dynamically and training terminated. The criteria are: when the losses (errors) on both the training set and the test set are large and still decreasing, the model is underfitting, which occurs early in training; when both losses have gradually decreased to small, stable values that differ little from each other, the model is adequately trained and training may be terminated; if training continues, the training-set loss will keep decreasing and fall far below the test-set loss, the model overfits, and its predictions become unreliable. There is no fixed boundary; the decision is usually made by observing the loss curves. A common reference point is to stop training when the training loss is still falling rapidly but the test loss has begun to level off and shows an inflection point.
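The stopping heuristic described above (test loss has leveled off while training loss is still falling) can be sketched as follows; the `window` and `tol` parameters are illustrative choices, not values from the application.

```python
def should_stop(train_losses, val_losses, window=3, tol=1e-3):
    """Stop when the validation loss has stopped improving over the
    last `window` epochs while the training loss is still falling,
    i.e. the inflection point described in the text."""
    if len(val_losses) < window + 1:
        return False  # not enough history yet
    val_drop = val_losses[-window - 1] - val_losses[-1]
    train_still_falling = train_losses[-1] < train_losses[-window - 1]
    return val_drop < tol and train_still_falling
```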
As shown in Fig. 3, in this embodiment the cat face recognition module includes a cat face recognition model 200, and the training procedure of the recognition model 200 comprises: recognition image acquisition 210, recognition image organization 220, recognition network training 230, and recognition model testing 240.
Cat face recognition obtains the facial feature information of the pet cat. The present application adopts the Inception V4 module; Inception V4 has been essentially the most accurate model for the image-classification task on ImageNet, the large visual database used in visual object recognition research. A deep learning network is created with a 128-dimensional output that represents the pet cat's facial feature information. The Inception V4 module takes a distributed approach, splitting a single-path network into multiple sub-networks of different structures, which enables the network to learn richer features.
In this embodiment, recognition image acquisition 210 includes manually collecting facial information from a large number of pet cats, with the cat faces photographed sharply enough and a sufficient number of pictures collected for each individual.
In this embodiment, recognition image organization 220 crops the cat's facial region from the collected facial pictures and sorts the facial-region pictures by individual: a single folder holds only the faces of one individual, different individuals go into different folders, and all individuals are placed under the same directory.
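The one-folder-per-individual layout described above can be sketched as below; the function name and the `(individual_id, path)` input format are assumptions for the example.

```python
from pathlib import Path
import shutil

def organize_by_individual(images, out_dir):
    """Copy each cropped face image into a folder named after its
    individual's ID, with all individual folders under one common
    directory. `images` is a list of (individual_id, path) pairs."""
    out = Path(out_dir)
    for cat_id, path in images:
        dest = out / str(cat_id)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(path, dest / Path(path).name)
```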
In this embodiment, recognition network training 230 trains jointly with Softmax loss and center loss, using the loss function:

$L = L_S + \lambda L_C$

where $L_S$ is the Softmax loss function and $L_C$ is the center loss function:

$L_S = -\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}X_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_j^{T}X_i+b_j}}, \qquad L_C = \frac{1}{2}\sum_{i=1}^{m}\left\|X_i-C_{y_i}\right\|^2$

Here $\lambda$ is the weight of the center loss, $m$ is the number of samples per batch, $n$ is the total number of training classes, the weight $W^{T}$ is equivalent to a matrix composed of multiple vectors, $b_{y_i}$ and $b_j$ are the bias parameters of classes $y_i$ and $j$ respectively, $X_i$ is the predicted feature vector extracted by the network each time, and $C_{y_i}$ is the center of each class.
During training, each extracted cat facial feature vector X is assigned to a class according to its label, and an inner product is taken with the corresponding vector in W. If the inner product of the feature vector X with the weight vector W is large, the two vectors are highly similar, and the probability output through the Softmax loss that this cat face feature vector belongs to that class is also large. The center loss sets a center point for each class; during training, the squared norm of the difference between each extracted feature vector and the center of its class is taken as a loss: the larger this loss, the farther the feature vector is from its class center. Reducing this loss brings the cat face features of each class closer to the center point, i.e. the intra-class distance shrinks.
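The joint Softmax-plus-center loss described above can be sketched in pure Python as below. The value of `lam` (the center-loss weight λ) is an assumed placeholder; a real implementation would compute this with a deep learning framework over tensors.

```python
import math

def softmax_center_loss(features, labels, W, b, centers, lam=0.5):
    """Joint loss L = L_S + lambda * L_C.  features: feature vectors
    X_i; labels: class indices y_i; W, b: per-class weight vectors
    and biases; centers: per-class center vectors C_j."""
    ls = 0.0  # Softmax (cross-entropy) loss
    lc = 0.0  # center loss
    for x, y in zip(features, labels):
        # logits: inner product of the feature with each class weight, plus bias
        logits = [sum(wk * xk for wk, xk in zip(W[j], x)) + b[j]
                  for j in range(len(W))]
        m = max(logits)  # subtract the max for numerical stability
        ls += -(logits[y] - m - math.log(sum(math.exp(z - m) for z in logits)))
        # half the squared distance to this sample's class center
        lc += 0.5 * sum((xk - ck) ** 2 for xk, ck in zip(x, centers[y]))
    return ls + lam * lc
```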
In this embodiment, the training process of recognition network training 230 feeds the training data into the recognition network to obtain facial feature vectors and computes the center loss. The class centers are initialized randomly; then, within each batch, the distance between the current data and the class center is computed, and this gradient-like distance is added to the class center. This resembles a parameter correction and, much like gradient descent, keeps the class centers from jittering; the final goal is for the loss to decrease gradually to a stable value. At each computation, part of the training data is drawn at random and the mean of those feature vectors is used as the class center. The advantage of the center loss function is that inter-class distances grow while intra-class distances shrink, so that highly similar but distinct individuals can be told apart more reliably.
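The per-batch center update described above can be sketched as below: each class center moves a damped step toward the mean of that class's features in the batch, which is what keeps the centers from jittering. The update rate `alpha` is an assumed hyperparameter.

```python
def update_centers(centers, features, labels, alpha=0.5):
    """Move each class center a step `alpha` toward the mean of that
    class's feature vectors in the current batch."""
    per_class = {}
    for x, y in zip(features, labels):
        per_class.setdefault(y, []).append(x)
    new_centers = {k: list(v) for k, v in centers.items()}
    for y, xs in per_class.items():
        mean = [sum(col) / len(xs) for col in zip(*xs)]
        new_centers[y] = [c + alpha * (m - c) for c, m in zip(centers[y], mean)]
    return new_centers
```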
In this embodiment, recognition model testing 240 follows the same testing procedure as the detection model, using the test set to verify the training effect of the recognition model.
As shown in Fig. 4, another embodiment of the present application provides a cat facial individual recognition method that uses the aforementioned facial recognition apparatus to perform individual recognition of a cat's face, with the following specific steps:
S10: acquiring a facial image of the target to be identified;
S20: extracting the cat's facial coordinate points from the facial image, and cropping the facial image along the circumscribed rectangle of the facial contour to obtain the cat's facial region;
S30: obtaining the facial feature vector of the target to be identified from the facial region;
S40: comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
The present application combines biometric technology with deep learning algorithms to achieve individual facial recognition of cats; the recognition is accurate, efficient, and highly effective.
Another embodiment of the present application provides a computer device, as shown in Fig. 5, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
S10: acquiring a facial image of the target to be identified;
S20: extracting the cat's facial coordinate points from the facial image, and cropping the facial image along the circumscribed rectangle of the facial contour to obtain the cat's facial region;
S30: obtaining the facial feature vector of the target to be identified from the facial region;
S40: comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
Fig. 5 shows a schematic block diagram of a computer device provided by an embodiment of the present invention; for ease of description, only the parts relevant to the embodiment of the present invention are shown.
In this embodiment, the computer device 5 may be a computing device such as a mobile phone, tablet computer, desktop computer, notebook, palmtop computer, or cloud server. The computer device 5 may comprise: a processor 50, a memory 51, and computer-readable instructions 52 stored in the memory 51 and executable on the processor 50, for example computer-readable instructions that perform the cat facial individual recognition described above. When executing the computer-readable instructions 52, the processor 50 implements the steps of the cat facial individual recognition embodiments above, for example steps S10 to S40 shown in Fig. 4. Alternatively, when executing the computer-readable instructions 52, the processor 50 implements the functions of the modules/units in the apparatus embodiments above, for example the functions of modules 10 to 50 shown in Fig. 1.
Exemplarily, the computer-readable instructions 52 may be divided into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to carry out the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, the segments describing the execution of the computer-readable instructions 52 in the computer device 5.
The processor 50 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the computer device 5, for example a hard disk or main memory of the computer device 5. The memory 51 may also be an external storage device of the computer device 5, for example a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card provided on the computer device 5. Further, the memory 51 may include both an internal storage unit of the computer device 5 and an external storage device. The memory 51 is used to store the computer-readable instructions and other instructions and data required by the computer device 5, and may also be used to temporarily store data that has been or will be output.
The functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including a number of computer-readable instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or some of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing computer-readable instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Another embodiment of the present application provides one or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
S10: acquiring a facial image of the target to be identified;
S20: extracting the cat's facial coordinate points from the facial image, and cropping the facial image along the circumscribed rectangle of the facial contour to obtain the cat's facial region;
S30: obtaining the facial feature vector of the target to be identified from the facial region;
S40: comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
It should be noted that the shapes and sizes of the components in the drawings do not reflect true sizes and proportions, and merely illustrate the content of the embodiments of the present application.
Directional terms mentioned in the embodiments, such as "up", "down", "front", "back", "left", and "right", refer only to the directions in the drawings and are not intended to limit the scope of protection of the present application. Moreover, based on design and reliability considerations, the above embodiments may be mixed and matched with one another or with other embodiments; that is, the technical features of different embodiments may be combined freely to form further embodiments.
The method steps involved in the embodiments are not limited to the order in which they are described; the order of the steps may be adjusted according to actual needs.
It should be noted that implementations not shown or described in the drawings or the body of the specification take forms known to those of ordinary skill in the art and are not described in detail. In addition, the above definitions of the elements and methods are not limited to the specific structures, shapes, or manners mentioned in the embodiments, which those of ordinary skill in the art may simply modify or replace.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present application in detail. It should be understood that the above are merely specific embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within its scope of protection.

Claims (14)

  1. A cat facial individual recognition apparatus, characterized by comprising:
    a cat face acquisition module, for acquiring a facial image of a target;
    a cat face detection module, for extracting the cat's facial coordinate points from the facial image and cropping the facial image according to the coordinate points to obtain the cat's facial region;
    a cat face recognition module, for obtaining the cat's facial feature vector from the facial region;
    a cat face archiving module, for building an archive of pet cats from facial feature vectors obtained in advance, for use in pet cat identification;
    a cat face discrimination module, for comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive to determine the individual identity of the target.
  2. The recognition apparatus according to claim 1, characterized in that the cat face detection module comprises a cat face detection model, and the training procedure of the detection model comprises: detection image collection, detection image annotation, detection network training, and detection model testing.
  3. The recognition apparatus according to claim 2, characterized in that the detection image collection crawls pictures by pet name from pet adoption websites, then manually composites training pictures with various backgrounds in Photoshop, and divides the training pictures into a training set and a test set, the training set being used for network training and the test set for model testing, to verify the effect of the network training.
  4. The recognition apparatus according to claim 2, characterized in that the detection image annotation marks the cat's facial coordinate-point information and the circumscribed rectangle of the facial contour on the training pictures.
  5. The recognition apparatus according to claim 2, characterized in that the detection network training trains on the training-set pictures, the loss function used for training being
    $L(\omega,\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(p_i(\omega,\theta)-y_i\right)^2$
    where $p_i(\omega,\theta)$ is the predicted coordinate, $y_i$ is the true coordinate, $N$ is the number of samples drawn, and $\omega$, $\theta$ are the network parameters.
  6. The recognition apparatus according to claim 2, characterized in that the detection model testing verifies the training effect with the test-set pictures: the trained detection network and its parameters predict on the test set, the test-set labels are used to compute the validation loss, and this loss is compared with the training-set loss to judge how well the detection model has been trained.
  7. The recognition apparatus according to claim 1, characterized in that the cat face recognition module comprises a cat face recognition model, and the training procedure of the recognition model comprises: recognition image acquisition, recognition image organization, recognition network training, and recognition model testing.
  8. The recognition apparatus according to claim 7, characterized in that the recognition image acquisition includes manually collecting facial information from a large number of pet cats.
  9. The recognition apparatus according to claim 7, characterized in that the recognition image organization crops the cat's facial region from the collected facial pictures and sorts the facial-region pictures by individual.
  10. The recognition apparatus according to claim 7, characterized in that the recognition network training trains jointly with Softmax loss and center loss, the loss function used being:
    $L = L_S + \lambda L_C$
    where $L_S$ is the Softmax loss function and $L_C$ is the center loss function:
    $L_S = -\sum_{i=1}^{m}\log\frac{e^{W_{y_i}^{T}X_i+b_{y_i}}}{\sum_{j=1}^{n}e^{W_j^{T}X_i+b_j}}, \qquad L_C = \frac{1}{2}\sum_{i=1}^{m}\left\|X_i-C_{y_i}\right\|^2$
    Here $\lambda$ is the weight of the center loss, $m$ is the number of samples per batch, $n$ is the total number of training classes, $W^{T}$ is the weight matrix, $b_{y_i}$ and $b_j$ are the bias parameters of classes $y_i$ and $j$ respectively, $X_i$ is the predicted value extracted by the network each time, and $C_{y_i}$ is the center of each class.
  11. The recognition apparatus according to claim 7, characterized in that the training process of the recognition network training feeds the training data into the recognition network to obtain facial feature vectors and computes the center loss, initializes the class centers randomly, then computes, within each batch, the distance between the current data and the class center, and adds this gradient-like distance to the class center.
  12. A cat facial individual recognition method, using the facial recognition apparatus according to any one of claims 1 to 11 to perform individual recognition of a cat's face, with the following specific steps:
    acquiring a facial image of the target to be identified;
    extracting the cat's facial coordinate points from the facial image, and cropping the facial image according to the coordinate points to obtain the cat's facial region;
    obtaining the facial feature vector of the target to be identified from the facial region;
    comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
  13. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    acquiring a facial image of the target to be identified;
    extracting the cat's facial coordinate points from the facial image, and cropping the facial image according to the coordinate points to obtain the cat's facial region;
    obtaining the facial feature vector of the target to be identified from the facial region;
    comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
  14. One or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring a facial image of the target to be identified;
    extracting the cat's facial coordinate points from the facial image, and cropping the facial image according to the coordinate points to obtain the cat's facial region;
    obtaining the facial feature vector of the target to be identified from the facial region;
    comparing the facial feature vector of the target to be identified with the facial feature vectors of pet cats in the archive, and determining the individual identity of the target.
PCT/CN2021/086547 2021-04-09 2021-04-12 Cat facial individual recognition apparatus and method, computer device, and storage medium WO2022213396A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110380907.5 2021-04-09
CN202110380907.5A CN113076886A (zh) 2021-04-09 2021-04-09 Cat facial individual recognition apparatus and method




