CN110334593A - Pet recognition method and system - Google Patents

Pet recognition method and system

Info

Publication number
CN110334593A
CN110334593A
Authority
CN
China
Prior art keywords
pet
picture
face
registration
dog
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910449924.2A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zexi Technology Co Ltd
Original Assignee
Zhejiang Zexi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Zexi Technology Co Ltd filed Critical Zhejiang Zexi Technology Co Ltd
Priority application: CN201910449924.2A
Publication: CN110334593A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06V 10/95 - Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a pet recognition method and system comprising the following steps. S1: a picture is acquired at the APP end, which judges whether a pet face is present in the picture; if so, step S2 is executed. S2: the picture is uploaded to the server, which performs face recognition on the pet in the picture and returns the recognition result to the APP end. In step S1, the presence of a pet face is judged by a judgment model based on the mobile_net_v2 network; the mobile_net_v2 network model is trained at the server end and the trained model is converted to a tflite file that serves as the offline model of the APP end. Because the step of checking whether the picture contains a target runs at the APP end, the load on the server is reduced and recognition efficiency is improved; and because mobile_net_v2 is a compact network, the final model file is small and has little impact on the APP.

Description

Pet recognition method and system
Technical field
The invention belongs to the field of AI technology, and in particular relates to a pet recognition method and system.
Background technique
As the living standard of the urban population keeps improving, many city dwellers have begun to keep pets, yet reports of lost pets are heard constantly and lost-pet notices can be seen everywhere in the streets. Losing a pet is a heavy blow to its owner: the owner may spend a great deal of time and effort searching, and the probability of eventually finding the pet is very small. A lost pet is also likely to become a stray dog on the roadside, posing a threat to citizens' peaceful life, traffic, public health, and personal safety.
To address this problem, long-term exploration has been carried out. For example, a Chinese patent discloses a dog-seeking system and method based on dog face image recognition [application number: CN201810499850.9], comprising a mobile terminal, a server, and a database. The mobile terminal includes a Lost module, a Found module, and an Adopt module: the Lost module publishes lost-dog notices, the Found module verifies stray dogs, and the Adopt module publishes adoption notices. The server responds to user requests to complete each function, operating on the database and recognizing pet face images; the database stores the information of the dogs described in the published notices.
The above patent realizes online dog seeking through pet face recognition and improves the probability that a lost dog is recovered, but the scheme has certain defects. During dog face recognition it directly uploads the acquired videos and pictures to the server, where a deep learning model performs the recognition. User behavior, however, is essentially random and produces a significant proportion of invalid data: the server receives many pictures that contain no pet face at all. Such pictures are meaningless, yet the above scheme uploads all of them to the server, increasing its load and substantially reducing its recognition efficiency.
Summary of the invention
In view of the above problems, the present invention provides a pet recognition method that facilitates pet management;
In view of the above problems, the present invention also provides a system based on the above method.
To achieve the above objectives, the invention proposes a pet recognition method comprising the following steps:
S1. A picture is acquired at the APP end, which judges whether a pet face is present in the picture; if so, step S2 is executed;
S2. The picture is uploaded to the server, which performs face recognition on the pet in the picture and returns the recognition result to the APP end.
In the above pet recognition method, in step S1, a judgment model is used to judge whether a pet face is present in the picture.
In the above pet recognition method, the judgment model uses the mobile_net_v2 network model; the mobile_net_v2 network model is trained at the server end, and the trained model is converted to a tflite file that serves as the offline model file of the APP end.
In the above pet recognition method, step S2 specifically includes:
S21. The picture is uploaded to the server;
S22. The recognition model is called to extract the feature vector of the picture, and the database is searched based on this feature vector for a corresponding registration ID; if one exists, the registration ID is returned, otherwise a lookup failure is returned.
In the above pet recognition method, step S2 further includes:
S23. The classification model is invoked, either after the lookup of step S22 fails or directly;
S24. The classification model estimates the breed of the pet in the picture and the corresponding likelihood probability for each breed. When the maximum likelihood probability exceeds a first probability threshold, the breed name with the maximum likelihood probability is returned to the APP end; when the maximum likelihood probability is below the first probability threshold but above a second probability threshold, the breed names of the top three likelihood probabilities are returned to the APP end; otherwise recognition failure is returned.
In the above pet recognition method, step S22 specifically includes:
S221. The picture is scaled to a preset size to obtain a scaled picture, and the recognition model extracts a 512-dimensional feature vector from the scaled picture;
S222. The distance between this feature vector and each corresponding feature vector in the database is computed using cosine distance or Euclidean distance;
S223. Whether the minimum distance is below a first distance threshold is judged; if so, the corresponding registration ID is output.
In the above pet recognition method, after step S223 the method further includes:
S224. When the minimum distance exceeds the first distance threshold, whether it is below a second distance threshold is judged; if so, whether the registration IDs of the minimum and second-smallest distances are the same is judged, and if they are, that registration ID is output; otherwise step S225 is executed;
S225. Whether the minimum distance is below a third distance threshold is judged; if so, whether the registration IDs of the minimum, second-smallest, and third-smallest distances are all the same is judged, and if they are, that registration ID is output; otherwise step S226 is executed;
S226. Whether the minimum distance is below a fourth distance threshold is judged; if so, whether the registration IDs of the minimum, second-smallest, third-smallest, and fourth-smallest distances are all the same is judged, and if they are, that registration ID is output; otherwise step S23 is executed.
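The cascaded decision of steps S223-S226 can be sketched in a few lines of Python. This is an illustrative reconstruction, not the patent's code: the function name, the `sorted_matches` input format, and the concrete threshold values are all assumptions.

```python
def match_registration(sorted_matches, thresholds):
    """Cascaded threshold check over (distance, registration_id) pairs,
    sorted ascending by distance -- a sketch of steps S223-S226.
    `thresholds` holds the four increasing distance thresholds (values
    are not given in the patent).  Returns the matched registration ID,
    or None to fall through to breed classification (step S23)."""
    if not sorted_matches:
        return None
    min_dist, min_id = sorted_matches[0]
    # S223: an unambiguous match below the tightest threshold.
    if min_dist < thresholds[0]:
        return min_id
    # S224-S226: as the allowed distance grows, demand that the top-k
    # nearest database entries all agree on one registration ID.
    for k, threshold in zip((2, 3, 4), thresholds[1:]):
        if min_dist < threshold:
            top_ids = {rid for _, rid in sorted_matches[:k]}
            if len(top_ids) == 1:
                return min_id
    return None
```

The design choice the steps encode: a looser distance is acceptable only if it is corroborated by a growing consensus among the nearest neighbours.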
In the above pet recognition method, step S221 further includes, before feature vector extraction: performing face correction on the pet face in the picture with a correction model.
In the above pet recognition method, the correction model uses the MTCNN network model, which corrects the pet face by locating six landmark points of the face: left ear, right ear, left eye, right eye, nose, and forehead.
A pet face recognition system includes a server and a client. The server includes a database, a correction module, a classification module, and an identification module; the client includes a judgment module, wherein:
the judgment module judges whether a pet face is present in the acquired picture;
the database stores the registration IDs and identity information of registered pets;
the correction module performs face correction on the pet face in the picture;
the identification module judges whether the pet in the picture has been registered, and returns the corresponding registration ID to the client when it has;
the classification module judges the breed to which the pet in the picture belongs.
Compared with the prior art, the invention has the following advantages: 1. the step of checking whether the picture contains a target (i.e. a pet face) runs at the APP end, which reduces the load on the server and improves recognition efficiency; 2. the mobile_net_v2 network model used as the judgment model is compact, so the final model file is small and has little impact on the APP; 3. before feature extraction, the face is first corrected, which improves final recognition accuracy compared with using the directly located dog face picture.
Detailed description of the invention
Fig. 1 is a flow diagram of the registration function in embodiment one of the present invention;
Fig. 2 is a structural diagram of the mobile_net_v2 network in embodiment one of the present invention;
Fig. 3 is a method flow diagram of pet face recognition in embodiment one of the present invention;
Fig. 4 is a flow diagram of the classification function in embodiment one of the present invention;
Fig. 5 is a flow diagram of the identity recognition function in embodiment one of the present invention;
Fig. 6 is a composite structural diagram of the sub-networks of the MTCNN network model in embodiment one of the present invention;
Fig. 7 is a method flow diagram of the classification process of embodiment one of the present invention;
Fig. 8 is a system structure diagram of pet face recognition in embodiment two of the present invention.
Reference numerals: server 1; database 11; correction module 12; classification module 13; identification module 14; detection module 15; client 2; judgment module 21.
Specific embodiment
Embodiment one
In recent years, with the development of deep learning, image recognition accuracy has improved significantly; for example, face recognition accuracy has reached 99.8%. Deep-learning-based image recognition completes feature extraction and classification automatically, is simple to use, easy to industrialize, and highly accurate, laying a solid foundation for pet face image recognition. This embodiment proposes a pet recognition method using existing deep learning techniques.
The present embodiment mainly consists of a server end and an APP end (client). A user performs pet face recognition by installing the APP on a mobile terminal. The APP end judges whether the picture contains a pet, and the picture is uploaded to the server only when it does, reducing server load.
The pet face recognition process mainly includes pet classification and pet identity recognition, and the pets of this embodiment are mainly canine. Pet classification performs breed identification on the pet dog in the user's uploaded picture, while identity recognition targets pet dogs already registered in the system; a pet dog must therefore be registered before identity recognition can be performed on it.
As shown in Fig. 1, the registration method of this embodiment is executed at the server end and includes the following steps:
A. A short video and registration information entered by the user are received. The short video is split into frames with opencv (Open Source Computer Vision Library) and converted to RGB color space to obtain multiple video frames. The registration information includes basic details such as the dog's age and temperament and the owner's contact information.
B1. If the total number of video frames does not meet the frame-count requirement (11 frames are taken as the standard here), registration failure is returned directly. Otherwise the video frames are detected in turn by the detection model, and the pet dog face is cropped out of every frame whose detection probability exceeds the detection threshold. B2. The frames cropped in step B1 are scaled to a preset size, preferably 160x160, to obtain pet face video frames, and all scaled frames are put into a list. The detection threshold may be 60%, in which case the face is cropped from every frame whose pet-face probability exceeds 60%.
C1. Several pet dog face video frames (preferably 11 here) are selected, and the recognition model extracts a 512-dimensional feature vector from each. Euclidean distance is then used to judge whether this is a repeated registration: if an existing registration ID is found, the feature vectors of the selected frames are used to update that registration ID and an update-success message is returned; otherwise the dog is considered newly registered and step C2 is executed. C2. If the number of pet dog face video frames exceeds the quantity threshold, step C3 is executed; otherwise registration failure is returned. C3. A registration ID is generated, 512-dimensional feature vectors are extracted from the remaining pet dog face video frames, and all feature vectors and the registration information are bound to the registration ID and stored in the database. Besides the feature vectors, the information bound to a registration ID includes other details of the dog, such as its name and breed and the owner's name and phone number, which helps recover lost dogs and facilitates the work of dog management staff.
The frame-quantity check of step C2 guarantees that enough pet face pictures are registered for a new dog, which in turn guarantees later lookup accuracy. The quantity threshold can be chosen by the engineering staff according to the actual situation; for example, if 25 pet face video frames ensure adequate later accuracy, the quantity threshold can be 25.
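The frame-selection part of steps B1-B2 can be sketched as a simple filter. This is an illustrative sketch under stated assumptions: the real system crops faces with a detection model and opencv, whereas here `frame_probs`, the function name, and `min_frames` are hypothetical names, and only the probability filtering and count check are modeled.

```python
def select_face_frames(frame_probs, detect_threshold=0.60, min_frames=11):
    """Keep only the frames whose pet-face detection probability exceeds
    the detection threshold (0.6 in the embodiment); the real pipeline
    then crops each detected face and rescales it to 160x160.
    `frame_probs` maps frame index -> detection probability.  Returns
    the kept frame indices, or None for 'registration failed' when too
    few frames qualify."""
    kept = [i for i, p in sorted(frame_probs.items()) if p > detect_threshold]
    return kept if len(kept) >= min_frames else None
```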
The detection model of this embodiment is trained with the official TensorFlow object detection module using transfer learning; the module includes various classic detection networks and pre-trained models. Here ssd_inception_v2_coco is downloaded and used: a detection model whose basic network is an SSD with inception_v2, pre-trained on the COCO dataset.
That is, the VGG-16 basic network is replaced with the inception_v2 network. The main improvements of inception_v2 are the use of Batch Normalization and the replacement of 5x5 convolutions with two 3x3 convolutions. SSD can be seen as an enhanced RPN network: instead of predicting on a feature map of a single scale, it predicts on feature maps of multiple scales simultaneously.
The training environment of the detection model is GeForce GTX 1080Ti, cuda9.0, cudnn-7, tensorflow-gpu-1.10.0.
Registering pet dog identities facilitates the dog-keeping management work of administrative staff. For example, the administrative department can require every dog-keeping user to upload a qualifying short video to this platform to register a unique identity ID for the dog; dog-keeping registration and leash licenses can also be issued through this registration mechanism, so that every pet dog has its own identity ID. When a stray or abandoned pet dog is found, administrative staff can photograph or scan it to look up its information; if the dog has been registered, the staff can retrieve the dog's and its owner's information through this system, which greatly facilitates their management work.
Since each pet has a unique ID, the responsible person can be found when a pet is lost or abandoned, or when an abandoned pet injures someone. This lightens the workload of management personnel, improves management efficiency, and makes urban pet keeping more civilized and orderly.
Further, when using this system for a lookup, the user uploads an existing picture or shoots one on the spot through the APP client installed on the mobile terminal, and the APP end obtains the picture. In this embodiment the judgment model that decides whether the picture contains a pet dog face is embedded directly in the APP end. With the embedded judgment model, the APP end determines whether the picture contains a dog; only when the judged dog probability exceeds 80% is the picture considered to contain a dog and uploaded to the server for further identity recognition or breed identification. This greatly improves recognition efficiency and effectively reduces server load.
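The on-device gate described above reduces to a one-line rule. The sketch below is illustrative; the function and parameter names are assumptions, and only the 80% gate is from the embodiment:

```python
def filter_uploads(pictures, gate=0.80):
    """Client-side gate: of the candidate pictures, upload to the server
    only those whose on-device judgment model gives a 'contains a dog'
    probability above the gate (80% in the embodiment); the rest are
    discarded locally, saving server load.
    `pictures` is a list of (picture_id, dog_probability) pairs."""
    return [pid for pid, p in pictures if p > gate]
```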
Specifically, as shown in Fig. 2, the deep learning algorithm of the judgment model in this embodiment uses Google's open-source mobile_net_v2 network. The mobile_net_v2 network model is trained at the server end, and the trained model is converted to a tflite file that serves as the offline model of the APP end, so that the APP end can directly detect offline whether a picture contains a dog, significantly improving judgment efficiency. The mobile_net_v2 network model has two main innovations, inverted residuals and linear bottlenecks, which let it retain adequate accuracy while greatly shrinking the network; the final model file is only 4.2 MB, with very little impact on the size of the APP. The training data for the mobile_net_v2 network model can be the cat-and-dog dataset used in the Kaggle competition, which contains 12,500 dog pictures and 12,500 cat pictures covering a fairly comprehensive range of breeds. Those skilled in the art will understand that the more numerous and varied the training samples, the stronger the recognition capability of the trained network; using a large dataset with comprehensive breed coverage therefore yields a mobile_net_v2 network model with higher judgment accuracy after training.
As shown in Fig. 3, when the APP end judges that the picture contains a pet face, the picture is uploaded to the server for further processing:
S1. The picture is input to the correction model, which performs face correction on the pet face in the picture;
S2. The face-corrected picture is input to the recognition model, which extracts the picture's feature vector; the database is searched based on this feature vector for a corresponding registration ID, which is returned if it exists; otherwise a lookup failure is returned or step S3 is executed;
S3. The classification model is invoked on the uploaded picture, either after the lookup of step S2 fails or directly;
S4. The classification model estimates the breed of the pet in the picture and the corresponding likelihood probability.
It should be noted that identity recognition can be carried out together with the pet classification function or separately, as determined by the specific configuration of this embodiment or by the user. When carried out together, identity recognition is performed on the picture first; if the pet is recognized as registered, the registration ID is returned, otherwise classification is performed. As shown in Figs. 4 and 5, when carried out separately, the identity recognition function performs identity recognition on the picture and returns the registration ID if the pet is recognized as registered, otherwise it returns a recognition failure; the classification function directly performs breed identification on the pet in the picture.
Further, in step S1, the correction model uses the MTCNN network model, which corrects the pet face by locating six landmark points of the face: left ear, right ear, left eye, right eye, nose, and forehead. Dogs come in many varieties and most of the face is covered in hair, so traditional methods struggle to find stable feature points; this embodiment corrects the dog face with an MTCNN network model.
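One ingredient of landmark-based face correction is rotating the face so the eye line is horizontal. The patent does not spell out its warping procedure, so the following is a minimal sketch of that one step, using the left- and right-eye points from the six-landmark detector; the function name is an assumption.

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Return the rotation angle (degrees) that would make the line
    through the two eye landmarks horizontal.  Points are (x, y) in
    image coordinates (y grows downward); a full corrector would apply
    this rotation, possibly with scaling, to the cropped face."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```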
As shown in Fig. 6, the MTCNN network model is composed of three network structures, P-Net, R-Net, and O-Net:
Proposal Network (P-Net): mainly obtains candidate windows for the dog face region and their bounding-box regression vectors. The candidate windows are calibrated with the bounding-box regression, and highly overlapping candidate boxes are then merged by non-maximum suppression (NMS);
Refine Network (R-Net): again uses bounding-box regression and NMS to remove false-positive regions;
Output Network (O-Net): this layer has one more convolutional layer than R-Net, so its results are finer. It works like R-Net but supervises the dog face region more closely, and it also outputs the six landmark points: left ear, right ear, left eye, right eye, nose, and forehead.
The MTCNN network model is trained on the Stanford University dataset of various dog breeds, annotated with landmark points.
Compared with the dog face located directly in the picture, the dog face corrected by the MTCNN network gives a clear improvement in accuracy, greatly increasing the accuracy of subsequent dog face recognition.
Further, step S2 specifically includes:
S21. The picture is scaled to a preset size, which here may be 160x160, to obtain a scaled picture, and the recognition model extracts a 512-dimensional feature vector from the scaled picture;
S22. The distance between this feature vector and each corresponding feature vector in the database is computed using cosine distance or Euclidean distance; the distances are sorted in ascending order, with the corresponding registration IDs arranged accordingly;
S23. Whether the minimum distance is below the first distance threshold is judged; if so, the corresponding registration ID is output;
It should be noted that when the minimum distance exceeds the first distance threshold, it cannot be determined with full confidence that the dog in the picture is unregistered. To improve judgment accuracy, this embodiment further includes, after step S23:
S24. When the minimum distance exceeds the first distance threshold, whether it is below the second distance threshold is judged; if so, whether the registration IDs of the minimum and second-smallest distances are the same is judged, and if they are, that registration ID is output; otherwise step S25 is executed;
S25. Whether the minimum distance is below the third distance threshold is judged; if so, whether the registration IDs of the minimum, second-smallest, and third-smallest distances are all the same is judged, and if they are, that registration ID is output; otherwise step S26 is executed;
S26. Whether the minimum distance is below the fourth distance threshold is judged; if so, whether the registration IDs of the minimum, second-smallest, third-smallest, and fourth-smallest distances are all the same is judged, and if they are, that registration ID is output; otherwise step S3 is executed.
This multi-distance, multi-threshold judgment mode improves the accuracy of the decision.
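The distance computation and ranking of step S22 can be sketched with either of the two metrics the text names. This is a pure-Python illustration (the real system works on 512-dimensional facenet embeddings; the function names and the dict-based database are assumptions):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def euclidean_distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_matches(query, database, metric=euclidean_distance):
    """Compare the query embedding against every registered embedding
    and return (distance, registration_id) pairs sorted ascending by
    distance, as step S22 describes.  `database` maps registration
    ID -> embedding; any vector length works in this sketch."""
    return sorted((metric(query, vec), rid) for rid, vec in database.items())
```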
The recognition model here uses the facenet network model. The dog face is first input to the facenet network, which generates a 512-dimensional feature vector representing the current dog face; the distance (here Euclidean distance) to each dog face feature vector in the database is then computed one by one and the distances are sorted. If the minimum distance is below a given threshold, the current dog is considered the same dog as the one at that distance; many dogs similar to the current dog can also be found this way.
The core idea of FaceNet is to make the distance within a class as small as possible and the distance between classes as large as possible. Its biggest innovation compared with traditional classification models is its loss function, the triplet loss, which requires three input dog face pictures per training step. To guarantee convergence speed, the most distant picture of the same dog and the nearest picture of a different dog are selected for training.
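The triplet loss described above can be written out directly. A pure-Python sketch on toy 2-d embeddings; the margin value is an assumption, not from the patent:

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """FaceNet-style triplet loss: pull the anchor embedding toward a
    picture of the same dog (positive) and push it away from a
    different dog (negative) by at least `margin`, using squared
    Euclidean distances.  Zero loss means the triplet already
    satisfies the margin."""
    d = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)
```

Hard-triplet mining, as the text notes, would pick the positive with the largest `d(anchor, positive)` and the negative with the smallest `d(anchor, negative)` in each batch.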
In addition, the training environment of the recognition model is likewise GeForce GTX 1080Ti, cuda9.0, cudnn-7, tensorflow-gpu-1.10.0.
As shown in Fig. 7, step S4 specifically includes:
S41. The picture is preprocessed: it is scaled to 299x299, expanded to 4 dimensions, and normalized. The classification model and the breed names are then loaded, and the classification model produces the candidate breeds of the pet in the picture and the corresponding likelihood probabilities;
S42. When the maximum likelihood probability exceeds the first probability threshold, the breed name with the maximum likelihood probability is returned to the APP end; when the maximum likelihood probability is below the first probability threshold but above the second probability threshold, the breed names of the top three likelihood probabilities are returned to the APP end; otherwise recognition failure is returned.
Naturally, the first probability threshold is greater than the second; for example, the first probability threshold is 90% and the second 70%. If for the pet face in a picture the model's judgment is Samoyed 75%, Teddy 80%, Golden Retriever 96%, then since 96% > 90%, "Golden Retriever" is returned to the APP end; its probability can also be output, e.g. the result "Golden Retriever, 96%". If the model's judgment is Samoyed 75%, Teddy 80%, Golden Retriever 88%, Tibetan Mastiff 50%, then the top three breeds by probability are returned to the APP end: "Golden Retriever 88%, Teddy 80%, Samoyed 75%". In addition, when the maximum likelihood probability is below the first probability threshold and above the second, but fewer than three breeds fall between the first and second probability thresholds, only the breeds that lie entirely between the two thresholds are returned.
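The decision rule of steps S41-S42 and the worked example above can be sketched as follows. The threshold values follow the embodiment's example (90% / 70%); the function name and the dict input are illustrative assumptions:

```python
def classify_result(breed_probs, t1=0.90, t2=0.70):
    """Return the top-1 breed when its probability clears the first
    threshold, the top-3 breeds when the best probability falls between
    the two thresholds (keeping only breeds that themselves exceed the
    second threshold), else None for 'recognition failed'.
    `breed_probs` maps breed name -> likelihood probability."""
    ranked = sorted(breed_probs.items(), key=lambda kv: kv[1], reverse=True)
    best_breed, best_p = ranked[0]
    if best_p > t1:
        return [(best_breed, best_p)]
    if t2 < best_p <= t1:
        return [(b, p) for b, p in ranked[:3] if p > t2]
    return None
```

On the embodiment's first example this yields only "Golden Retriever, 96%"; on the second it yields the three breeds between the thresholds.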
The classification model of this embodiment applies the Inception_resnet_v2 network provided in the tensorflow source code, using transfer learning from the model pre-trained on ImageNet. Since there are many dog breeds (the AKC standard used here lists 149), combining the Inception_resnet_v2 model with transfer learning greatly improves recognition accuracy.
The training environment of the classification model is a GeForce GTX 1080Ti with CUDA 9.0, cuDNN 7, and tensorflow-gpu 1.10.0.
The data set is preprocessed before training: the English breed names of the Stanford data set and the Kaggle data set are translated into Chinese names, the data are cleaned, and data sets of the same breed listed under different aliases are merged. All videos are cut into frames, the frames are screened manually, and the selected frames are placed into the folder of the corresponding dog breed.
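The alias-merging part of the data cleaning can be sketched as follows, assuming the one-folder-per-breed layout described above; the function name, the example alias map, and the collision handling are illustrative assumptions:

```python
import pathlib
import shutil

def merge_alias_dirs(root, alias_map):
    """Merge per-breed image folders whose names are aliases of the same
    breed, e.g. {"Alsatian": "German Shepherd"} moves every file from
    root/Alsatian into root/German Shepherd."""
    root = pathlib.Path(root)
    for alias, canonical in alias_map.items():
        src, dst = root / alias, root / canonical
        if not src.is_dir():
            continue
        dst.mkdir(exist_ok=True)
        for f in src.iterdir():
            target = dst / f.name
            if target.exists():  # keep both files when names collide
                target = dst / f"{alias}_{f.name}"
            shutil.move(str(f), str(target))
        src.rmdir()  # drop the now-empty alias folder
```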
It should be noted that the pet here can be a canine pet, a feline pet, and so on. This embodiment can target one kind of pet specifically, such as canines, or can cover a variety of pets together.
Although the training sets used for the models of this embodiment are the 120-breed classified dog data set from Stanford University, the data set provided in the Kaggle competition, and several hundred videos of dogs, actual deployment is not limited to the aforementioned training sets, nor to any particular number of breeds; those skilled in the art may select other training sets to train each model.
Embodiment two
As shown in Figure 8, another embodiment provided by the present invention is a pet face recognition system, including a server 1 and a client 2. The server 1 includes a database 11, a rectification module 12, a categorization module 13, and an identification module 14; the client 2 includes a judgment module 21, wherein:
The judgment module 21 is used for judging whether a pet face exists in the acquired picture; only when a pet face is judged to exist does the client 2 upload the picture to the server 1 for further recognition and detection;
The database 11 is used for storing the registration IDs and identity information of registered pets;
The rectification module 12 is used for performing face rectification on the pet face in the picture;
The identification module 14 is used for judging whether the pet in the picture has been registered, and returning the corresponding registration ID to the client after it is judged to be registered;
The categorization module 13 is used for judging the pet breed to which the pet in the picture belongs.
Further, the server 1 also includes a detection module 15, which detects the video uploaded by a user and cuts out video frames meeting the requirements, so as to register or update information for the pet in the video. The identification module 14 is further used for judging, from the video frames, whether the pet in the video has been registered; if so, the corresponding registration ID is updated; otherwise, provided the number of video frames meets the requirements, a new ID is registered for the pet in the video.
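The identification module's lookup against the database of registered pets (cosine distance over 512-dimensional feature vectors, as detailed in claim 6) can be sketched as follows; the distance threshold and all names here are illustrative assumptions, and the agreement cascade of claim 7 is omitted for brevity:

```python
import numpy as np

def match_registration(feature, registry, threshold=0.35):
    """registry maps registration ID -> stored 512-d feature vector.
    Returns the ID with the smallest cosine distance to `feature` when
    that distance is below `threshold`, otherwise None (search failure)."""
    feature = feature / np.linalg.norm(feature)
    best_id, best_d = None, float("inf")
    for reg_id, vec in registry.items():
        v = vec / np.linalg.norm(vec)
        d = 1.0 - float(np.dot(feature, v))  # cosine distance
        if d < best_d:
            best_id, best_d = reg_id, d
    return best_id if best_d < threshold else None
```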
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art to which the present invention belongs can make various modifications or additions to the described embodiments, or substitute them in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Although terms such as server 1, database 11, rectification module 12, categorization module 13, identification module 14, detection module 15, client 2, and judgment module 21 are used frequently herein, the possibility of using other terms is not excluded. These terms are used only for the convenience of describing and explaining the essence of the present invention; construing them as imposing any additional limitation would be contrary to the spirit of the present invention.

Claims (10)

1. A pet recognition method, characterized by comprising the following steps:
S1. acquiring a picture through the APP side, and judging whether a pet face exists in the picture; if so, executing step S2;
S2. uploading the picture to a server, performing face recognition on the pet in the picture by the server, and returning the recognition result to the APP side.
2. The pet recognition method according to claim 1, characterized in that in step S1, whether a pet face exists in the picture is judged through a judgment model.
3. The pet recognition method according to claim 2, characterized in that the judgment model uses a mobile_net_v2 network model; the mobile_net_v2 network model is trained at the server end, and the trained mobile_net_v2 network model is converted into a tflite file serving as an offline file for the APP side.
4. The pet recognition method according to any one of claims 1-3, characterized in that step S2 specifically includes:
S21. uploading the picture to the server;
S22. invoking an identification model to extract the feature vector in the picture, and searching the database on the basis of the feature vector for a corresponding registration ID; if one exists, returning the registration ID, otherwise returning a search failure.
5. The pet recognition method according to claim 4, characterized in that step S2 further includes:
S23. invoking a classification model after step S22 returns a search failure, or directly;
S24. judging, by the classification model, the possible breeds of the pet in the picture and the corresponding likelihood probability values; when the maximum likelihood probability value is higher than a first probability threshold, returning the breed name corresponding to the maximum likelihood probability value to the APP side; when the maximum likelihood probability value is lower than the first probability threshold but higher than a second probability threshold, returning the breed names corresponding to the top three likelihood probability values to the APP side; otherwise, returning a recognition failure.
6. The pet recognition method according to claim 5, characterized in that step S22 specifically includes:
S221. scaling the picture to a preset size to obtain a scaled picture, and extracting a 512-dimensional feature vector of the scaled picture by the identification model;
S222. performing a distance calculation between the feature vector and the corresponding feature vectors in the database using cosine distance or Euclidean distance;
S223. judging whether the minimum distance is smaller than a first distance threshold; if so, outputting the corresponding registration ID.
7. The pet recognition method according to claim 6, characterized in that after step S223, the method further includes:
S224. when the minimum distance is greater than the first distance threshold, judging whether the minimum distance is smaller than a second distance threshold; if so, judging whether the registration ID corresponding to the minimum distance and the registration ID corresponding to the second-smallest distance are the same; if so, outputting the registration ID, otherwise executing step S245;
S225. judging whether the minimum distance is smaller than a third distance threshold; if so, judging whether the registration IDs corresponding to the minimum distance, the second-smallest distance, and the third-smallest distance are the same; if so, outputting the registration ID, otherwise executing step S245;
S226. judging whether the minimum distance is smaller than a fourth distance threshold; if so, judging whether the registration IDs corresponding to the minimum distance, the second-smallest distance, the third-smallest distance, and the fourth-smallest distance are the same; if so, outputting the registration ID, otherwise executing step S23.
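One plausible reading of the S223-S226 cascade is sketched below: accept the nearest registered pet outright under the tightest threshold, and at looser thresholds require the top-k nearest matches to agree on the same registration ID. The handling of the ambiguous "step S245" branch (treated here as a rejection) and all names are assumptions.

```python
def cascade_match(dist_id_pairs, t1, t2, t3, t4):
    """dist_id_pairs: list of (distance, registration_id), with t1 < t2 < t3 < t4.
    Accept at distance < t1 outright; otherwise, in the band below t2/t3/t4,
    require the 2/3/4 nearest matches to share one registration ID."""
    ranked = sorted(dist_id_pairs)
    d_min, best_id = ranked[0]
    if d_min < t1:
        return best_id
    for k, t in ((2, t2), (3, t3), (4, t4)):
        if d_min < t:
            if len({rid for _, rid in ranked[:k]}) == 1:
                return best_id
            return None  # nearest matches disagree
    return None  # too far from every registered pet
```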
8. The pet recognition method according to claim 6, characterized in that step S221 further includes, before extracting the feature vector: performing face rectification on the pet face in the picture through a rectification model.
9. The pet recognition method according to claim 8, characterized in that the rectification model uses an MTCNN network model, and the MTCNN network model performs face rectification on the pet face by locating 6 landmark points of the pet face, including the left ear, the right ear, the left eye, the right eye, the nose, and the forehead.
10. A pet face recognition system, characterized by including a server and a client, the server including a database, a rectification module, a categorization module, and an identification module, and the client including a judgment module, wherein:
the judgment module is used for judging whether a pet face exists in the acquired picture;
the database is used for storing the registration IDs and identity information of registered pets;
the rectification module is used for performing face rectification on the pet face in the picture;
the identification module is used for judging whether the pet in the picture has been registered, and returning the corresponding registration ID to the client after it is judged to be registered;
the categorization module is used for judging the pet breed to which the pet in the picture belongs.
CN201910449924.2A 2019-05-28 2019-05-28 Pet recognition algorithms and system Pending CN110334593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910449924.2A CN110334593A (en) 2019-05-28 2019-05-28 Pet recognition algorithms and system


Publications (1)

Publication Number Publication Date
CN110334593A true CN110334593A (en) 2019-10-15

Family

ID=68140233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910449924.2A Pending CN110334593A (en) 2019-05-28 2019-05-28 Pet recognition algorithms and system

Country Status (1)

Country Link
CN (1) CN110334593A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956149A (en) * 2019-12-06 2020-04-03 中国平安财产保险股份有限公司 Pet identity verification method, device and equipment and computer readable storage medium
CN111191066A (en) * 2019-12-23 2020-05-22 厦门快商通科技股份有限公司 Image recognition-based pet identity recognition method and device
CN111447410A (en) * 2020-03-24 2020-07-24 安徽工程大学 Dog state identification monitoring system and method
CN111753697A (en) * 2020-06-17 2020-10-09 新疆爱华盈通信息技术有限公司 Intelligent pet management system and management method thereof
CN113065473A (en) * 2021-04-07 2021-07-02 浙江天铂云科光电股份有限公司 Mask face detection and body temperature measurement method suitable for embedded system
TWI775077B (en) * 2019-11-25 2022-08-21 大陸商支付寶(杭州)信息技術有限公司 Feedstock identification method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107509655A (en) * 2017-08-30 2017-12-26 杨旭升 A kind of animal identification identification management method and system
CN108304882A (en) * 2018-02-07 2018-07-20 腾讯科技(深圳)有限公司 A kind of image classification method, device and server, user terminal, storage medium
CN108509976A (en) * 2018-02-12 2018-09-07 北京佳格天地科技有限公司 The identification device and method of animal
CN108764109A (en) * 2018-05-23 2018-11-06 西安理工大学 It is a kind of that dog system and method is sought based on dog face image identification technology
CN108921026A (en) * 2018-06-01 2018-11-30 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of animal identification
CN109544589A (en) * 2018-11-24 2019-03-29 四川川大智胜系统集成有限公司 A kind of video image analysis method and its system
CN109548691A (en) * 2018-12-26 2019-04-02 北京量子保科技有限公司 A kind of pet recognition methods, device, medium and electronic equipment
CN109582810A (en) * 2018-12-03 2019-04-05 泰芯科技(杭州)有限公司 A kind of dog dog management system and its implementation based on deep learning
CN109583400A (en) * 2018-12-05 2019-04-05 成都牧云慧视科技有限公司 One kind is registered automatically without intervention for livestock identity and knows method for distinguishing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴广伟: "Research and Implementation of a Lightweight Convolutional Neural Network Based on Mobile Terminals", China Master's Theses Full-text Database, Information Science and Technology Series *
陈晓鹏: "Video-based Multispectral Palm Feature Recognition System", China Master's Theses Full-text Database, Information Science and Technology Series *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191015)