CN109858435A - Lesser panda individual identification method based on facial images - Google Patents


Info

Publication number
CN109858435A
Authority
CN
China
Prior art keywords
lesser panda
face image
lesser
face
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910086446.3A
Other languages
Chinese (zh)
Other versions
CN109858435B (en)
Inventor
赵启军
何柒
张志和
侯蓉
陈鹏
齐敦武
吴孔菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING
Sichuan University
Original Assignee
CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU RESEARCH BASE OF GIANT PANDA BREEDING and Sichuan University
Priority to CN201910086446.3A
Publication of CN109858435A
Application granted
Publication of CN109858435B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a lesser panda individual identification method based on facial images, in the fields of computer application technology and computer vision. The steps of the invention include: 1. locating the lesser panda facial image recognition region in a photograph; 2. performing key point detection on the lesser panda facial image recognition region; 3. aligning the lesser panda facial image; 4. extracting features from the aligned facial image; 5. comparing the features with pre-registered samples to identify the individual lesser panda in the photograph. With the method of the invention, only a frontal lesser panda face image, or a slightly deflected face image containing both eyes and the nose, needs to be supplied; no manual annotation is required, and the individual lesser panda is identified automatically. The method is non-intrusive, sustainable, easy to implement and low-cost.

Description

Lesser panda individual identification method based on facial images
Technical field
The invention belongs to the fields of computer application technology and computer vision, relates to biometric feature recognition and analysis of the lesser panda, and in particular to a lesser panda individual identification method based on facial images.
Background technique
The lesser panda (Ailurus fulgens) is a rare species endemic to the Himalaya-Hengduan mountain region and has considerable research value. It is currently distributed only in China, Nepal, India, Bhutan and Myanmar. In China it occurs mainly in Sichuan, Yunnan and Tibet, with Sichuan holding the largest population. The lesser panda is classified as endangered by the IUCN, is listed in CITES Appendix I, and is designated a Class II key protected animal in China.
To save the lesser panda and improve its precarious status, targeted and timely protection and management are needed, which require a basic census of its population; the first task of such a census is individual identification. Traditional animal individual identification methods mainly include: distinguishing individuals by attributes such as body shape, coat colour, markings, distinctive individual features (such as missing limbs), calls, sex, habits, or DNA; or artificial marking, such as threading, branding, dye marking, and subcutaneous implantation of microelectronic chips. However, traditional animal individual identification methods suffer from drawbacks such as high cost, complicated operation, injury to the animal, and poor accuracy and stability.
In recent years, with advances in image and video acquisition and processing, the application of computer technology to animal protection has attracted increasing attention and yielded a series of results, such as individual identification of African penguins, individual identification of pigs from overhead views, and automatic individual identification of Siberian tigers based on BP neural networks.
The above prior art is limited in that it places relatively high demands on the quality of animal photographs; recognition accuracy on animal images captured in video surveillance scenes is low, and the animals must be photographed at close range, which disturbs them, is difficult to operate and is costly.
Summary of the invention
The object of the invention is to make full use of the biological characteristics of the lesser panda, overcome the above shortcomings of the prior art, and provide a lesser panda individual identification method based on facial images.
In order to achieve the above object, the invention provides the following technical solutions:
A lesser panda individual identification method based on facial images, the steps of which include:
S1: generating candidate frames of different scales from a photograph, and locating the lesser panda facial image recognition region among the candidate frames;
S2: performing key point detection on the lesser panda facial image recognition region using a lesser panda facial key point recognition model;
S3: fitting the proportions of the parts of the lesser panda facial image recognition region, and cropping according to the proportions to obtain an aligned lesser panda facial image;
S4: extracting features from the aligned lesser panda facial image;
S5: comparing the features with pre-registered samples to identify the individual lesser panda in the photograph.
The specific steps of locating the lesser panda facial image recognition region among the candidate frames in step S1 are:
S11: generating multiple candidate frames on the photograph with an image pyramid algorithm;
S12: inputting the multiple candidate frames into a pre-trained candidate frame screening model to screen them;
S13: selecting, by screening, the candidate frames that contain part of the lesser panda facial image recognition region from the multiple candidate frames.
The training process of the candidate frame screening model is specifically:
S21: capturing in advance one image containing a lesser panda face, and marking the position coordinates of the lesser panda face on that image;
S22: capturing multiple images containing lesser panda faces to form an image set;
S23: cropping positive and negative samples from the image set according to the position coordinates to generate a training set;
S24: training the candidate frame screening model on the training set using a convolutional neural network training method.
A positive sample is an image whose overlap value between the cropped region and the marked region is greater than a maximum threshold, and a negative sample is an image whose overlap value between the cropped region and the marked region is less than a minimum threshold. The overlap value is defined by the formula:
IoU = Area(cropped ∩ groundtruth) / Area(cropped ∪ groundtruth)
where IoU is the overlap value, Area_cropped is the area of the cropped region, and Area_groundtruth is the area of the labelled target region.
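A minimal sketch of this overlap computation, assuming axis-aligned boxes given as (x1, y1, x2, y2); the threshold values mirror those used later in Embodiment 1 and are otherwise illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union (overlap value) of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_crop(crop_box, gt_box, max_thresh=0.6, min_thresh=0.3):
    """Label a cropped region as a positive or negative training sample, or neither."""
    value = iou(crop_box, gt_box)
    if value > max_thresh:
        return "positive"
    if value < min_thresh:
        return "negative"
    return None  # overlap between the two thresholds: neither positive nor negative here
```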
The proportions of the parts of the lesser panda facial image recognition region are fitted, and the aligned lesser panda facial image is obtained by cropping according to the proportions. The fitting formula is:
β' = arg min(||Xβ − y||²)
where X is an n × m matrix, each row of which contains the m characteristic values of one training sample, n is the number of samples, and y contains the true eye-to-eye distance of each sample; β is the target to be optimised, i.e. the ratio of each part to the eye distance, and β' denotes the optimal solution of β.
The steps of performing key point detection on the lesser panda facial image recognition region using the lesser panda facial key point recognition model include:
S31: training the lesser panda facial key point recognition model on a lesser panda facial feature image set;
S32: inputting the lesser panda facial region image into the lesser panda facial key point recognition model, which outputs a heat map containing the key point information;
S33: choosing a binarisation threshold and binarising the key point heat map;
S34: treating the connected regions centred on the three key points as three channels, and computing, for each channel, the centre of the coordinates of the region whose value is 1.
The key point heat map is defined by the formula:
heatmap(j, i, k) = A · exp(−(distance_x(k)² / (2·sigma_x²) + distance_y(k)² / (2·sigma_y²)))
where heatmap(j, i, k) is the key point heat map, j and i range from 1 to the width and height of the original image, k ranges from 1 to 3, distance_x(k) and distance_y(k) are the distances from the current position (i, j) to the k-th key point in the x and y directions, A is the amplitude, and sigma_x and sigma_y are the standard deviations.
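A sketch of how a three-channel key point heat map of this Gaussian form could be generated with NumPy; the key point coordinates, amplitude and standard deviations below are illustrative values, not parameters taken from the patent.

```python
import numpy as np

def keypoint_heatmap(height, width, keypoints, A=1.0, sigma_x=3.0, sigma_y=3.0):
    """Build a (height, width, 3) heat map; keypoints is a list of three (x, y) centres."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width, len(keypoints)), dtype=np.float32)
    for k, (kx, ky) in enumerate(keypoints):
        dist_x = xs - kx  # distance_x(k) at every position
        dist_y = ys - ky  # distance_y(k) at every position
        heatmap[:, :, k] = A * np.exp(-(dist_x**2 / (2 * sigma_x**2)
                                        + dist_y**2 / (2 * sigma_y**2)))
    return heatmap

# Example: a 96x96 face crop with left eye, right eye and nose centres
hm = keypoint_heatmap(96, 96, [(30, 40), (66, 40), (48, 62)])
```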
The specific steps of extracting features from the aligned lesser panda facial image recognition region are:
S41: training a lesser panda base feature extraction model on a preset lesser panda facial region image set;
S42: inputting the aligned lesser panda facial image recognition region into the lesser panda base feature extraction model, which outputs raw features;
S43: mapping the raw features into the frontal-face feature space via a residual network module.
The residual network module is built from the mapping coefficient and the raw features of frontal-profile sample pairs. The relationship between the mapping coefficient and the raw features of a frontal-profile sample pair is expressed as:
ψ(x_profile) + w(x_profile)·R(ψ(x_profile)) = ψ(x_frontal)
where ψ(x_profile) is the raw feature vector of a profile (side-face) image, w(x_profile) is the mapping coefficient, ψ(x_frontal) is the raw feature vector of the corresponding frontal image, and R denotes the mapping applied to the profile feature vector.
A lesser panda individual identification device based on facial images comprises at least one processor and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the method of any one of claims 1 to 9.
Compared with the prior art, the invention has the following beneficial effects:
1. With the method of the invention, only a frontal lesser panda face image, or a slightly deflected face image containing both eyes and the nose, needs to be supplied; no manual annotation is required, and the individual lesser panda is identified automatically.
2. Since only a small number of feature points need to be extracted, the method imposes few requirements or restrictions on photo quality and can automatically identify individual lesser pandas in ordinary video surveillance images.
3. The invention can be used for the daily management of lesser pandas and for building a basic lesser panda image database, and is non-intrusive, sustainable, easy to use and low-cost: (a) non-intrusive: the method is contactless; only one photograph needs to be taken, causing no psychological or physiological harm to the lesser panda; (b) sustainable: the registry can be updated continuously, and identity information is not lost over time; (c) easy to use: it can be integrated into electronic devices such as mobile phones and tablet computers; only one photograph is needed, and no additional equipment is required; (d) low-cost: once developed it can be used permanently; even when new lesser panda individuals are born, no extra cost is incurred beyond photographing and registering the new individual.
Description of the drawings:
Fig. 1 is a flow chart of the lesser panda individual identification method based on facial images of the invention;
Fig. 2 is a lesser panda image containing multiple candidate frames in Embodiment 1;
Fig. 3 is the photograph containing the lesser panda facial region obtained after correcting the candidate frames in Embodiment 1;
Fig. 4 is the lesser panda facial region image cropped according to the candidate frame in Embodiment 1;
Fig. 5 is the heat map after binarisation in Embodiment 1;
Fig. 6 is the effect of placing the binarised key points onto the original image in Embodiment 1;
Fig. 7 is the image after key point alignment in Embodiment 1;
Fig. 8 is a schematic diagram of the distances from the left and right eyes to the intersection point in Embodiment 1.
Specific embodiments
The invention is described in further detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the above subject matter of the invention to the following embodiments; all techniques realised on the basis of the content of the invention fall within the scope of the invention.
Embodiment 1
(1) Given a captured RGB image, form an image pyramid to obtain a large number of candidate frames of different scales, and locate the facial image recognition region of the lesser panda.
For an RGB image to be recognised, captured with arbitrary equipment, the first task is to determine whether a lesser panda face is present in the image and where it is located. Under normal conditions the background occupies most of the image, so the primary task of the method is to pick out, from a huge number of candidate frames, the small number of frames that contain the target. To make the algorithm as efficient as possible and suitable for real-time application, a coarse-to-fine strategy is adopted.
First stage: candidate frame generation.
First, an image set consisting of a large number of photographs containing lesser panda faces is collected. One image is chosen and the coordinates of the lesser panda face in it are marked by hand; the other images in the set are cropped according to the coordinates of the face in this image to form positive and negative samples, which together constitute the training set. During cropping, a positive sample is an image whose overlap value between the cropped region and the marked region is greater than the maximum threshold, here 0.6, and a negative sample is an image whose overlap value between the cropped region and the marked region is less than the minimum threshold, here 0.3. The overlap value IoU is expressed as shown in formula (1):
IoU = Area(cropped ∩ groundtruth) / Area(cropped ∪ groundtruth) ……(1)
where Area_cropped is the area of the cropped region and Area_groundtruth is the area of the labelled target region. The coordinates of a positive sample are the original labelled coordinates plus a random offset; negative samples are generated by placing crop boxes at random on the original image and keeping any box whose overlap value is less than the minimum threshold of 0.3. Each image generates positive and negative samples in a ratio of 1:3.
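A rough sketch of this training-crop generation, reusing the iou helper sketched after the overlap-value definition above; the crop sizes and jitter ranges are illustrative assumptions, while the 0.3 threshold and the 1:3 ratio follow the text.

```python
import random

def sample_crops(image_w, image_h, gt_box, n_neg_per_pos=3):
    """Generate one jittered positive crop and several negative crops for a labelled face box."""
    x1, y1, x2, y2 = gt_box
    w, h = x2 - x1, y2 - y1
    # Positive sample: the ground-truth coordinates plus a small random offset.
    dx, dy = random.uniform(-0.1, 0.1) * w, random.uniform(-0.1, 0.1) * h
    positive = (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
    # Negative samples: random boxes kept only if their overlap with the label is below 0.3.
    negatives = []
    while len(negatives) < n_neg_per_pos:
        size = random.uniform(0.3, 1.0) * min(image_w, image_h)
        nx = random.uniform(0, image_w - size)
        ny = random.uniform(0, image_h - size)
        cand = (nx, ny, nx + size, ny + size)
        if iou(cand, gt_box) < 0.3:  # iou(): helper from the earlier sketch
            negatives.append(cand)
    return positive, negatives
```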
After the positive and negative samples are obtained, a neural network for a simple binary classification task is trained according to the usual convolutional neural network training procedure; it is used to filter out the small number of candidate frames most likely to contain the target.
The screening process is as follows:
Capture an RGB photograph containing a lesser panda facial region;
Generate a large number of candidate frames on the captured photograph with an image pyramid algorithm; the lesser panda image containing multiple candidate frames is shown in Fig. 2;
Feed all of these candidate frames into the trained binary-classification neural network, which filters out the small number of candidate frames most likely to contain the lesser panda facial target region; these are passed to the finer operations of the next stage.
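A simplified sketch of this coarse stage: a fixed-size window slides over an image pyramid and only windows accepted by a screening model are kept. Here screen_model is a placeholder callable standing in for the trained binary-classification network, and the window size, stride, scale factor and score threshold are illustrative.

```python
import cv2
import numpy as np

def pyramid_candidates(image, screen_model, window=48, scale=0.8,
                       min_size=48, keep_thresh=0.9):
    """Slide a window over an image pyramid; return accepted boxes in original coordinates.

    image: numpy array (H, W[, C]); screen_model(patch) -> probability of containing a face.
    """
    candidates = []
    factor = 1.0
    img = image
    while min(img.shape[:2]) >= min_size:
        h, w = img.shape[:2]
        for y in range(0, h - window + 1, window // 2):
            for x in range(0, w - window + 1, window // 2):
                patch = img[y:y + window, x:x + window]
                score = screen_model(patch)
                if score > keep_thresh:
                    # Map the window back to coordinates in the original image.
                    candidates.append((x / factor, y / factor,
                                       (x + window) / factor, (y + window) / factor, score))
        factor *= scale
        img = cv2.resize(image, None, fx=factor, fy=factor)
    return candidates
```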
Second stage: candidate frame correction.
Although the candidate frames generated by the first stage remove a large amount of background and leave only a small number of frames, the binary-classification network is structurally simple and the generated frames may be inaccurate, so a new dual-task network is used to discriminate and correct the candidate frames produced by the first stage. In this stage the candidate frames from the first stage are divided into two parts. One part consists of the hard samples of the previous stage, i.e. samples that could not be classified correctly; these hard samples are added to the newly generated positive samples. The other part consists of newly generated positive samples and partial samples, where a partial sample is one whose overlap value between the cropped region and the marked region lies between the maximum and minimum thresholds. The newly generated positive samples and partial samples are cropped so that a positive sample has an overlap value with the hand-labelled region greater than the intermediate threshold of 0.75, and a partial sample has an overlap value between the maximum threshold (here 0.6) and the minimum threshold (here 0.3). For each coordinate of each cropped candidate frame, an offset relative to the corresponding hand-labelled region is computed, namely:
offset_x_i = (x_gt_i − x_new_i) / width_gt, offset_y_i = (y_gt_i − y_new_i) / length_gt, i = 1, 2
where x_new_i is the abscissa of the new crop box, x_gt_i is the abscissa of the hand-labelled box, y_new_i is the ordinate of the new crop box, and y_gt_i is the ordinate of the hand-labelled box; i = 1 corresponds to the upper-left corner of the box and i = 2 to the lower-right corner; width_gt and length_gt are the width and length of the manually marked target region. These offsets are the correction offsets used to train the candidate frames in the dual-task network. The classification task of the dual-task network further discriminates partial samples from positive samples, and through partial weight sharing the regression task and the classification task jointly yield the candidate frame containing the lesser panda facial region. The photograph containing the lesser panda facial region and the corrected candidate frame is shown in Fig. 3; the facial region image can be cropped accurately according to the candidate frame, and the cropped image is shown in Fig. 4.
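A sketch of the offset computation used as the regression target, under the assumption (reconstructed from the variable names above) that each corner offset is normalised by the width and height of the labelled box.

```python
def regression_offsets(crop_box, gt_box):
    """Normalised offsets from a crop box to the hand-labelled box.

    Boxes are (x1, y1, x2, y2); i = 1 is the upper-left corner, i = 2 the lower-right.
    """
    gx1, gy1, gx2, gy2 = gt_box
    cx1, cy1, cx2, cy2 = crop_box
    width_gt = gx2 - gx1
    length_gt = gy2 - gy1
    return (
        (gx1 - cx1) / width_gt,   # offset of x_1
        (gy1 - cy1) / length_gt,  # offset of y_1
        (gx2 - cx2) / width_gt,   # offset of x_2
        (gy2 - cy2) / length_gt,  # offset of y_2
    )
```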
(2) Key point detection.
The task of this stage is to accurately locate the key points in the lesser panda facial region within the candidate frames obtained in the previous stage, namely the left eye centre (x_le, y_le), the right eye centre (x_re, y_re) and the nose centre (x_nc, y_nc). It has been observed that lesser panda facial images have a highly salient feature: the eyes and nose are distinctly darker than the rest of the face and are round in shape. Based on this prior knowledge, an image-to-image key point localisation neural network model is designed to assist key point localisation. After the lesser panda facial region image of any candidate frame is input to the trained key point localisation neural network model, the network outputs a heat map containing the key point information, defined as shown in formula (4):
heatmap(j, i, k) = A · exp(−(distance_x(k)² / (2·sigma_x²) + distance_y(k)² / (2·sigma_y²))) ……(4)
where j and i range from 1 to the width and height of the original image, k ranges from 1 to 3, distance_x(k) and distance_y(k) are the distances from the current position (i, j) to the k-th key point in the x and y directions, and A, sigma_x and sigma_y are the amplitude and the standard deviations; together they characterise the peak value, rate of change and extent of each point of the heat map and are tunable parameters. Each channel of the heat map corresponds to the hand-labelled region of one key point. After the heat map containing the key point information is obtained, it must be processed further to obtain the key point positions. The processing is as follows:
A suitable threshold is chosen and the heat map is binarised. The heat map after binarisation is shown in Fig. 5, and the effect of placing the binarised key points back onto the original image is shown in Fig. 6.
According to the labelled heat map defined during training, points closer to the true key point have higher responses, and the value decreases with distance. After binarisation with a suitable threshold, each channel of the heat map contains the connected region centred on one of the three key points; computing the centre of the coordinates of the region whose value is 1 in each channel gives the predicted key point centre.
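A sketch of this post-processing with NumPy and SciPy: each heat-map channel is binarised and the centroid of its region of ones is taken as the predicted key point. The 0.5 threshold and the choice to keep only the largest connected region are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def heatmap_to_keypoints(heatmap, threshold=0.5):
    """heatmap: (H, W, 3) array; returns three (x, y) key point centres (or None per channel)."""
    keypoints = []
    for k in range(heatmap.shape[2]):
        binary = heatmap[:, :, k] > threshold          # binarisation
        labels, num = ndimage.label(binary)            # connected regions
        if num == 0:
            keypoints.append(None)
            continue
        # Keep the largest connected region of this channel.
        sizes = ndimage.sum(binary, labels, range(1, num + 1))
        region = labels == (np.argmax(sizes) + 1)
        cy, cx = ndimage.center_of_mass(region)        # centre of the coordinates with value 1
        keypoints.append((cx, cy))
    return keypoints
```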
(3) Facial image alignment.
After the key points of the facial image are obtained, the angle between the line joining the two eye centres and the horizontal direction is first computed, and the image is rotated so that the eye line is horizontal. From the proportional relationship, over all samples, between the forehead, left cheek, right cheek and chin of the whole face and the distance between the two eyes, the proportion occupied by each part is fitted, and the image is cropped according to these proportions to obtain the aligned image. The image after key point alignment is shown in Fig. 7. The proportions of the parts are fitted using the quantities of formula (5):
X = [x_11 … x_1m; x_21 … x_2m; … ; x_n1 … x_nm], y = (y_1, y_2, …, y_n)^T ……(5)
where each row of X represents one training sample with m = 4 values: the forehead distance, the distance from the left cheek to the left eye, the distance from the right cheek to the right eye, and the horizontal distance from the chin to the eyes; n is the number of samples; y is the true eye-to-eye distance of each sample; β is the target to be optimised, i.e. the ratio of each part to the eye distance. β' denotes the optimal solution of β, obtained from formula (6):
β' = arg min(||Xβ − y||²) ……(6)
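A sketch of the alignment step, assuming the four per-sample measurements and the eye distances have already been collected; np.linalg.lstsq solves the arg min of formula (6), and the rotation angle is the eye-line angle described above.

```python
import numpy as np

def fit_part_ratios(X, y):
    """Solve beta' = argmin ||X beta - y||^2 for the part-to-eye-distance ratios.

    X: (n, 4) array of forehead / left-cheek / right-cheek / chin measurements,
    y: (n,) array of true eye-to-eye distances.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def eye_line_angle(left_eye, right_eye):
    """Angle (degrees) between the line joining the eye centres and the horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.degrees(np.arctan2(dy, dx))
```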
(4) Facial feature extraction.
The aligned images are used for training and testing the individual identification model. There are many common feature extraction algorithms, such as LBP, HOG and PCA; the invention uses a method based on convolutional neural networks. A face recognition network model trained in advance is fine-tuned, and training yields the base network model for lesser panda identification feature extraction. In practical scenes a lesser panda generally cannot be asked to cooperate, so capturing a fully frontal face image is extremely difficult; the images actually captured usually show a face deflected by some angle. The extracted features must therefore be robust to non-extreme poses. A non-extreme pose means an angular deflection, generally less than 45°, at which both eyes and the nose are still visible.
Taking this factor into account, a residual network module is trained on the basis of the base feature extraction model and the key point localisation neural network model of step (2), using pairs of frontal and slightly deflected samples, so that the features of deflected samples are mapped into the frontal-face feature space. The concrete procedure is as follows: an arbitrary facial image is input, the key point localisation neural network model predicts the coordinates of the left and right eye centres and of the nose centre, and a mapping coefficient w(x_profile) is defined from these coordinates: a perpendicular is dropped from the nose centre onto the line joining the two eye centres, d1 and d2 are the distances from the left and right eyes to the foot of the perpendicular (the intersection point), and the mapping coefficient is computed from d1 and d2 so that it vanishes when d1 = d2. The distances from the left and right eyes to the intersection point are illustrated in Fig. 8.
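A sketch of the geometric quantities d1 and d2 and one plausible mapping coefficient. The text does not reproduce the exact formula for w; the signed, normalised difference used below is an assumption chosen only so that w = 0 when d1 = d2, matching the behaviour described in the next paragraph.

```python
import numpy as np

def mapping_coefficient(left_eye, right_eye, nose):
    """Return d1, d2 (eye-to-foot distances) and an assumed mapping coefficient w."""
    e1 = np.asarray(left_eye, dtype=float)
    e2 = np.asarray(right_eye, dtype=float)
    nc = np.asarray(nose, dtype=float)
    axis = e2 - e1                               # line joining the eye centres
    t = np.dot(nc - e1, axis) / np.dot(axis, axis)
    foot = e1 + t * axis                         # foot of the perpendicular from the nose
    d1 = np.linalg.norm(foot - e1)               # left eye to the intersection point
    d2 = np.linalg.norm(foot - e2)               # right eye to the intersection point
    w = (d1 - d2) / (d1 + d2)                    # assumed form; w = 0 for a frontal face
    return d1, d2, w
```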
The residual network module is constructed from the mapping coefficient and the raw features of frontal-profile sample pairs. The relationship between the raw frontal feature, the raw profile feature and the mapping coefficient is:
ψ(x_profile) + w(x_profile)·R(ψ(x_profile)) = ψ(x_frontal)
where ψ(x_profile) is the raw feature vector of the profile image, w(x_profile) is the mapping coefficient, ψ(x_frontal) is the raw feature vector of the frontal image, and R denotes the mapping applied to the profile feature vector. When the image is fully frontal, d1 = d2, so w(x_profile) = 0, no residual term is added, and ψ(x_profile) = ψ(x_frontal). When the image is in a non-extreme pose, deflected horizontally to the left or right, the extracted feature respectively has a residual term subtracted from or added to it by the residual network module, which adjusts the feature and maps it into the frontal-face feature space. Experiments show that this method is highly effective for lesser panda face recognition.
(5) Facial feature comparison.
For any lesser panda facial image, after the final feature is obtained via the above steps, it is compared one by one with the registered sample features in the library:
score_i = (X^T · X_r_i) / (||X||_2 · ||X_r_i||_2), i = 1, …, n
where n is the total number of registered samples, X is the feature representation, X_r_i is the feature of the i-th sample in the registry, T denotes vector transposition, and ||·||_2 denotes the 2-norm of a vector. The class of the registered sample for which the score reaches its maximum is the class label predicted by the system for the input sample. If the maximum score is less than a threshold, the individual is judged not to belong to any registered individual and is treated as a new individual.
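A sketch of this comparison stage: cosine similarity of the query feature against every registered feature, returning the best-matching identity or None (new individual) if the best score falls below a threshold; the threshold value is illustrative.

```python
import numpy as np

def identify(query, gallery_feats, gallery_labels, threshold=0.6):
    """query: (d,) feature; gallery_feats: (n, d) registered features; gallery_labels: n IDs."""
    q = query / np.linalg.norm(query)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity with every registered sample
    best = int(np.argmax(scores))
    if scores[best] < threshold:
        return None                     # below threshold: not registered, treat as new individual
    return gallery_labels[best]
```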
Using food as a lure, a total of 5300 frontal or approximately frontal (small-angle deflection or slight up-down pitch) images of 57 lesser pandas were collected indoors and outdoors at the Chengdu Research Base of Giant Panda Breeding. On this data set the method proposed by the invention achieved a recognition rate of 99.014% ± 0.15%.

Claims (10)

1. A lesser panda individual identification method based on facial images, characterised in that its steps include:
S1: generating candidate frames of different scales from a photograph, and locating the lesser panda facial image recognition region among the candidate frames;
S2: performing key point detection on the lesser panda facial image recognition region using a lesser panda facial key point recognition model;
S3: fitting the proportions of the parts of the lesser panda facial image recognition region, and cropping according to the proportions to obtain an aligned lesser panda facial image;
S4: extracting features from the aligned lesser panda facial image;
S5: comparing the features with pre-registered samples to identify the individual lesser panda in the photograph.
2. The lesser panda individual identification method based on facial images according to claim 1, characterised in that the specific steps of locating the lesser panda facial image recognition region among the candidate frames in step S1 are:
S11: generating multiple candidate frames on the photograph with an image pyramid algorithm;
S12: inputting the multiple candidate frames into a pre-trained candidate frame screening model to screen them;
S13: selecting, by screening, the candidate frames containing part of the lesser panda facial image recognition region from the multiple candidate frames.
3. The lesser panda individual identification method based on facial images according to claim 2, characterised in that the training process of the candidate frame screening model is specifically:
S21: capturing in advance one image containing a lesser panda face, and marking the position coordinates of the lesser panda face on that image;
S22: capturing multiple images containing lesser panda faces to form an image set;
S23: cropping positive and negative samples from the image set according to the position coordinates to generate a training set;
S24: training the candidate frame screening model on the training set using a convolutional neural network training method.
4. The lesser panda individual identification method based on facial images according to claim 3, characterised in that the positive samples are images whose overlap value between the cropped region and the marked region is greater than a maximum threshold, and the negative samples are images whose overlap value between the cropped region and the marked region is less than a minimum threshold, the overlap value being defined by the formula:
IoU = Area(cropped ∩ groundtruth) / Area(cropped ∪ groundtruth)
where IoU is the overlap value, Area_cropped is the area of the cropped region, and Area_groundtruth is the area of the labelled target region.
5. The lesser panda individual identification method based on facial images according to claim 1, characterised in that the proportions of the parts of the lesser panda facial image recognition region are fitted, and the aligned lesser panda facial image is obtained by cropping according to the proportions, the fitting formula being:
β' = arg min(||Xβ − y||²)
where X is an n × m matrix, each row of which represents the m characteristic values of one training sample, n is the number of samples, and y is the true eye-to-eye distance of each sample; β is the target to be optimised, i.e. the ratio of each part to the eye distance, and β' denotes the optimal solution of β.
6. The lesser panda individual identification method based on facial images according to claim 1, characterised in that the steps of performing key point detection on the lesser panda facial image recognition region using the lesser panda facial key point recognition model include:
S31: training the lesser panda facial key point recognition model on a lesser panda facial feature image set;
S32: inputting the lesser panda facial region image into the lesser panda facial key point recognition model, which outputs a heat map containing the key point information;
S33: choosing a binarisation threshold and binarising the key point heat map;
S34: treating the connected regions centred on the three key points as three channels, and computing, for each channel, the centre of the coordinates of the region whose value is 1.
7. The lesser panda individual identification method based on facial images according to claim 6, characterised in that the key point heat map is defined by the formula:
heatmap(j, i, k) = A · exp(−(distance_x(k)² / (2·sigma_x²) + distance_y(k)² / (2·sigma_y²)))
where heatmap(j, i, k) is the key point heat map, j and i range from 1 to the width and height of the original image, k ranges from 1 to 3, distance_x(k) and distance_y(k) are the distances from the current position (i, j) to the k-th key point in the x and y directions, A is the amplitude, and sigma_x and sigma_y are the standard deviations.
8. The lesser panda individual identification method based on facial images according to claim 1, characterised in that the specific steps of extracting features from the aligned lesser panda facial image recognition region are:
S41: training a lesser panda base feature extraction model on a preset lesser panda facial region image set;
S42: inputting the aligned lesser panda facial image recognition region into the lesser panda base feature extraction model, which outputs raw features;
S43: mapping the raw features into the frontal-face feature space via a residual network module.
9. The lesser panda individual identification method based on facial images according to claim 8, characterised in that the residual network module is built from the mapping coefficient and the raw features of frontal-profile sample pairs, the relationship between the mapping coefficient and the raw features of a frontal-profile sample pair being expressed as:
ψ(x_profile) + w(x_profile)·R(ψ(x_profile)) = ψ(x_frontal)
where ψ(x_profile) is the raw feature vector of the profile image, w(x_profile) is the mapping coefficient, ψ(x_frontal) is the raw feature vector of the frontal image, and R denotes the mapping applied to the profile feature vector.
10. A lesser panda individual identification device based on facial images, characterised in that it comprises at least one processor and a memory communicatively connected to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can carry out the method of any one of claims 1 to 9.
CN201910086446.3A 2019-01-29 2019-01-29 Small panda individual identification method based on face image Active CN109858435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910086446.3A CN109858435B (en) 2019-01-29 2019-01-29 Small panda individual identification method based on face image


Publications (2)

Publication Number Publication Date
CN109858435A true CN109858435A (en) 2019-06-07
CN109858435B CN109858435B (en) 2020-12-01

Family

ID=66896734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910086446.3A Active CN109858435B (en) 2019-01-29 2019-01-29 Small panda individual identification method based on face image

Country Status (1)

Country Link
CN (1) CN109858435B (en)



Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030108244A1 (en) * 2001-12-08 2003-06-12 Li Ziqing System and method for multi-view face detection
US7228570B2 (en) * 2004-04-29 2007-06-12 Willis Laurie D Bib with wiping extensions
CN103959284A (en) * 2011-11-24 2014-07-30 微软公司 Reranking using confident image samples
JP2014000250A (en) * 2012-06-19 2014-01-09 Masaru Imaizumi Face mask
CN104850820A (en) * 2014-02-19 2015-08-19 腾讯科技(深圳)有限公司 Face identification method and device
CN105868769A (en) * 2015-01-23 2016-08-17 阿里巴巴集团控股有限公司 Method and device for positioning face key points in image
US10002286B1 (en) * 2015-04-28 2018-06-19 Carnegie Mellon University System and method for face recognition robust to multiple degradations
CN106022264A (en) * 2016-05-19 2016-10-12 中国科学院自动化研究所 Interactive face in vivo detection method and device based on multi-task self encoder
CN107609497A (en) * 2017-08-31 2018-01-19 武汉世纪金桥安全技术有限公司 The real-time video face identification method and system of view-based access control model tracking technique
CN108229432A (en) * 2018-01-31 2018-06-29 广州市动景计算机科技有限公司 Face calibration method and device
CN108875602A (en) * 2018-05-31 2018-11-23 珠海亿智电子科技有限公司 Monitor the face identification method based on deep learning under environment
CN108960076A (en) * 2018-06-08 2018-12-07 东南大学 Ear recognition and tracking based on convolutional neural networks
CN108932495A (en) * 2018-07-02 2018-12-04 大连理工大学 A kind of automobile front face parameterized model automatic Generation
CN109086739A (en) * 2018-08-23 2018-12-25 成都睿码科技有限责任公司 A kind of face identification method and system of no human face data training

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ALEXANDER LOOS et al., "An automated chimpanzee identification system using face detection and recognition", EURASIP Journal on Image and Video Processing, 2013 *
谢素仪, "Research on pet cat face detection methods" (宠物猫脸检测的方法研究), China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245625A (en) * 2019-06-19 2019-09-17 山东浪潮人工智能研究院有限公司 A kind of field giant panda recognition methods and system based on twin neural network
CN110288583B (en) * 2019-06-27 2021-10-01 中国科学技术大学 Automatic extraction method of key points in hip joint image
CN110288583A (en) * 2019-06-27 2019-09-27 中国科学技术大学 Key point extraction method in hip joint image
CN110738654A (en) * 2019-10-18 2020-01-31 中国科学技术大学 Key point extraction and bone age prediction method in hip joint image
CN110895809B (en) * 2019-10-18 2022-07-15 中国科学技术大学 Method for accurately extracting key points in hip joint image
CN110895809A (en) * 2019-10-18 2020-03-20 中国科学技术大学 Method for accurately extracting key points in hip joint image
CN110738654B (en) * 2019-10-18 2022-07-15 中国科学技术大学 Key point extraction and bone age prediction method in hip joint image
CN110909618A (en) * 2019-10-29 2020-03-24 泰康保险集团股份有限公司 Pet identity recognition method and device
CN110909618B (en) * 2019-10-29 2023-04-21 泰康保险集团股份有限公司 Method and device for identifying identity of pet
CN110781866A (en) * 2019-11-08 2020-02-11 成都大熊猫繁育研究基地 Panda face image gender identification method and device based on deep learning
CN110990488A (en) * 2019-12-02 2020-04-10 东莞西尼自动化科技有限公司 Artificial intelligence sample graph management method based on database
CN112926479A (en) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 Cat face identification method and system, electronic device and storage medium
CN113076886A (en) * 2021-04-09 2021-07-06 深圳市悦保科技有限公司 Face individual identification device and method for cat
CN113158870A (en) * 2021-04-15 2021-07-23 华南理工大学 Countermeasure type training method, system and medium for 2D multi-person attitude estimation network
CN113158870B (en) * 2021-04-15 2023-07-18 华南理工大学 Antagonistic training method, system and medium of 2D multi-person gesture estimation network
CN113469302A (en) * 2021-09-06 2021-10-01 南昌工学院 Multi-circular target identification method and system for video image
CN113822177A (en) * 2021-09-06 2021-12-21 苏州中科先进技术研究院有限公司 Pet face key point detection method, device, storage medium and equipment
CN115273155A (en) * 2022-09-28 2022-11-01 成都大熊猫繁育研究基地 Method and system for identifying pandas through portable equipment
CN115273155B (en) * 2022-09-28 2022-12-09 成都大熊猫繁育研究基地 Method and system for identifying pandas through portable equipment
CN115393904A (en) * 2022-10-20 2022-11-25 星宠王国(北京)科技有限公司 Dog nose print identification method and system

Also Published As

Publication number Publication date
CN109858435B (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN109858435A (en) A kind of lesser panda individual discrimination method based on face image
CN105740780B (en) Method and device for detecting living human face
CN104517102B (en) Student classroom notice detection method and system
CN104063722B (en) A kind of detection of fusion HOG human body targets and the safety cap recognition methods of SVM classifier
CN110163114A (en) A kind of facial angle and face method for analyzing ambiguity, system and computer equipment
CN109635875A (en) A kind of end-to-end network interface detection method based on deep learning
WO2017016240A1 (en) Banknote serial number identification method
US11194997B1 (en) Method and system for thermal infrared facial recognition
CN109559362B (en) Image subject face replacing method and device
CN104598883A (en) Method for re-recognizing target in multi-camera monitoring network
CN110163567A (en) Classroom roll calling system based on multitask concatenated convolutional neural network
CN105930798A (en) Tongue image quick detection and segmentation method based on learning and oriented to handset application
CN109766796A (en) A kind of depth pedestrian detection method towards dense population
CN108108760A (en) A kind of fast human face recognition
CN109117753A (en) Position recognition methods, device, terminal and storage medium
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
CN112200138B (en) Classroom learning situation analysis method based on computer vision
CN108960142A (en) Pedestrian based on global characteristics loss function recognition methods again
WO2022262763A1 (en) Image composition quality evaluation method and apparatus
CN108334870A (en) The remote monitoring system of AR device data server states
CN110458200A (en) A kind of flower category identification method based on machine learning
CN114445879A (en) High-precision face recognition method and face recognition equipment
CN106778621A (en) Facial expression recognizing method
CN108446639A (en) Low-power consumption augmented reality equipment
CN110458064A (en) Combined data is driving and the detection of the low target of Knowledge driving type and recognition methods

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant