CN108073914B - Animal face key point marking method - Google Patents


Info

Publication number
CN108073914B
CN108073914B CN201810023304.8A
Authority
CN
China
Prior art keywords
key point
animal
sample image
points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810023304.8A
Other languages
Chinese (zh)
Other versions
CN108073914A (en)
Inventor
陈丹
徐滢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd filed Critical Chengdu Pinguo Technology Co Ltd
Priority to CN201810023304.8A priority Critical patent/CN108073914B/en
Publication of CN108073914A publication Critical patent/CN108073914A/en
Application granted granted Critical
Publication of CN108073914B publication Critical patent/CN108073914B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for marking key points on the face of an animal, comprising the following steps: detecting an animal face in an input sample image to obtain an animal face frame and the positions of the animal face key points; processing the input sample image according to the animal face frame and the key point positions, and feeding the processed sample image into a pre-trained key point prediction network to obtain a first key point prediction result; adjusting the first key point prediction result to obtain the annotation points of the animal face key points; and performing quality detection on the annotation points according to a predefined standard image to obtain quality scores for the annotation points. The technical scheme provided by the invention can accurately mark animal face key points and automatically check the marked points, thereby improving marking precision and greatly improving working efficiency.

Description

Animal face key point marking method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for marking key points on the face of an animal.
Background
In recent years, selfie beautification has received increasing attention, and demand for beautifying pictures of pets has also begun to emerge. Just as facial beautification depends on accurate positioning of the key points of the human face, pet beautification depends strongly on the key points of the animal face. At present, algorithms for positioning animal face key points are rare in both academia and industry because, compared with human face key points, labeled samples of animal face key points are scarce and a public evaluation database is lacking. Research on animal face key point positioning algorithms depends to a great extent on the labeling of animal face key points: training samples for such algorithms are formed by labeling the key points and processing the labeled results. Labeling quality therefore directly influences the precision of the key point positioning algorithm.
At present, human face key point labeling systems are used to label animal face key points. Such a system comprises a face detection module, a key point prediction module and a key point labeling module, where the key point prediction module and the key point labeling module are two completely independent modules. The system simply imports a sample picture, and the labeling of key points is performed manually by an engineer relying on experience. As a result, the key point labels on the face in the sample picture are inaccurate and many invalid labeled samples are produced, which do nothing to improve the performance of the key point positioning algorithm. In addition, after manual labeling is completed, all labeled samples must be manually reviewed, which is very time-consuming and labor-intensive and detrimental to working efficiency.
Disclosure of Invention
The invention aims to provide an animal face key point marking method, which can accurately mark the animal face key points and automatically check the marked points, thereby improving the marking accuracy and greatly improving the working efficiency.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an animal facial keypoint labeling method comprising:
detecting an animal face in an input sample image to obtain an animal face frame and the positions of key points of the animal face; processing the input sample image according to the animal face frame and the key point position of the animal face, and sending the processed sample image into a pre-trained key point prediction network to obtain a first key point prediction result; adjusting the first key point prediction result to obtain the labeling points of the key points of the animal face; and performing quality detection on the marking points according to a predefined standard image to obtain quality scores of the marking points.
Preferably, the method for performing quality detection on the annotation point according to a predefined standard image and acquiring the quality score of the annotation point includes: acquiring the key point coordinates of the predefined standard image; calculating a similarity transformation matrix of the coordinates of the labeling points and the coordinates of the key points of the standard image; performing affine transformation on the input sample image according to the similarity transformation matrix to obtain a normalized image; extracting image features of the normalized image; and sending the image features into a pre-trained classifier to obtain the quality score of the labeling point.
Preferably, the image features include: HOG features and/or depth features; the selection mode of the classifier comprises the following steps: decision trees, or logistic regression, or deep networks.
Further, before the detecting the face of the animal in the input sample image, the method further includes: and evaluating the importance of the sample image to be labeled.
Preferably, the method for evaluating the importance of the sample image to be labeled comprises: obtaining a key point prediction result of the sample image to be marked as a second key point prediction result; turning the sample image to be marked left and right to obtain a turned sample image; obtaining a key point prediction result of the turnover sample image, wherein the key point prediction result is a third key point prediction result; the coordinates of the third key point prediction result are turned left and right to obtain a fourth key point prediction result; calculating an error between the fourth keypoint predictor and the second keypoint predictor; and comparing the error with a preset second threshold value to obtain the importance of the sample image to be marked.
Preferably, the method of calculating the error between the fourth keypoint predictor and the second keypoint predictor is:
error = Σi sqrt((x1i − x3i)² + (y1i − y3i)²) / (N · sqrt((x10 − x11)² + (y10 − y11)²))

where error is the error, S1 is the second key point prediction result, S3 is the fourth key point prediction result, N is the number of key points, and (x10, y10), (x11, y11) are the coordinates of two of the key points in the second key point prediction result.
Further, still include: the sample images of the particular category are supplemented.
Preferably, the method of supplementing a sample image of a specific category includes: counting the attribute distribution of the labeled sample images, and acquiring the lacking sample images according to the attribute distribution; and downloading the missing sample image from the network according to preset keywords, and performing duplicate removal processing on the downloaded missing sample image.
Preferably, the attributes of the labeled sample image include: closing eyes, opening eyes, closing mouth, opening mouth, frontal face, and lateral face.
Preferably, the method for performing deduplication processing on the downloaded missing sample image is as follows: and carrying out deduplication based on pixel similarity or based on semantic hash deduplication.
According to the animal face key point marking method provided by the embodiment of the invention, the key point prediction result of the input sample image is obtained by performing face detection on the input sample image, and the marking points of the animal face key points can be obtained by adjusting on the basis of the key point prediction result without manual operation of an engineer by completely depending on experience, so that the difficulty of manual marking is greatly reduced, and the precision of the marking points is improved. Meanwhile, the quality of the marking points is detected, and an operator can be reminded in real time in the marking process, so that the marking quality and the working efficiency are improved. In addition, the technical scheme provided by the invention also evaluates the importance of the sample images to be labeled, and further sequences the importance of the sample images to be labeled, so that the possibility of labeling invalid samples is reduced to a great extent, the input sample images are all valid samples, and the performance of the key point positioning algorithm training model is improved; by supplementing the sample images of specific categories, the attribute distribution of the sample images meets the requirements, and the performance of the key point positioning algorithm training model is further improved.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings.
Step 101, detecting an animal face in an input sample image, and acquiring an animal face frame and the positions of key points of the animal face;
Step 102, processing the input sample image according to the animal face frame and the animal face key point positions, the processing including transformations such as cropping, scaling and rotation of the input sample image, and feeding the processed sample image into a pre-trained key point prediction network to obtain a first key point prediction result;
in this embodiment, a special face detection and keypoint prediction module is provided. The "animal face key point position" refers to positions of 5 key points predicted at the same time when face detection is performed: left eye center, right eye center, nose tip, left mouth corner, right mouth corner; the "keypoint prediction result" refers to the keypoint predicted by the existing keypoint localization algorithm, and is usually more than the above 5 points.
103, adjusting the first key point prediction result to obtain the labeling points of the key points of the animal face;
in this embodiment, a special key point adjusting module is provided, which is a main part of manual annotation, and the basic function of the module is that the annotator can adjust the first key point prediction result given by the face detection and key point prediction module by dragging a mouse, so that the result is more accurate. In addition, the module also comprises a picture zooming function, and the designated area can be zoomed so as to mark key points more accurately.
Step 104, performing quality detection on the annotation points according to a predefined standard image to obtain quality scores for the annotation points. In a specific implementation, a dedicated automatic quality detection module is provided to detect the quality of annotated samples in real time and feed the result back to the annotator promptly. The specific method comprises the following steps:
acquiring the key point coordinates of the predefined standard image; calculating a similarity transformation matrix between the coordinates of the annotation points and the key point coordinates of the standard image; performing an affine transformation on the input sample image according to the similarity transformation matrix to obtain a normalized image; extracting image features from the normalized image; feeding the image features into a pre-trained classifier to obtain the quality score of the annotation points; and, when the quality score is smaller than a preset first threshold, returning a "serious error" warning to remind the annotator to revise the annotation result.
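The similarity-transform step above can be sketched with the classical Umeyama least-squares estimate. The patent does not name a specific solver, so this numpy sketch is an illustrative assumption (the function name `similarity_transform` is not part of the patent):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points (Umeyama method).
    Returns a 2x3 affine matrix M so that dst ~= src @ M[:, :2].T + M[:, 2]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)           # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                      # guard against reflections
    R = U @ D @ Vt                             # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t.reshape(2, 1)])
```

Applying the resulting 2×3 matrix to the image itself would typically be done with a routine such as OpenCV's warpAffine.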
The image features include HOG features and/or depth features; the classifier may be a decision tree, logistic regression, or a deep network. The classifier needs to be trained in advance. The training samples comprise correctly labeled positive samples and negative samples generated by randomly perturbing the labeling results of the positive samples, and the classifier outputs a floating point number in (0, 1) indicating the likelihood that the labeled sample is a positive sample.
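Generating negatives by randomly perturbing correct annotations, as described above, might look like the following sketch (numpy-based; the perturbation scale `sigma_frac` and the use of Gaussian noise are illustrative assumptions, not specified by the patent):

```python
import numpy as np

def make_negatives(points, num, sigma_frac=0.1, rng=None):
    """Generate 'badly labeled' negative samples by jittering correct
    annotation points with Gaussian noise proportional to the face extent."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)         # shape (n_points, 2)
    span = pts.max(axis=0) - pts.min(axis=0)      # rough face extent
    sigma = sigma_frac * span.max()
    # one independently perturbed copy per requested negative sample
    return pts + rng.normal(0.0, sigma, size=(num,) + pts.shape)
```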
In actual operation, the face detection and keypoint prediction module passes the data to the keypoint adjustment module. The key point adjusting module transmits the marked data to the automatic quality detection module, the automatic quality detection module evaluates the quality of data marking, and then feeds an evaluation result back to the key point adjusting module.
In this embodiment, before detecting the face of the animal in the input sample image, the method further includes: evaluating the importance of the sample image to be labeled. The importance of a sample to be labeled refers to the influence that adding the sample image has on the performance of the training model: if adding the sample image improves the accuracy of the model's key point prediction, the sample is considered important; otherwise it is considered less important. In a specific implementation, a dedicated sample importance evaluation module is provided, which evaluates the importance of the sample images to be labeled and ranks them by importance. The specific method comprises the following steps:
(1) Obtaining the key point prediction result of the sample image to be labeled as the second key point prediction result S1; the second key point prediction result S1 is acquired by the face detection and key point prediction module of this embodiment, in the same manner as the first key point prediction result;
(2) Flipping the sample image to be labeled left and right to obtain a flipped sample image; obtaining the key point prediction result of the flipped sample image as the third key point prediction result S2; the third key point prediction result S2 is acquired by the face detection and key point prediction module of this embodiment, in the same manner as the first key point prediction result;
(3) Flipping the coordinates of the third key point prediction result S2 left and right to obtain the fourth key point prediction result S3.
For convenience of description, assume the number of predicted key points is 5, namely: left eye center, right eye center, nose, left mouth corner, right mouth corner. Denote the third key point prediction result S2 as (x20, y20, x21, y21, x22, y22, x23, y23, x24, y24) and the fourth key point prediction result S3 as (x30, y30, x31, y31, x32, y32, x33, y33, x34, y34). If the width of the picture is w, the fourth key point prediction result S3 can be obtained by the following formulas:
(x30, y30) = (w − x21, y21)
(x31, y31) = (w − x20, y20)
(x32, y32) = (w − x22, y22)
(x33, y33) = (w − x24, y24)
(x34, y34) = (w − x23, y23)
(4) Calculating the error between the fourth key point prediction result S3 and the second key point prediction result S1 by the following formula:

error = Σi sqrt((x1i − x3i)² + (y1i − y3i)²) / (N · sqrt((x10 − x11)² + (y10 − y11)²))

where error is the error, S1 is the second key point prediction result, S3 is the fourth key point prediction result, N is the number of key points, and (x10, y10), (x11, y11) are the coordinates of two key points in the second key point prediction result S1; in this embodiment, they are the coordinates of the first and second key points of S1.
(5) Comparing the error with a preset second threshold to determine the importance of the sample image to be labeled: when the error is larger than the preset second threshold, the sample image is considered a valid sample, and the larger the error, the higher the importance of the sample image.
Practice has shown that in the keypoint localization algorithm, there is a so-called mirror bias problem, i.e. for the same prediction algorithm, the prediction result Y1 of the input image I1 is substantially correct, whereas the prediction result Y2 of the mirror-flipped image I2 may be completely wrong. In general, when Y2 is correct, Y1 is also correct, and when Y2 is completely wrong, the error of Y1 is also large; in addition, if Y2 is mirror-inverted to obtain Y3, the error between Y1 and Y3 can be used to measure the performance of the algorithm on the image I1, if the error is small, the prediction result is good, the image I1 is a simple sample, and conversely, the image I1 is a difficult sample, and the addition of the sample helps to improve the model performance. Therefore, the importance of the sample image can be judged by calculating the value of the error. Of course, other methods may be used to calculate the error, such as normalization by using the coordinates of the fourth keypoint prediction result S3, normalization by using the length or width of the animal face frame, and so on, and different calculation methods may be selected according to actual needs.
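The flip-consistency check of steps (1)–(5) can be sketched as follows, assuming 5 keypoints in the order left eye, right eye, nose, left mouth corner, right mouth corner, and taking the error as the mean point-to-point distance normalized by the distance between the first two keypoints (the exact formula and function names are illustrative assumptions):

```python
import numpy as np

# Mirror-flip index mapping for the 5 points:
# left eye <-> right eye, nose stays, left mouth <-> right mouth
FLIP_ORDER = [1, 0, 2, 4, 3]

def flip_points(points, width):
    """Map keypoints predicted on a horizontally flipped image back
    into the original image's coordinate frame (x -> width - x, then
    swap left/right point indices)."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 0] = width - pts[:, 0]
    return pts[FLIP_ORDER]

def flip_consistency_error(s1, s3):
    """Mean point-to-point distance between the direct prediction s1
    and the flipped-back prediction s3, normalized by the distance
    between the first two keypoints of s1 (the two eye centers)."""
    s1 = np.asarray(s1, float)
    s3 = np.asarray(s3, float)
    norm = np.linalg.norm(s1[0] - s1[1])
    return np.linalg.norm(s1 - s3, axis=1).mean() / norm
```

A small error suggests the image is an easy sample; a large error flags it as a hard, and therefore important, sample to label.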
In this embodiment, a dedicated module for supplementing samples of specific categories is provided, so that samples of a particular category can conveniently be supplemented as needed. The training samples used early on suffer from imbalanced sample distribution: samples of certain categories are lacking, such as closed-eye samples, head-turned samples, samples with large out-of-plane head rotation, samples with a wide-open mouth, and so on. This directly causes the trained model to predict poorly on pictures of these categories, so samples of these categories need to be supplemented. The specific method comprises the following steps:
(1) counting the attribute distribution of the labeled sample images, and acquiring the lacking sample images according to the attribute distribution; the attributes include: closing eyes, opening eyes, closing mouth, opening mouth, frontal face, lateral face and the like.
(2) And downloading the missing sample image from the network according to preset keywords. The main body of the module is a web crawler, and the web pictures can be downloaded to the local by inputting specific keywords.
(3) Performing deduplication processing on the downloaded missing sample images. Because images crawled from the web have a high degree of duplication, a deduplication module is needed. Several methods are available: the simplest is deduplication based on pixel similarity; a more sophisticated option is deduplication based on semantic hashing.
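As an illustration of the simpler, pixel-similarity end of that spectrum, a tiny average-hash sketch (numpy only; the 8×8 block size and mean threshold are illustrative assumptions — this is not the semantic hash the text also mentions):

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Tiny perceptual hash: downsample a grayscale image by block
    averaging, threshold at the mean, return a flat boolean bit vector."""
    gray = np.asarray(gray, dtype=float)
    h, w = gray.shape
    # crop so the image divides evenly into hash_size x hash_size blocks
    gray = gray[: h - h % hash_size, : w - w % hash_size]
    bh = gray.shape[0] // hash_size
    bw = gray.shape[1] // hash_size
    small = gray.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits; small values indicate near-duplicates."""
    return int(np.count_nonzero(h1 != h2))
```

Near-duplicate crawled images would then be dropped when their hash distance falls below a chosen threshold.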
The attribute of the labeled sample can be obtained by simple calculation of the key point coordinates, for example, the closed eye or open eye attribute can be conveniently obtained by the eye key point coordinates; the closed mouth or open mouth attribute can be conveniently obtained through the key point coordinate of the mouth part; the head declination can be calculated through the coordinates of the key points of the eyes, the nose and the mandible, and the attributes of the front face or the side face are obtained. The following illustrates a specific calculation method:
the aspect ratio of the eyes can be calculated according to the coordinates of the key points predicted by the existing model, and if the aspect ratio is larger than a certain threshold value, the eyes can be considered to be closed. For example, the left eye contains 6 key points { x }0,y0,x1,y1,x2,y2,x3,y3,x4,y4,x5,y5Then the width of the left eye can be approximated as: w ═ max (x)0,x1,x2,x3,x4,x5)-min(x0,x1,x2,x3,x4,x5) The approximate height is: h is max (y)0,y1,y2,y3,y4,y5)-min(y0,y1,y2,y3,y4,y5) The aspect ratio is: ratio is w/h. Similarly, the lip attribute may be calculated.
The head deflection angle is calculated as follows: map the 6 predicted key points of the eyes, nose and mandible to the corresponding key points of a standard three-dimensional model of the pet face to obtain a rotation vector and a translation vector (which can be done with the solvePnP function of OpenCV); then obtain the three angles — yaw, roll and pitch — from these two vectors (which can be done with the decomposeProjectionMatrix function of OpenCV).
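The second step reduces to decomposing a rotation matrix into Euler angles. A numpy sketch under an assumed Z-Y-X rotation convention (OpenCV's decomposeProjectionMatrix uses its own conventions, so the axis assignment here is an illustrative assumption, not the patent's definition):

```python
import numpy as np

def rot_zyx(az, ay, ax):
    """Build a rotation matrix R = Rz(az) @ Ry(ay) @ Rx(ax) (radians)."""
    cz, sz = np.cos(az), np.sin(az)
    cy, sy = np.cos(ay), np.sin(ay)
    cx, sx = np.cos(ax), np.sin(ax)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def euler_from_rotation(R):
    """Recover (yaw, roll, pitch) in degrees from a Z-Y-X rotation matrix:
    yaw about Y (head turn), roll about Z (head tilt), pitch about X (nod).
    Valid away from the gimbal-lock case |yaw| = 90 degrees."""
    yaw = np.degrees(np.arcsin(-R[2, 0]))
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, roll, pitch
```

Thresholding the recovered yaw then separates frontal-face from side-face samples.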
The module for supplementing specific-category samples needs to invoke the face detection and key point prediction module, which is called when selecting the samples that require supplementation.
According to the animal face key point marking method provided by the embodiment of the invention, the key point prediction result of the input sample image is obtained by performing face detection on the input sample image, and the marking points of the animal face key points can be obtained by adjusting on the basis of the key point prediction result without manual operation of an engineer by completely depending on experience, so that the difficulty of manual marking is greatly reduced, and the precision of the marking points is improved. Meanwhile, the quality of the marking points is detected, and an operator can be reminded in real time in the marking process, so that the marking quality and the working efficiency are improved. In addition, the technical scheme provided by the invention also evaluates the importance of the sample images to be labeled, and further sequences the importance of the sample images to be labeled, so that the possibility of labeling invalid samples is reduced to a great extent, the input sample images are all valid samples, and the performance of the key point positioning algorithm training model is improved; by supplementing the sample images of specific categories, the attribute distribution of the sample images meets the requirements, and the performance of the key point positioning algorithm training model is further improved. Therefore, the animal face key point marking method provided by the invention can quickly and efficiently mark effective samples, and further improves the accuracy of the animal face key point positioning algorithm. The animal face key point positioning point algorithm research can also be used for the aspects of animal face expression recognition, pain recognition and the like.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention.

Claims (9)

1. An animal face key point marking method is characterized by comprising the following steps:
detecting an animal face in an input sample image to obtain an animal face frame and the positions of key points of the animal face;
processing the input sample image according to the animal face frame and the key point position of the animal face, and sending the processed sample image into a pre-trained key point prediction network to obtain a first key point prediction result;
adjusting the first key point prediction result to obtain the labeling points of the key points of the animal face;
performing quality detection on the marked points according to a predefined standard image to obtain quality scores of the marked points; the method for acquiring the quality score of the labeling point comprises the following steps:
acquiring the key point coordinates of the predefined standard image;
calculating a similarity transformation matrix of the coordinates of the labeling points and the coordinates of the key points of the standard image;
performing affine transformation on the input sample image according to the similarity transformation matrix to obtain a normalized image;
extracting image features of the normalized image;
and sending the image features into a pre-trained classifier to obtain the quality score of the labeling point.
2. The animal facial keypoint labeling method of claim 1, wherein said image features comprise: HOG features and/or depth features; the selection mode of the classifier comprises the following steps: decision trees, or logistic regression, or deep networks.
3. The method of claim 1, further comprising, prior to said detecting the animal face in the input sample image: and evaluating the importance of the sample image to be labeled.
4. The method for labeling animal facial key points according to claim 3, wherein the method for evaluating the importance of the sample image to be labeled comprises:
obtaining a key point prediction result of the sample image to be marked as a second key point prediction result;
turning the sample image to be marked left and right to obtain a turned sample image; obtaining a key point prediction result of the turnover sample image, wherein the key point prediction result is a third key point prediction result;
the coordinates of the third key point prediction result are turned left and right to obtain a fourth key point prediction result;
calculating an error between the fourth keypoint predictor and the second keypoint predictor;
and comparing the error with a preset second threshold value to obtain the importance of the sample image to be marked.
5. The animal facial keypoint labeling method of claim 4, wherein said method of calculating the error between said fourth keypoint prediction and said second keypoint prediction is:
error = Σi sqrt((x1i − x3i)² + (y1i − y3i)²) / (N · sqrt((x10 − x11)² + (y10 − y11)²))

where error is the error, S1 is the second keypoint prediction result, S3 is the fourth keypoint prediction result, N is the number of key points, and (x10, y10), (x11, y11) are the coordinates of two of the keypoints in the second keypoint prediction result.
6. The animal facial keypoint labeling method of claim 1, further comprising: the sample images of the particular category are supplemented.
7. The animal facial keypoint labeling method of claim 6, wherein said method of supplementing sample images of a specific category comprises:
counting the attribute distribution of the labeled sample images, and acquiring the lacking sample images according to the attribute distribution;
and downloading the missing sample image from the network according to preset keywords, and performing duplicate removal processing on the downloaded missing sample image.
8. The animal facial keypoint labeling method of claim 7, wherein the attributes of the labeled sample images comprise: closing eyes, opening eyes, closing mouth, opening mouth, frontal face, and lateral face.
9. The method for labeling the key points on the face of the animal according to claim 7, wherein the method for de-duplicating the downloaded missing sample image comprises: and carrying out deduplication based on pixel similarity or based on semantic hash deduplication.
CN201810023304.8A 2018-01-10 2018-01-10 Animal face key point marking method Active CN108073914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810023304.8A CN108073914B (en) 2018-01-10 2018-01-10 Animal face key point marking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810023304.8A CN108073914B (en) 2018-01-10 2018-01-10 Animal face key point marking method

Publications (2)

Publication Number Publication Date
CN108073914A (en) 2018-05-25
CN108073914B true CN108073914B (en) 2022-02-18

Family

ID=62156750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810023304.8A Active CN108073914B (en) 2018-01-10 2018-01-10 Animal face key point marking method

Country Status (1)

Country Link
CN (1) CN108073914B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241910B (en) * 2018-09-07 2021-01-01 高新兴科技集团股份有限公司 Face key point positioning method based on deep multi-feature fusion cascade regression
CN109214343B (en) * 2018-09-14 2021-03-09 北京字节跳动网络技术有限公司 Method and device for generating face key point detection model
CN111382612A (en) * 2018-12-28 2020-07-07 北京市商汤科技开发有限公司 Animal face detection method and device
CN111488759A (en) * 2019-01-25 2020-08-04 北京字节跳动网络技术有限公司 Image processing method and device for animal face
CN110210526A (en) * 2019-05-14 2019-09-06 广州虎牙信息科技有限公司 Predict method, apparatus, equipment and the storage medium of the key point of measurand
CN110288512B (en) * 2019-05-16 2023-04-18 成都品果科技有限公司 Illumination remapping method, device, storage medium and processor for image synthesis
CN110110811A (en) * 2019-05-17 2019-08-09 北京字节跳动网络技术有限公司 Method and apparatus for training pattern, the method and apparatus for predictive information
CN110414369B (en) * 2019-07-05 2023-04-18 安徽省农业科学院畜牧兽医研究所 Cow face training method and device
CN111259822A (en) * 2020-01-19 2020-06-09 杭州微洱网络科技有限公司 Method for detecting key point of special neck in E-commerce image
WO2021164251A1 (en) * 2020-02-21 2021-08-26 平安科技(深圳)有限公司 Image annotation task pre-verification method and apparatus, device, and storage medium
CN112990335B (en) * 2021-03-31 2021-10-15 江苏方天电力技术有限公司 Intelligent recognition self-learning training method and system for power grid unmanned aerial vehicle inspection image defects
CN113470077B (en) * 2021-07-15 2022-06-07 郑州布恩科技有限公司 Mouse open field experiment movement behavior analysis method based on key point detection
CN114550207B (en) * 2022-01-17 2023-01-17 北京新氧科技有限公司 Method and device for detecting key points of neck and method and device for training detection model
CN115546845B (en) * 2022-11-24 2023-06-06 中国平安财产保险股份有限公司 Multi-view cow face recognition method and device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504376A (en) * 2014-12-22 2015-04-08 厦门美图之家科技有限公司 Age classification method and system for face images
US9008366B1 (en) * 2012-01-23 2015-04-14 Hrl Laboratories, Llc Bio-inspired method of ground object cueing in airborne motion imagery
CN105426870A (en) * 2015-12-15 2016-03-23 北京文安科技发展有限公司 Face key point positioning method and device
CN105844582A (en) * 2015-01-15 2016-08-10 北京三星通信技术研究有限公司 3D image data registration method and device
CN105868769A (en) * 2015-01-23 2016-08-17 阿里巴巴集团控股有限公司 Method and device for positioning face key points in image
CN106203376A (en) * 2016-07-19 2016-12-07 北京旷视科技有限公司 Face key point localization method and device
CN106204429A (en) * 2016-07-18 2016-12-07 合肥赑歌数据科技有限公司 A kind of method for registering images based on SIFT feature
CN106570459A (en) * 2016-10-11 2017-04-19 付昕军 Face image processing method
CN107403145A (en) * 2017-07-14 2017-11-28 北京小米移动软件有限公司 Image characteristic points positioning method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Interspecies Knowledge Transfer for Facial Keypoint Detection; Maheen Rashid et al.; 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017-11-09; pp. 6894-6903 *
SAR Image Registration Method Based on Improved SIFT; Zhang Xiongmei et al.; Computer Engineering; 2015-01-15; pp. 223-226 *

Also Published As

Publication number Publication date
CN108073914A (en) 2018-05-25

Similar Documents

Publication Publication Date Title
CN108073914B (en) Animal face key point marking method
US11836943B2 (en) Virtual face model creation based on key point
US9275273B2 (en) Method and system for localizing parts of an object in an image for computer vision applications
Li et al. Robust visual tracking based on convolutional features with illumination and occlusion handing
JP3962803B2 (en) Head detection device, head detection method, and head detection program
CN103443804B (en) Method of facial landmark detection
JP2016062610A (en) Feature model creation method and feature model creation device
JP2007538318A5 (en)
CN103593654A (en) Method and device for face location
CN108171133A (en) A kind of dynamic gesture identification method of feature based covariance matrix
Nuevo et al. RSMAT: Robust simultaneous modeling and tracking
WO2020037962A1 (en) Facial image correction method and apparatus, and storage medium
Du High-precision portrait classification based on mtcnn and its application on similarity judgement
Kerdvibulvech A methodology for hand and finger motion analysis using adaptive probabilistic models
Zheng et al. Multi-angle face detection based on DP-Adaboost
WO2022247527A1 (en) Method for determining head motion of driver, storage medium, and electronic apparatus
CN115171189A (en) Fatigue detection method, device, equipment and storage medium
Zhang et al. Object detection based on deep learning and b-spline level set in color images
CN112149559A (en) Face recognition method and device, readable storage medium and computer equipment
Wang et al. A Visual SLAM Algorithm Based on Image Semantic Segmentation in Dynamic Environment
Liu et al. Fast facial landmark detection using cascade classifiers and a simple 3D model
Batista Locating facial features using an anthropometric face model for determining the gaze of faces in image sequences
Nguwi et al. Automatic detection of lizards
CN116524572B (en) Face accurate real-time positioning method based on self-adaptive Hope-Net
Ptucha et al. Facial pose estimation using a symmetrical feature model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant