CN111695405B - Dog face feature point detection method, device and system and storage medium
- Publication number: CN111695405B
- Application number: CN202010327006.5A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands (under G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data)
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting (under G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation)
- G06N3/08—Learning methods (under G06N3/02—Neural networks)
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT] (under G06V10/46—Descriptors for shape, contour or point-related descriptors)
Abstract
The invention provides a dog face feature point detection method, device and system, and a computer storage medium. The detection method comprises: performing feature point detection based on an image containing a dog face and a trained detection model to obtain precisely positioned feature points. The detection model comprises a first-level network and a second-level network, and obtaining the precisely positioned feature points comprises the following steps: performing feature point detection based on the full-face image of the dog face and the first-level network of the detection model to obtain coarsely positioned feature points; and positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the finely positioned feature points. The method, device, system and computer storage medium can effectively improve the accuracy and real-time performance of dog face feature point detection.
Description
This application is a divisional application of Chinese patent application No. 201811628345.6, filed on December 28, 2018 and entitled "Dog face feature point detection method, device and system and storage medium".
Technical Field
The invention relates to the technical field of image processing, in particular to dog face image processing.
Background
Feature point labeling is an important step preceding image alignment, and it strongly influences the overall performance of image recognition, analysis and retrieval systems. Many effective feature point labeling methods currently exist for human face recognition, but few are available for animal recognition, for example for labeling dog face feature points in dog face recognition.
If a dog face is labeled with a traditional feature point labeling and detection method, each key point is detected independently, so the global geometric information of the dog face is ignored entirely; the result is very sensitive to small disturbances and has poor robustness against changes in illumination, pose and the like. Furthermore, computation time and complexity are proportional to the number of feature points: the more feature points to be detected, the more detectors are required, which makes such methods hard to apply where the feature points are dense.
The prior art therefore lacks a good method for labeling dog face feature points: traditional feature point labeling methods are strongly affected by small disturbances, easily produce missed or false detections, achieve low accuracy and recall, and run inefficiently when the number of labeled points is large.
Disclosure of Invention
The present invention has been made in view of the above problems. The invention provides a dog face feature point detection method, device and system, and a computer storage medium, which can effectively improve the accuracy and real-time performance of dog face feature point detection through a multi-stage neural network built on full-face and local information.
According to an aspect of the present invention, there is provided a method for detecting feature points of a dog face, including:
performing feature point detection based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, and obtaining the precisely positioned feature points comprises the following steps:
performing feature point detection based on the full-face image of the dog face and the first-level network of the detection model to obtain coarsely positioned feature points;
and positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the finely positioned feature points.
Illustratively, the method further comprises: partitioning the full-face image of the dog face according to the positions of the dog face organs to obtain the local images of the dog face.
Illustratively, positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the finely positioned feature points comprises:
positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the feature points of each local image;
and carrying out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face.
Illustratively, carrying out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face comprises the following steps:
obtaining a reference position and a rotation angle of each local image of the dog face relative to the full-face image;
carrying out coordinate transformation on the feature points of the corresponding local image according to the reference position and the rotation angle to obtain the feature points of the transformed local image;
and integrating the feature points of all the transformed local images to obtain the finely positioned feature points of the dog face.
Illustratively, the training of the detection model comprises: labeling the feature points of the dog face in the training-sample full-face images and training-sample local images based on a predetermined rule;
and training the detection model based on the labeled training-sample full-face images and training-sample local images to obtain the trained detection model.
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, labeling feature points based on the ear contour of the dog face comprises: labeling left and right boundary feature points of the ear root, a central feature point of the ear root, a feature point at the ear tip, and feature points labeled at equal intervals along the line from the ear-root central feature point to the ear-tip feature point.
Illustratively, labeling feature points based on the eye contours of the dog face comprises: labeling a left eye center feature point, the feature points where the left eye center horizontal line intersects the left and right sides of the left eye contour, and the feature points where the left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, the feature points where the right eye center horizontal line intersects the left and right sides of the right eye contour, and the feature points where the right eye center vertical line intersects the upper and lower sides of the right eye contour.
Illustratively, labeling feature points based on the eye contours of the dog face further comprises: labeling feature points at equal intervals along the left eye contour, taking as references the feature points where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour; and
labeling feature points at equal intervals along the right eye contour, taking as references the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour.
Illustratively, labeling based on the nose contour of the dog face comprises: labeling the nose tip center feature point.
Illustratively, labeling based on the mouth contour of the dog face comprises: labeling a left mouth corner feature point, an upper-lip left contour inflection feature point, an upper-lip center feature point, an upper-lip right contour inflection feature point, a right mouth corner feature point, a lower-lip left contour inflection feature point, a lower-lip center feature point and a lower-lip right contour inflection feature point.
Illustratively, labeling based on the facial contour of the dog face comprises:
labeling a head-top center feature point, the feature point where the left eye center horizontal line intersects the left facial contour, the feature point where the nose tip center horizontal line intersects the left facial contour, the feature point where the right eye center horizontal line intersects the right facial contour, and the feature point where the nose tip center horizontal line intersects the right facial contour.
Illustratively, labeling based on the facial contour of the dog face further comprises: labeling feature points at equal intervals along the left facial contour, taking as references the feature point where the left eye center horizontal line intersects the left facial contour and the feature point where the nose tip center horizontal line intersects the left facial contour; and labeling feature points at equal intervals along the right facial contour, taking as references the feature point where the right eye center horizontal line intersects the right facial contour and the feature point where the nose tip center horizontal line intersects the right facial contour.
According to another aspect of the present invention, there is provided a detection apparatus for dog face feature points, including:
the detection module is used for performing feature point detection based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, wherein the first-level network is used for performing feature point detection based on the full-face image of the dog face to obtain coarsely positioned feature points;
and the second-level network is used for positioning the coarsely positioned feature points based on the local images of the dog face to obtain the finely positioned feature points.
Illustratively, the training of the detection model comprises: labeling the feature points of the dog face in the training-sample full-face images and training-sample local images based on a predetermined rule;
and training the detection model based on the labeled training-sample full-face images and training-sample local images to obtain the trained detection model.
Illustratively, the training of the first-level network of the detection model comprises:
obtaining unlabeled dog face full-face sample images based on the training sample images before labeling, and labeled dog face full-face sample images based on the training sample images after labeling;
and training the first neural network on the labeled dog face full-face sample images to obtain the first-level network of the trained detection model.
Illustratively, the detection module is further configured to: partition the full-face image of the dog face according to the positions of the dog face organs to obtain the local images of the dog face.
Illustratively, the training of the second-level network of the detection model comprises:
obtaining unlabeled dog face local sample images based on the training sample images before labeling, and labeled dog face local sample images based on the training sample images after labeling;
and training the second neural network on the labeled dog face local sample images to obtain the second-level network of the trained detection model.
Illustratively, the second-level network is further configured to:
position the coarsely positioned feature points based on the local images of the dog face to obtain the feature points of each local image;
and carry out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face.
Illustratively, the second-level network is further configured to:
obtain a reference position and a rotation angle of each local image of the dog face relative to the full-face image;
carry out coordinate transformation on the feature points of the corresponding local image according to the reference position and the rotation angle to obtain the feature points of the transformed local image;
and integrate the feature points of all the transformed local images to obtain the finely positioned feature points of the dog face.
Illustratively, the apparatus further comprises: an output module for outputting a dog face image including the dog face feature points and/or the coordinates of the dog face feature points.
According to another aspect of the present invention, there is provided a dog face feature point detection system, comprising a memory, a processor and a computer program stored on the memory and runnable on the processor, wherein the steps of the above method are implemented when the processor executes the computer program.
According to another aspect of the present invention there is provided a computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a computer implements the steps of the above method.
According to the dog face feature point detection method, device, system and computer storage medium of the embodiments of the invention, the position information of the dog face feature points is predicted step by step, with increasing accuracy, by a cascaded neural network built on full-face and local information. This realizes high-precision positioning of the dog face feature points, effectively improves the accuracy and real-time performance of dog face feature point detection, and can be widely applied in all kinds of settings involving dog face image processing.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following more detailed description of embodiments of the present invention, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the invention; they are incorporated in and constitute a part of this specification, serve together with the embodiments to explain the invention, and do not limit the invention. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a schematic block diagram of an example electronic device for implementing a method and apparatus for detecting dog face feature points in accordance with an embodiment of the invention;
fig. 2 is a schematic flowchart of a method of detecting dog face feature points according to an embodiment of the present invention;
fig. 3 is an exemplary diagram of dog face feature points according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a detection apparatus for dog face feature points according to an embodiment of the present invention;
fig. 5 is a schematic block diagram of a dog face feature point detection system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described herein. Based on the embodiments of the invention described in this application, all other embodiments obtained by a person skilled in the art without inventive effort shall fall within the scope of the invention.
First, an example electronic device 100 for implementing the method and apparatus for detecting dog face feature points according to the embodiment of the present invention will be described with reference to fig. 1.
As shown in fig. 1, electronic device 100 includes one or more processors 101, one or more storage devices 102, an input device 103, an output device 104, an image sensor 105, which are interconnected by a bus system 106 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities and may control other components in the electronic device 100 to perform desired functions.
The storage device 102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and may be executed by the processor 101 to implement the client functions and/or other desired functions in the embodiments of the present invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 103 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 104 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image sensor 105 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage 102 for use by other components.
For example, the example electronic device for implementing the method and apparatus for detecting the dog face feature points according to the embodiments of the present invention may be implemented as a video capturing terminal such as a smart phone, a tablet computer, an access control system, and the like.
Next, a method 200 of detecting dog face feature points according to an embodiment of the present invention will be described with reference to fig. 2. The method 200 includes:
performing feature point detection based on the image containing the dog face and the trained detection model to obtain precisely positioned feature points;
the detection model comprises a first-level network and a second-level network, and the obtaining of the precisely positioned characteristic points comprises the following steps:
Performing feature point detection based on the full-face image of the dog face and a first-level network of a detection model to obtain coarsely positioned feature points;
and positioning the coarsely positioned characteristic points based on the local image of the dog face and a second-stage network of the detection model to obtain the finely positioned characteristic points.
The first-level network makes a rough estimate of the dog face feature points based on the full-face image of the dog face, yielding coarsely positioned feature points; to further improve detection accuracy, the second-level network then adjusts these coarsely positioned feature points based on the local images of the dog face, finally yielding precisely positioned feature points. The cascaded neural network, formed by the first-level network built on the full-face information of the dog face and the second-level network built on the local information, predicts the position information of the dog face feature points step by step with increasing accuracy, thereby realizing high-precision positioning of the dog face feature points. For example, the dog face feature point detection method according to the embodiment of the invention can be implemented in a device, apparatus or system having a memory and a processor.
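For illustration only, the two-stage inference just described can be sketched as follows. The networks, the helper callables and the grouping of points by organ are all assumptions made for this sketch; the patent does not prescribe a particular architecture or API.

```python
import numpy as np

# Hypothetical organ list; the grouping follows the 48-point scheme described below.
ORGANS = ["left_ear", "right_ear", "left_eye", "right_eye",
          "nose", "mouth", "left_face", "right_face"]

def detect_landmarks(full_face_image, coarse_net, refine_nets,
                     organ_indices, crop_organ, to_full_face):
    """Coarse-to-fine cascade inference (illustrative sketch).

    coarse_net    -- first-level network: full-face image -> (48, 2) coarse points
    refine_nets   -- one second-level network per organ
    organ_indices -- maps an organ name to its landmark indices
    crop_organ    -- crops/normalizes an organ patch; returns patch, ref_pos, angle
    to_full_face  -- maps local coordinates back into the full-face frame
    """
    coarse = coarse_net(full_face_image)                  # stage 1: coarse points
    fine = np.array(coarse, dtype=float)
    for organ in ORGANS:
        idx = organ_indices(organ)
        patch, ref_pos, angle = crop_organ(full_face_image, coarse[idx])
        local = refine_nets[organ](patch)                 # stage 2: refined local points
        fine[idx] = to_full_face(local, ref_pos, angle)   # back to full-face coordinates
    return fine
```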
The dog face feature point detection method according to the embodiment of the invention can be deployed at an image acquisition end, for example at the image acquisition end of an access control system; it can also be deployed at a personal terminal such as a smartphone, a tablet computer or a personal computer. Alternatively, the dog face feature point detection method according to the embodiment of the invention can be deployed in a distributed manner across a server (or cloud) and a personal terminal.
According to the method for detecting the dog face feature points, disclosed by the embodiment of the invention, the position information of the dog face feature points is predicted step by step and accurately through the cascade neural network established based on the full face and the local information, so that the high-precision positioning of the dog face feature points is realized, the accuracy and the instantaneity of the dog face feature point detection can be effectively improved, and the method can be widely applied to various occasions related to dog face image processing.
According to an embodiment of the present invention, the method 200 further comprises: labeling the feature points of the dog face in the training-sample full-face images and training-sample local images based on a predetermined rule;
and training on the labeled training-sample full-face images and training-sample local images to obtain the trained detection model.
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, labeling feature points based on the ear contour of the dog face comprises: labeling left and right boundary feature points of the ear root, a central feature point of the ear root, a feature point at the ear tip, and feature points labeled at equal intervals along the line from the ear-root central feature point to the ear-tip feature point.
Illustratively, labeling based on the eye contours of the dog face comprises: labeling a left eye center feature point, the feature points where the left eye center horizontal line intersects the left and right sides of the left eye contour, and the feature points where the left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, the feature points where the right eye center horizontal line intersects the left and right sides of the right eye contour, and the feature points where the right eye center vertical line intersects the upper and lower sides of the right eye contour.
Illustratively, labeling based on the eye contours of the dog face further comprises: labeling feature points at equal intervals along the left eye contour, taking as references the feature points where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour; and
labeling feature points at equal intervals along the right eye contour, taking as references the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour.
Illustratively, labeling based on the nose contour of the dog face comprises: labeling the nose tip center feature point.
Illustratively, labeling based on the mouth contour of the dog face comprises: labeling a left mouth corner feature point, an upper-lip left contour inflection feature point, an upper-lip center feature point, an upper-lip right contour inflection feature point, a right mouth corner feature point, a lower-lip left contour inflection feature point, a lower-lip center feature point and a lower-lip right contour inflection feature point.
In one embodiment, labeling based on the mouth contour of the dog face comprises: starting from the mouth corner on one side, sequentially labeling along the upper lip the mouth-corner feature point on that side, the upper-lip contour inflection feature point on that side, the upper-lip center feature point, the upper-lip contour inflection feature point on the other side, and the mouth-corner feature point on the other side;
and sequentially labeling along the lower lip the lower-lip contour inflection feature point on one side, the lower-lip center feature point, and the lower-lip contour inflection feature point on the other side.
Illustratively, labeling based on the facial contour of the dog face comprises:
labeling a head-top center feature point, the feature point where the left eye center horizontal line intersects the left facial contour, the feature point where the nose tip center horizontal line intersects the left facial contour, the feature point where the right eye center horizontal line intersects the right facial contour, and the feature point where the nose tip center horizontal line intersects the right facial contour.
Illustratively, labeling based on the facial contour of the dog face further comprises:
labeling feature points at equal intervals (from bottom to top) along the left facial contour, taking as reference points the feature point where the left eye center horizontal line intersects the left facial contour and the feature point where the nose tip center horizontal line intersects the left facial contour; and labeling feature points at equal intervals along the right facial contour, taking as reference points the feature point where the right eye center horizontal line intersects the right facial contour and the feature point where the nose tip center horizontal line intersects the right facial contour.
In one embodiment, as shown in fig. 3, fig. 3 shows an example of an image containing dog face feature points according to an embodiment of the present invention. Referring to fig. 3, a dog face image is taken as an example to further illustrate the labeling of a dog face according to the predetermined rule. Specifically, appropriate feature points are selected for labeling based on five basic parts of the dog face in the image shown in fig. 3: the ears, eyes, nose, mouth and facial contour.
First, feature points are labeled based on the ear contours of the dog face, specifically comprising: based on the ear contour, labeling the left and right boundary feature points 1 and 2 of the left ear root, which may be labeled from top to bottom of the left ear root; labeling the left ear root central feature point 3, the left ear tip feature point 4, and feature points 5 and 6 labeled at equal intervals along the line from the left ear root central feature point 3 to the left ear tip feature point 4. Similarly, the left and right boundary feature points 7 and 8 of the right ear root are labeled, which may be labeled from top to bottom of the right ear root, together with the right ear root central feature point 9, the right ear tip feature point 10, and feature points 11 and 12 labeled at equal intervals along the line from the right ear root central feature point 9 to the right ear tip feature point 10.
Then, labeling is performed based on the eye contours of the dog face, specifically comprising: labeling the left eye center feature point 14, the feature points 15 and 16 where the left eye center horizontal line intersects the left and right sides of the left eye contour, and the feature points 17 and 18 where the left eye center vertical line intersects the upper and lower sides of the left eye contour; taking the four feature points 15, 16, 17 and 18 where the left eye center horizontal and vertical lines intersect the left eye contour as references, the left eye contour is divided into four segments, and one feature point is labeled at equal distance in each segment, clockwise from the upper left, giving 4 feature points in total: 19, 20, 21, 22;
labeling the right eye center feature point 23, the feature points 24 and 25 where the right eye center horizontal line intersects the left and right sides of the right eye contour, and the feature points 26 and 27 where the right eye center vertical line intersects the upper and lower sides of the right eye contour; taking the four intersection feature points 24, 25, 26 and 27 on the right eye contour as references, the right eye contour is divided into four segments, and one feature point is labeled at equal distance in each segment, clockwise from the upper left, giving 4 feature points in total: 28, 29, 30, 31.
Then, labeling is performed based on the nose contour of the dog face, specifically comprising: labeling the nose tip center feature point 32.
Then, labeling is performed based on the facial contour of the dog face, specifically comprising: labeling the head-top center feature point 13, the feature point 33 where the left eye center horizontal line intersects the left facial contour, and the feature point 34 where the nose tip center horizontal line intersects the left facial contour; taking the two feature points 33 and 34 as references, labeling feature points at equal intervals along the left facial contour, for example the two feature points 35 and 36 from top to bottom;
labeling the feature point 37 where the right eye center horizontal line intersects the right facial contour and the feature point 38 where the nose tip center horizontal line intersects the right facial contour; taking the two feature points 37 and 38 as references, labeling feature points at equal intervals along the right facial contour, for example the two feature points 39 and 40 from top to bottom.
Finally, labeling is performed based on the mouth contour of the dog face, specifically comprising: labeling the left mouth corner feature point 41, the upper-lip left contour inflection feature point 42, the upper-lip center feature point 43, the upper-lip right contour inflection feature point 44, the right mouth corner feature point 45, the lower-lip left contour inflection feature point 46, the lower-lip center feature point 47 and the lower-lip right contour inflection feature point 48;
starting from the left mouth corner, the left mouth corner feature point 41, the upper-lip left contour inflection feature point 42, the upper-lip center feature point 43, the upper-lip right contour inflection feature point 44 and the right mouth corner feature point 45 can be labeled in sequence along the upper lip contour; the lower-lip left contour inflection feature point 46, the lower-lip center feature point 47 and the lower-lip right contour inflection feature point 48 are labeled in sequence along the lower lip contour. It will be appreciated that the corresponding feature points may also be labeled along the upper and lower lip contours starting from the right mouth corner.
It can be seen that, in this embodiment, the dog face is labeled with feature points based on a predetermined rule: 6 feature points are labeled for each ear, 9 for each eye, 8 for the lips, 1 for the nose and 9 for the facial contour, for a total of 48 feature points.
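Purely as a restatement of the scheme above, the 48 points can be grouped by region as in the following sketch; the group names are illustrative, and the index numbers mirror the labels of fig. 3.

```python
# The 48-point layout described above, written as index groups whose numbers
# match the labels in fig. 3 (illustrative restatement, not part of the claims).
DOG_FACE_48_POINTS = {
    "left_ear":  [1, 2, 3, 4, 5, 6],    # root boundaries, root centre, tip, 2 equidistant
    "right_ear": [7, 8, 9, 10, 11, 12],
    "left_eye":  [14, 15, 16, 17, 18, 19, 20, 21, 22],
    "right_eye": [23, 24, 25, 26, 27, 28, 29, 30, 31],
    "nose":      [32],                   # nose tip centre
    "mouth":     [41, 42, 43, 44, 45, 46, 47, 48],
    "face_contour": [13, 33, 34, 35, 36, 37, 38, 39, 40],  # head top + 4 per side
}

assert sum(len(v) for v in DOG_FACE_48_POINTS.values()) == 48
```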
It should be noted that the labeling steps described above are merely an example and do not exhaust the predetermined rule; in particular, the predetermined rule is not limited to this labeling order. In addition, the predetermined rule may increase the number of feature points according to design requirements and practical conditions, thereby improving labeling accuracy and providing a good data basis for subsequent processing.
According to an embodiment of the present invention, the method 200 further includes:
obtaining unlabeled dog face full-face sample images based on the training sample images before labeling, and labeled dog face full-face sample images based on the training sample images after labeling;
and training the first neural network on the labeled dog face full-face sample images to obtain the first-level network of the trained detection model.
According to an embodiment of the present invention, the method 200 further comprises: partitioning the full-face image of the dog face according to the positions of the dog face organs to obtain the local images of the dog face.
A local image is one of a plurality of parts into which the full-face image of the dog is divided according to the dog's facial organs, each part containing a complete facial organ. For example, the local images of the dog face may include images of the left ear, the right ear, the left eye, the right eye, the nose, the mouth, the left face and the right face.
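One simple way to obtain such a local image is sketched below, under the assumption of an axis-aligned crop around the organ's coarse feature points; the patent does not prescribe how the partition is computed.

```python
import numpy as np

def crop_local_image(full_image, organ_points, margin=0.2):
    """Cut out one organ's local image around its (coarse) feature points.

    A sketch assuming an axis-aligned box with a small margin; the patent only
    requires that each local image contain a complete facial organ.
    full_image   -- H x W x 3 array of the dog's full face
    organ_points -- (N, 2) array of (x, y) pixel coordinates for one organ
    """
    h, w = full_image.shape[:2]
    (x0, y0), (x1, y1) = organ_points.min(axis=0), organ_points.max(axis=0)
    pad_x, pad_y = (x1 - x0) * margin, (y1 - y0) * margin
    left, top = max(int(x0 - pad_x), 0), max(int(y0 - pad_y), 0)
    right = min(int(x1 + pad_x) + 1, w)
    bottom = min(int(y1 + pad_y) + 1, h)
    patch = full_image[top:bottom, left:right]
    return patch, (left, top)        # (left, top) is the patch's reference position
```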
According to an embodiment of the present invention, the method 200 further includes:
obtaining unlabeled dog face local sample images based on the training sample images before labeling, and labeled dog face local sample images based on the training sample images after labeling;
and training the second neural network on the labeled dog face local sample images to obtain the second-level network of the trained detection model.
According to an embodiment of the present invention, in the method 200, positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the finely positioned feature points comprises the following steps:
positioning the coarsely positioned feature points based on the local images of the dog face and the second-level network of the detection model to obtain the feature points of each local image;
and carrying out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face.
The first-level network of the detection model has already performed feature point detection on the full-face image of the dog face, yielding coarsely positioned feature points for the full face; to further improve their accuracy, these coarsely positioned feature points are adjusted by the second-level network. The second-level network detects feature points from local information, i.e. from the local images of the dog face: the local images input to the second-level network are normalized to a standard size, and local images corresponding to the same organ are rotationally aligned to a uniform angle before being input. The output of the first-level network can be divided into groups of coarsely positioned feature points, one group per local image; for each local image, the second-level network adjusts the corresponding group to obtain a group of feature points for that local image. Coordinate transformation and integration of the feature points of all local images then yields the finely positioned feature points of the whole dog face.
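The normalization step just mentioned might look as follows; the fixed input resolution and the rotation convention are assumptions of this sketch.

```python
import cv2

def normalize_local_image(patch, angle_deg, size=64):
    """Bring a local image to the standard size and uniform angle expected by
    the second-level network (illustrative; size and angle convention assumed).
    """
    h, w = patch.shape[:2]
    # Rotate about the patch centre so all patches of one organ share an angle
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    aligned = cv2.warpAffine(patch, M, (w, h))
    # Resize to the fixed input resolution of the second-level network
    return cv2.resize(aligned, (size, size))
```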
Illustratively, carrying out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face comprises the following steps:
obtaining a reference position and a rotation angle of each local image of the dog face relative to the full-face image;
carrying out coordinate transformation on the feature points of the corresponding local image according to the reference position and the rotation angle to obtain the feature points of the transformed local image;
and integrating the feature points of all the transformed local images to obtain the finely positioned feature points of the dog face.
The coordinates of the local-image feature points are defined relative to the local image, so the reference position of the local image relative to the full-face image must be obtained, and the positions of the local-image feature points in the full-face image are derived from that reference position. Further, since the local image input to the second-level network was rotated, the feature points of the local image must be rotated back to their positions in the full face according to the rotation angle of the local image relative to the full-face image. The coordinate transformation is a linear transformation from two-dimensional coordinates to two-dimensional coordinates that preserves the straightness and parallelism of lines in the image; for pose changes of an object in a plane, the transformed image coordinates can be obtained by applying a transformation matrix to the input coordinates.
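A sketch of this back-transformation follows; the sign and origin conventions are chosen for illustration and are not fixed by the patent.

```python
import numpy as np

def local_to_full_face(local_points, ref_pos, angle_rad):
    """Transform local-image feature points into full-face coordinates.

    Rotates the points back by the patch's rotation angle and translates them
    by the patch's reference position (illustrative conventions).
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s],
                  [s,  c]])                 # 2-D rotation matrix
    return local_points @ R.T + np.asarray(ref_pos, dtype=float)
```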
In the second-level network's feature point detection, the local images are rotationally aligned to the same angle relative to the full-face image, which ensures the accuracy of feature point detection.
According to an embodiment of the present invention, the method 200 further comprises: outputting the dog face image including the dog face feature points.
Fig. 4 shows a schematic block diagram of a dog face feature point detection apparatus 400 according to an embodiment of the present invention. As shown in fig. 4, the dog face feature point detection apparatus 400 according to an embodiment of the present invention includes:
the detection module 410 is configured to perform feature point detection based on an image including a dog face and a trained detection model, so as to obtain precisely positioned feature points;
the detection model comprises a first-level network 411 and a second-level network 412, wherein the first-level network 411 is used for detecting feature points of the full-face image of the dog face to obtain coarsely positioned feature points;
the second-level network 412 is configured to position the coarsely positioned feature points according to the local images of the dog face, so as to obtain the finely positioned feature points.
According to an embodiment of the present invention, the training of the detection model comprises: labeling the feature points of the dog face in the training-sample full-face images and training-sample local images based on a predetermined rule;
and training the detection model based on the labeled training-sample full-face images and training-sample local images to obtain the trained detection model.
Illustratively, the predetermined rule includes labeling feature points based on at least one of an ear contour, an eye contour, a nose contour, a mouth contour, and a face contour of the dog face.
Illustratively, labeling feature points based on the ear contour of the dog face comprises: labeling left and right boundary feature points of the ear root, a central feature point of the ear root, a feature point at the ear tip, and feature points labeled at equal intervals along the line from the ear-root central feature point to the ear-tip feature point.
Illustratively, labeling feature points based on the eye contours of the dog face comprises: labeling a left eye center feature point, the feature points where the left eye center horizontal line intersects the left and right sides of the left eye contour, and the feature points where the left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, the feature points where the right eye center horizontal line intersects the left and right sides of the right eye contour, and the feature points where the right eye center vertical line intersects the upper and lower sides of the right eye contour.
Illustratively, labeling feature points based on the eye contours of the dog face further comprises: labeling feature points at equal intervals along the left eye contour, taking as references the feature points where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour; and
labeling feature points at equal intervals along the right eye contour, taking as references the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour.
Illustratively, labeling feature points based on the nose contour of the dog face comprises: labeling the nose tip center feature point.
Illustratively, labeling feature points based on the mouth contour of the dog face comprises: labeling a left mouth corner feature point, an upper-lip left contour inflection feature point, an upper-lip center feature point, an upper-lip right contour inflection feature point, a right mouth corner feature point, a lower-lip left contour inflection feature point, a lower-lip center feature point and a lower-lip right contour inflection feature point.
Illustratively, labeling feature points based on the facial contour of the dog face comprises:
labeling a head-top center feature point, the feature point where the left eye center horizontal line intersects the left facial contour, the feature point where the nose tip center horizontal line intersects the left facial contour, the feature point where the right eye center horizontal line intersects the right facial contour, and the feature point where the nose tip center horizontal line intersects the right facial contour.
Illustratively, labeling feature points based on the facial contour of the dog face further comprises:
labeling feature points at equal intervals along the left facial contour, taking as reference points the feature point where the left eye center horizontal line intersects the left facial contour and the feature point where the nose tip center horizontal line intersects the left facial contour; and labeling feature points at equal intervals along the right facial contour, taking as reference points the feature point where the right eye center horizontal line intersects the right facial contour and the feature point where the nose tip center horizontal line intersects the right facial contour.
According to an embodiment of the present invention, the training of the first-level network of the detection model comprises:
obtaining unlabeled dog face full-face sample images based on the training sample images before labeling, and labeled dog face full-face sample images based on the training sample images after labeling;
and training the first neural network on the labeled dog face full-face sample images to obtain the first-level network of the trained detection model.
According to an embodiment of the present invention, the detection module 410 is further configured to: partition the full-face image of the dog face according to the positions of the dog face organs to obtain the local images of the dog face.
According to an embodiment of the present invention, the training of the second-level network of the detection model comprises:
obtaining unlabeled dog face local sample images based on the training sample images before labeling, and labeled dog face local sample images based on the training sample images after labeling;
and training the second neural network on the labeled dog face local sample images to obtain the second-level network of the trained detection model.
According to an embodiment of the present invention, the second-level network 412 is further configured to:
position the coarsely positioned feature points based on the local images of the dog face to obtain the feature points of each local image;
and carry out coordinate transformation and integration on the feature points of the local images to obtain the precisely positioned feature points of the dog face.
Illustratively, the second-level network 412 is further configured to:
obtain a reference position and a rotation angle of each local image of the dog face relative to the full-face image;
carry out coordinate transformation on the feature points of the corresponding local image according to the reference position and the rotation angle to obtain the feature points of the transformed local image;
and integrate the feature points of all the transformed local images to obtain the finely positioned feature points of the dog face.
According to an embodiment of the present invention, the apparatus 400 further includes: and the output module 420 is configured to output a dog face image including the dog face feature points and/or coordinates of the dog face feature points.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Fig. 5 shows a schematic block diagram of a dog face feature point detection system 500 according to an embodiment of the invention. The dog face feature point detection system 500 includes an image sensor 510, a storage device 530 and a processor 540.
The image sensor 510 is used to collect image data.
The storage means 530 stores program codes for implementing the respective steps in the dog face feature point detection method according to the embodiment of the present invention.
The processor 540 is configured to execute the program code stored in the storage 530 to perform the corresponding steps of the method for detecting a dog face feature point according to an embodiment of the present invention, and is configured to implement the detection module 410 in the apparatus for detecting a dog face feature point according to an embodiment of the present invention.
In addition, according to an embodiment of the present invention, there is further provided a storage medium on which program instructions are stored, which, when executed by a computer or a processor, perform the respective steps of the dog face feature point detection method according to the embodiment of the present invention and implement the respective modules in the dog face feature point detection apparatus according to the embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the foregoing storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media, for example one computer-readable storage medium containing computer-readable program code for randomly generating a sequence of action instructions and another containing computer-readable program code for performing dog face feature point detection.
In an embodiment, the computer program instructions may implement respective functional modules of the dog-face feature point detection apparatus according to the embodiment of the present invention when executed by a computer, and/or may perform the dog-face feature point detection method according to the embodiment of the present invention.
The modules in the dog face feature point detection system according to the embodiment of the present invention may be implemented by a processor of the electronic device for dog face feature point detection according to the embodiment of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in a computer readable storage medium of a computer program product according to the embodiment of the present invention are run by a computer.
According to the dog face feature point detection method, device, system and storage medium of the embodiments of the invention, the position information of the dog face feature points is predicted step by step, with increasing accuracy, by a cascaded neural network built on full-face and local information, realizing high-precision positioning of the dog face feature points; this effectively improves the accuracy and real-time performance of dog face feature point detection and can be widely applied in all kinds of settings involving dog face image processing.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present invention thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative; e.g., the division of the units is merely a logical functional division, and in actual implementation there may be other ways of division; e.g., multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in the dog face feature point detection apparatus according to embodiments of the present invention may be implemented in practice using a microprocessor or digital signal processor (DSP). The present invention may also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present invention, and the scope of the present invention is not limited thereto; variations or substitutions that would readily occur to any person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention. The protection scope of the invention shall therefore be subject to the protection scope of the claims.
Claims (12)
1. A method for detecting feature points of a dog face, characterized by comprising the following steps:
performing feature point detection based on an image containing the dog face and a trained detection model to obtain precisely positioned feature points, wherein training of the detection model comprises:
labeling feature points of the dog face in a training sample full-face image and a training sample partial image based on a predetermined rule, wherein the predetermined rule comprises labeling feature points based on an eye contour of the dog face, which comprises: labeling a left eye center feature point, feature points where a left eye center horizontal line intersects the left and right sides of the left eye contour, and feature points where a left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, feature points where a right eye center horizontal line intersects the left and right sides of the right eye contour, and feature points where a right eye center vertical line intersects the upper and lower sides of the right eye contour;
training the detection model based on the labeled training sample full-face image and training sample partial image to obtain the trained detection model;
wherein the detection model comprises a first-level network and a second-level network, and obtaining the precisely positioned feature points comprises the following steps:
performing feature point detection based on a full-face image of the dog face and the first-level network of the detection model to obtain coarsely positioned feature points;
partitioning the full-face image of the dog face according to positions of the dog face organs to obtain partial images of the dog face;
positioning the coarsely positioned feature points based on the partial images of the dog face and the second-level network of the detection model to obtain feature points of the partial images; and
performing coordinate transformation and integration on the feature points of the partial images to obtain the precisely positioned feature points of the dog face.
2. The detection method according to claim 1, wherein performing coordinate transformation and integration on the feature points of the partial images to obtain the precisely positioned feature points of the dog face comprises:
obtaining a reference position and a rotation angle of each partial image of the dog face relative to the full-face image;
performing coordinate transformation on the feature points of the corresponding partial image according to the reference position and the rotation angle to obtain feature points of the transformed partial image; and
integrating the feature points of each transformed partial image to obtain the precisely positioned feature points of the dog face.
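As a worked illustration of the transform-and-integrate step recited above, the sketch below rotates each partial image's feature points by the patch's rotation angle and translates them by its reference position before stacking them into one full-face point set. The function names and the `(points, ref_xy, angle)` tuple layout are assumptions made for illustration.

```python
import numpy as np

def transform_patch_points(local_pts, ref_xy, angle_rad):
    """Rotate patch-frame feature points by the patch's rotation angle,
    then translate them by the patch's reference position."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s],
                    [s,  c]])
    return np.asarray(local_pts) @ rot.T + np.asarray(ref_xy)

def integrate_patches(patches):
    """Concatenate the transformed points of every partial image into one
    full-face point set (the integration step)."""
    return np.vstack([transform_patch_points(p, r, a) for p, r, a in patches])

# Example: two eye-patch points anchored at (40, 60), patch rotated 5 degrees.
eye_pts = np.array([[3.0, 4.0], [7.0, 4.0]])
print(transform_patch_points(eye_pts, (40.0, 60.0), np.deg2rad(5.0)))
```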
3. The detection method according to claim 1, wherein the predetermined rule further comprises labeling feature points based on at least one of an ear contour, a nose contour, a mouth contour, and a face contour of the dog face.
4. The detection method according to claim 3, wherein labeling feature points based on the ear contour of the dog face comprises: labeling left and right boundary feature points of the auricle, a center feature point of the auricle, a feature point of the auricle tip, and feature points labeled equidistantly with the line from the auricle center feature point to the auricle tip feature point as a reference.
5. The detection method according to claim 1, wherein labeling feature points based on the eye contours of the dog face further comprises: labeling feature points equidistantly along the left eye contour with the feature points where the left eye center horizontal line and the left eye center vertical line intersect the left eye contour as references; and
labeling feature points equidistantly along the right eye contour with the feature points where the right eye center horizontal line and the right eye center vertical line intersect the right eye contour as references.
6. The detection method according to claim 3, wherein labeling based on the nose contour of the dog face comprises: labeling a center feature point of the nose tip.
7. The detection method according to claim 3, wherein labeling based on the mouth contour of the dog face comprises: labeling a left mouth corner feature point, an upper lip left contour inflection feature point, an upper lip center feature point, an upper lip right contour inflection feature point, a right mouth corner feature point, a lower lip left contour inflection feature point, a lower lip center feature point, and a lower lip right contour inflection feature point.
8. The detection method according to claim 3, wherein labeling based on the face contour of the dog face comprises:
labeling a head vertex center feature point, a feature point where the left eye center horizontal line intersects the left face contour, a feature point where the nose tip center horizontal line intersects the left face contour, a feature point where the right eye center horizontal line intersects the right face contour, and a feature point where the nose tip center horizontal line intersects the right face contour.
9. The detection method according to claim 8, wherein labeling based on the face contour of the dog face further comprises: labeling feature points equidistantly along the left face contour with the feature point where the left eye center horizontal line intersects the left face contour and the feature point where the nose tip center horizontal line intersects the left face contour as references; and labeling feature points equidistantly along the right face contour with the feature point where the right eye center horizontal line intersects the right face contour and the feature point where the nose tip center horizontal line intersects the right face contour as references.
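Several of the labeling rules above (claims 4, 5 and 9) place feature points "equidistantly" along a contour between anchor points. Assuming the contour is available as an ordered polyline of (x, y) points — an assumption made for illustration, since in practice the labeling is performed on annotated images — equidistant sampling by arc length could be sketched as:

```python
import numpy as np

def sample_equidistant(contour, k):
    """Pick k points spaced at equal arc length along an ordered polyline.

    contour : (M, 2) array of ordered (x, y) contour points
    k       : number of equidistant feature points to place
    """
    seg = np.diff(contour, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])           # length of each segment
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])  # cumulative arc length
    targets = np.linspace(0.0, cum[-1], k)             # equally spaced arc lengths
    # Interpolate x and y against cumulative arc length.
    xs = np.interp(targets, cum, contour[:, 0])
    ys = np.interp(targets, cum, contour[:, 1])
    return np.stack([xs, ys], axis=1)
```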
10. An apparatus for detecting dog face feature points, characterized by comprising:
a detection module configured to perform feature point detection based on an image containing the dog face and a trained detection model to obtain precisely positioned feature points, wherein feature points of the dog face in a training sample full-face image and a training sample partial image are labeled based on a predetermined rule; the predetermined rule comprises labeling feature points based on an eye contour of the dog face, which comprises: labeling a left eye center feature point, feature points where a left eye center horizontal line intersects the left and right sides of the left eye contour, and feature points where a left eye center vertical line intersects the upper and lower sides of the left eye contour; and
labeling a right eye center feature point, feature points where a right eye center horizontal line intersects the left and right sides of the right eye contour, and feature points where a right eye center vertical line intersects the upper and lower sides of the right eye contour;
the detection model is trained based on the labeled training sample full-face image and training sample partial image to obtain the trained detection model;
the detection model comprises a first-level network and a second-level network, the first-level network being used for performing feature point detection on the full-face image of the dog face to obtain coarsely positioned feature points;
the full-face image of the dog face is partitioned according to positions of the dog face organs to obtain partial images of the dog face;
the second-level network is used for positioning the coarsely positioned feature points according to the partial images of the dog face to obtain the precisely positioned feature points, which comprises:
positioning the coarsely positioned feature points based on the partial images of the dog face and the second-level network of the detection model to obtain feature points of the partial images; and
performing coordinate transformation and integration on the feature points of the partial images to obtain the precisely positioned feature points of the dog face.
11. A system for detecting dog face feature points, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
12. A computer readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a computer, implements the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010327006.5A CN111695405B (en) | 2018-12-28 | 2018-12-28 | Dog face feature point detection method, device and system and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811628345.6A CN109829380B (en) | 2018-12-28 | 2018-12-28 | Method, device and system for detecting dog face characteristic points and storage medium |
CN202010327006.5A CN111695405B (en) | 2018-12-28 | 2018-12-28 | Dog face feature point detection method, device and system and storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811628345.6A Division CN109829380B (en) | 2018-12-28 | 2018-12-28 | Method, device and system for detecting dog face characteristic points and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111695405A CN111695405A (en) | 2020-09-22 |
CN111695405B true CN111695405B (en) | 2023-12-12 |
Family
ID=66861476
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811628345.6A Active CN109829380B (en) | 2018-12-28 | 2018-12-28 | Method, device and system for detecting dog face characteristic points and storage medium |
CN202010327006.5A Active CN111695405B (en) | 2018-12-28 | 2018-12-28 | Dog face feature point detection method, device and system and storage medium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811628345.6A Active CN109829380B (en) | 2018-12-28 | 2018-12-28 | Method, device and system for detecting dog face characteristic points and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN109829380B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110909618B (en) * | 2019-10-29 | 2023-04-21 | 泰康保险集团股份有限公司 | Method and device for identifying identity of pet |
CN113723332A (en) * | 2021-09-07 | 2021-11-30 | 中国工商银行股份有限公司 | Facial image recognition method and device |
CN114155240A (en) * | 2021-12-13 | 2022-03-08 | 韩松洁 | Ear acupoint detection method and device and electronic equipment |
CN115240230A (en) * | 2022-09-19 | 2022-10-25 | 星宠王国(北京)科技有限公司 | Canine face detection model training method and device, and detection method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103218610A (en) * | 2013-04-28 | 2013-07-24 | 宁波江丰生物信息技术有限公司 | Formation method of dogface detector and dogface detection method |
CN103295024A (en) * | 2012-02-29 | 2013-09-11 | 佳能株式会社 | Method and device for classification and object detection and image shoot and process equipment |
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007094906A (en) * | 2005-09-29 | 2007-04-12 | Toshiba Corp | Characteristic point detection device and method |
CN103971112B (en) * | 2013-02-05 | 2018-12-07 | 腾讯科技(深圳)有限公司 | Image characteristic extracting method and device |
CN103208133B (en) * | 2013-04-02 | 2015-08-19 | 浙江大学 | The method of adjustment that in a kind of image, face is fat or thin |
CN104951743A (en) * | 2015-03-04 | 2015-09-30 | 苏州大学 | Active-shape-model-algorithm-based method for analyzing face expression |
CN107403141B (en) * | 2017-07-05 | 2020-01-10 | 中国科学院自动化研究所 | Face detection method and device, computer readable storage medium and equipment |
CN108875480A (en) * | 2017-08-15 | 2018-11-23 | 北京旷视科技有限公司 | A kind of method for tracing of face characteristic information, apparatus and system |
CN107704817B (en) * | 2017-09-28 | 2021-06-25 | 成都品果科技有限公司 | Method for detecting key points of animal face |
CN108985210A (en) * | 2018-07-06 | 2018-12-11 | 常州大学 | A kind of Eye-controlling focus method and system based on human eye geometrical characteristic |
- 2018-12-28 CN CN201811628345.6A patent/CN109829380B/en active Active
- 2018-12-28 CN CN202010327006.5A patent/CN111695405B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103295024A (en) * | 2012-02-29 | 2013-09-11 | 佳能株式会社 | Method and device for classification and object detection and image shoot and process equipment |
CN103218610A (en) * | 2013-04-28 | 2013-07-24 | 宁波江丰生物信息技术有限公司 | Formation method of dogface detector and dogface detection method |
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
Non-Patent Citations (2)
Title |
---|
Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks; Kaipeng Zhang et al.; IEEE Signal Processing Letters; Vol. 23, No. 10; full text *
Face key point localization based on cascaded convolutional neural networks; Chen Rui et al.; Journal of Sichuan University of Science & Engineering (Natural Science Edition); Vol. 30, No. 1; abstract, pp. 33-34 *
Also Published As
Publication number | Publication date |
---|---|
CN111695405A (en) | 2020-09-22 |
CN109829380B (en) | 2020-06-02 |
CN109829380A (en) | 2019-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111695405B (en) | Dog face feature point detection method, device and system and storage medium | |
CN106203305B (en) | Face living body detection method and device | |
CN106650662B (en) | Target object shielding detection method and device | |
CN105938552B (en) | Face recognition method and device for automatically updating base map | |
CN108875524B (en) | Sight estimation method, device, system and storage medium | |
CN109543663B (en) | Method, device and system for identifying identity of dog and storage medium | |
CN108920580B (en) | Image matching method, device, storage medium and terminal | |
CN108875534B (en) | Face recognition method, device, system and computer storage medium | |
CN108932456B (en) | Face recognition method, device and system and storage medium | |
JP6815707B2 (en) | Face posture detection method, device and storage medium | |
CN108875731B (en) | Target identification method, device, system and storage medium | |
CN108875493B (en) | Method and device for determining similarity threshold in face recognition | |
CN109146932B (en) | Method, device and system for determining world coordinates of target point in image | |
CN108961149B (en) | Image processing method, device and system and storage medium | |
CN106385640B (en) | Video annotation method and device | |
JP2013012190A (en) | Method of approximating gabor filter as block-gabor filter, and memory to store data structure for access by application program running on processor | |
WO2018210047A1 (en) | Data processing method, data processing apparatus, electronic device and storage medium | |
JP2011508323A (en) | Permanent visual scene and object recognition | |
CN110826610A (en) | Method and system for intelligently detecting whether dressed clothes of personnel are standard | |
CN111459269A (en) | Augmented reality display method, system and computer readable storage medium | |
CN110796130A (en) | Method, device and computer storage medium for character recognition | |
CN108875506B (en) | Face shape point tracking method, device and system and storage medium | |
JP2022521540A (en) | Methods and systems for object tracking using online learning | |
CN106682187B (en) | Method and device for establishing image base | |
CN109858363B (en) | Dog nose print feature point detection method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||