CN101877055A - Method and device for positioning key feature points

Info

Publication number
CN101877055A
CN101877055A (application CN2009102417600A / CN200910241760A)
Authority
CN
China
Prior art keywords
point
boundary curve
feature point
boundary
human face
Prior art date
Legal status (assumption, not a legal conclusion; Google has not performed a legal analysis)
Pending
Application number
CN2009102417600A
Other languages
Chinese (zh)
Inventor
Wang Junyan (王俊艳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vimicro Corp
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp
Priority to CN2009102417600A
Publication of CN101877055A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for positioning key feature points, comprising the following steps: detecting a face in a target image and determining the face region in the target image as the image to be aligned; extracting edge information from the image to be aligned to obtain an edge map of the image to be aligned; detecting all candidate feature points in the edge map according to a preset feature point selection criterion; and selecting, from the candidate feature points, the several candidate feature points that best match a set reference feature point group as the finally obtained key feature points. The invention further discloses an apparatus for positioning key feature points, comprising an image-to-be-aligned determining module, an edge map acquiring module, a candidate point determining module and a key feature point determining module. The method and apparatus of the embodiments of the invention can accurately position the key feature points in a face region; the algorithm of the scheme is simple, computationally light and fast.

Description

Method and apparatus for positioning key feature points
Technical field
The present invention relates to image recognition and processing technology, and in particular to a method and apparatus for positioning key feature points.
Background art
With the rapid development of computer technology, computing power has improved greatly; at the same time, related fields such as pattern recognition and computer vision have also developed quickly. Face detection has become a focus of research and application in these fields, and it has important practical value and widespread use in many areas such as security, entertainment and human-computer interaction.
Face detection refers to techniques, such as face localization and facial feature point positioning, that obtain the position of a face in an image and then, on the basis of that position, extract by some algorithm the information contained in the face image. At present, as applications related to face detection and recognition multiply, the algorithms for face detection and recognition grow ever more diverse and their recognition rates ever higher. Many face recognition algorithms, however, do not reach the recognition rate they should achieve in theory. The reason is that the achieved recognition rate is determined not only by the algorithm's theoretical rate but also by whether the processed images meet the algorithm's basic requirements. If the early-stage pre-processing of the images falls short of those requirements, for example if the facial feature points located on the face images to be detected and recognized are inaccurate, then feature extraction and recognition carried out on these wrong or imprecise feature points obviously cannot give good recognition results. The process of accurately locating the positions of the facial feature points in a face image is commonly called key feature point positioning. It can be seen that the factors influencing the recognition rate of a recognition algorithm include not only the algorithm's own theoretical rate but are also closely related to whether the face is effectively aligned. Effective face alignment is thus an important precondition for guaranteeing the face recognition rate, and accurate positioning of the key feature points of the face is in turn the precondition and foundation of fast face detection. At the same time, besides face detection, key feature point positioning can also be applied in many other scenarios such as face animation and facial expression effects.
To extract the key feature points of a face, early methods include those based on the geometric features of the face and those based on AdaBoost. When matching key feature points, these methods rely mainly on the distribution rule of a single key feature point, so deviations occur easily; often only approximate key point positions can be found, and the finally selected key points are not very accurate. For example, an eyeball point may be located at an eyebrow or at an eye corner point. At the same time, the robustness of these methods is poor and they are sensitive to illumination, so the stability and reliability of the feature point positioning are low. There is also a class of methods based on elastic templates whose positioning is better; but owing to the complexity of the algorithm, the processing time needed to determine the final key feature points is usually long, so such algorithms can only be used where real-time requirements are low, and their range of application is limited.
Summary of the invention
The present invention provides a method and apparatus for positioning key feature points, capable of locating the key feature points in a face region accurately and quickly.
To achieve the above object, the technical solution of the present invention is realized as follows:
A method for positioning key feature points, the method comprising:
performing face detection on a target image, and determining the face region in the target image as an image to be aligned; extracting edge information from the image to be aligned to obtain an edge map of the image to be aligned;
detecting all candidate feature points in the edge map according to a preset feature point selection criterion; and selecting, from the candidate feature points, the several candidate feature points that best match a set reference feature point group as the finally obtained key feature points.
Performing face detection on the target image and determining the face region in the target image as the image to be aligned comprises:
using a pre-trained AdaBoost classifier model for face detection to judge whether the target image is a face image; if so, determining the face region in the target image and taking the face region as the image to be aligned.
After the pre-trained AdaBoost classifier model judges the target image to be a face image and the face region in the target image is determined, and before the face region is taken as the image to be aligned, the method comprises:
normalizing the determined face region, the normalization including but not limited to size normalization and illumination normalization.
Extracting the edge information from the image to be aligned and obtaining the edge map of the image to be aligned comprises:
extracting the edge information from the image to be aligned to obtain an initial edge map of the image to be aligned;
pre-processing the initial edge map before feature point detection to obtain the final edge map used for candidate feature point detection.
Pre-processing the initial edge map before feature point detection comprises:
counting the number of pixels contained in each connected domain; if the number of pixels contained in a connected domain is less than a set noise threshold, judging the connected domain to be noise and removing it from the initial edge map. A connected domain is an edge curve or a closed region formed by edge curves; when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by the curve; and when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by the region.
After the noise is removed from the initial edge map, the method further comprises:
in the noise-removed initial edge map, dilating each edge endpoint into its 8-neighborhood by edge dilation; if two edge curves become continuous after dilation, the two edge curves are deemed to need connecting; an edge endpoint is an end point at either end of an edge curve;
for edge curves that need connecting, computing the directions of the two curves; if the angle between the two directions is within a preset angle range, directly connecting the nearest endpoints of the two curves; otherwise, computing the intersection of the two curves: if the distance from the intersection to the nearest point on each of the two curves is less than a preset distance threshold, connecting the two endpoints through the intersection as a connecting point; if the distance from the intersection to the nearest point on either curve meets or exceeds the distance threshold, making no connection.
After the edge curves that need connecting are connected, the method further comprises:
for multiple edge curves emanating from the same pixel, computing the angle between each pair of adjacent edge curves; when the angle is less than a set merging threshold, merging the two edge curves into one, the merging comprising: if the length of one curve is twice that of the other or more, directly deleting the shorter curve and taking the longer curve as the merged curve; if neither curve is twice as long as the other, fitting the two curves into one.
Detecting all candidate feature points in the edge map according to the preset feature point selection criterion comprises:
selecting, from the points of the edge map, all points that meet the preset feature point selection criterion as candidate feature points, the criterion comprising: in the edge map, at least two edge curves of different directions emanate from the point, and the angle between the edge curves of different directions is within a set candidate angle range.
Selecting from the candidate feature points the several candidate feature points that best match the set reference feature point group as the finally obtained key feature points comprises:
selecting in advance training samples of face images in which the reference feature point group has been calibrated, and obtaining by training the statistical characteristics of the distribution of the reference feature point group within the face region;
computing the degree of match between the candidate feature point groups formed from the candidate feature points and the statistical distribution characteristics of the reference feature point group within the face region, and selecting the candidate feature point group with the highest degree of match as the finally determined key feature points; the reference feature point group comprises the eye corners and mouth corners, six positions in total.
The reference feature point group comprises:
any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises eye corners, eyebrow tips, inner eyebrow ends and eyeballs, and the second feature group comprises nostrils, nose wings, mouth corners and lips.
An apparatus for positioning key feature points, the apparatus comprising:
an image-to-be-aligned determining module, configured to perform face detection on a target image and determine the face region in the target image as an image to be aligned;
an edge map acquiring module, configured to extract the edge information from the image to be aligned obtained by the image-to-be-aligned determining module, to obtain the edge map of the image to be aligned;
a candidate point determining module, configured to detect all candidate feature points, according to a preset feature point selection criterion, in the edge map obtained by the edge map acquiring module;
a key feature point determining module, configured to select, from the candidate feature points obtained by the candidate point determining module, the several candidate feature points that best match the set reference feature point group as the finally obtained key feature points.
The image-to-be-aligned determining module comprises:
a face detecting unit, configured to use the pre-trained AdaBoost classifier model for face detection to judge whether the target image is a face image, and to notify the face region acquiring unit when the target image is judged to be a face image;
a face region acquiring unit, configured to receive the notification from the face detecting unit, determine the face region in the face image, and take the face region as the image to be aligned.
The image-to-be-aligned determining module further comprises a normalizing unit;
the normalizing unit is configured to normalize the face region determined by the face region acquiring unit, the normalization including but not limited to size normalization and illumination normalization;
in this case, the face region acquiring unit is further configured to take the normalized face region as the image to be aligned.
The edge map acquiring module comprises:
an edge information extracting unit, configured to extract the edge information from the image to be aligned obtained by the image-to-be-aligned determining module, to obtain the initial edge map of the image to be aligned;
a pre-processing unit, configured to pre-process, before feature point detection, the initial edge map obtained by the edge information extracting unit, to obtain the final edge map used for candidate feature point detection.
The pre-processing unit comprises:
a noise removing sub-unit, configured to count the number of pixels contained in each connected domain and, if the number of pixels contained in a connected domain is less than the set noise threshold, judge the connected domain to be noise and remove it from the initial edge map; a connected domain is an edge curve or a closed region formed by edge curves; when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by the curve; and when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by the region.
The pre-processing unit also comprises:
a broken edge connecting sub-unit, configured to, in the edge map obtained after the processing of the noise removing sub-unit, dilate each edge endpoint into its 8-neighborhood by edge dilation; if two edge curves become continuous after dilation, the two edge curves are deemed to need connecting; an edge endpoint is an end point at either end of an edge curve;
for edge curves that need connecting, the directions of the two curves are computed; if the angle between the two directions is within the preset angle range, the nearest endpoints of the two curves are directly connected; otherwise, the intersection of the two curves is computed: if the distance from the intersection to the nearest point on each of the two curves is less than the preset distance threshold, the two endpoints are connected through the intersection as a connecting point; if the distance from the intersection to the nearest point on either curve meets or exceeds the distance threshold, no connection is made.
The pre-processing unit also comprises:
an edge branch merging sub-unit, configured to receive the edge map obtained after the processing of the broken edge connecting sub-unit and, for multiple edge curves emanating from the same pixel, compute the angle between each pair of adjacent edge curves; when the angle is less than the set merging threshold, the two edge curves are merged into one, the merging comprising: if the length of one curve is twice that of the other or more, the shorter curve is directly deleted and the longer curve is taken as the merged curve; if neither curve is twice as long as the other, the two curves are fitted into one.
The candidate point determining module comprises:
a criterion unit, configured to store the preset feature point selection criterion, the criterion comprising: in the edge map, at least two edge curves of different directions emanate from the point, and the angle between the edge curves of different directions is within a set candidate angle range;
a selecting unit, configured to select, from the points of the edge map obtained by the edge map acquiring module, all points that meet the criterion stored in the criterion unit as candidate feature points.
The key feature point determining module comprises:
a reference group feature storing unit, configured to select in advance training samples of face images in which the reference feature point group has been calibrated, and obtain by training the statistical characteristics of the distribution of the reference feature point group within the face region, the reference feature point group comprising the eye corners and mouth corners, six positions in total;
a key feature point selecting unit, configured to compute the degree of match between the candidate feature point groups formed from the candidate feature points and the statistical distribution characteristics, stored in the reference group feature storing unit, of the reference feature point group within the face region, and select the candidate feature point group with the highest degree of match as the finally determined key feature points.
Alternatively, the key feature point determining module comprises:
a reference group feature storing unit, configured to select in advance training samples of face images in which the reference feature point group has been calibrated, and obtain by training the statistical characteristics of the distribution of the reference feature point group within the face region, the reference feature point group comprising any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises eye corners, eyebrow tips, inner eyebrow ends and eyeballs, and the second feature group comprises nostrils, nose wings, mouth corners and lips;
a key feature point selecting unit, configured to compute the degree of match between the candidate feature point groups formed from the candidate feature points and the statistical distribution characteristics, stored in the reference group feature storing unit, of the reference feature point group within the face region, and select the candidate feature point group with the highest degree of match as the finally determined key feature points.
As can be seen from the above technical solutions, the method and apparatus for positioning key feature points provided by the embodiments of the invention extract edge information from the face region, select candidate feature points according to a preset feature point selection criterion, and then compare the candidate feature points with a predefined reference feature point group, selecting the best match as the final key feature points. This avoids the positioning inaccuracies that easily arise when matching relies on the distribution rule of a single key feature point, so the key feature points in the face region can be located accurately; and since the algorithm of this scheme is simple and its computational load small, the key feature points can also be located quickly.
Description of drawings
Fig. 1 is a schematic flowchart of the method for positioning key feature points in an embodiment of the invention.
Fig. 2 is a schematic diagram of the positional distribution, within the face region, of the reference feature point group constituted by the eye corners and mouth corners in an embodiment of the invention, and of the parameters used to describe that distribution.
Fig. 3 is a schematic structural diagram of the apparatus for positioning key feature points in an embodiment of the invention.
Detailed description of the embodiments
To make the object, technical solution and advantages of the present invention clearer, the present invention is described in more detail below through embodiments developed with reference to the accompanying drawings.
An embodiment of the invention first provides a method for positioning key feature points. The method divides into four steps: face detection, face region edge extraction, candidate feature point extraction, and feature point matching. Its specific flow, shown in Fig. 1, comprises:
Step 101: perform face detection on the target image, and determine the face region in the target image as the image to be aligned.
In this step, the target image is the input image on which the key feature points are to be positioned, and performing face detection on the target image means detecting whether the image contains a face region. The specific implementation may use any of the various existing face detection methods, which the embodiment of the invention does not limit; the most widely used is the AdaBoost-based classifier method, in which case the method of step 101 comprises:
using the pre-trained classifier model for face detection to judge whether the target image is a face image; if so, determining the face region in the target image and taking the face region as the image to be aligned.
Those skilled in the art will understand that the AdaBoost-based classifier method works as follows: a large number of face images is collected in advance; the face regions in those images are segmented out after calibration by hand or by other means, and the segmented face regions are used as positive samples; a large number of non-face images is additionally collected as negative samples. The feature vectors of the positive and negative samples under a chosen algorithm are then computed; these may be Haar features, HOG features, or other feature vectors commonly used in the prior art. The computed feature vectors are input to the training module of the AdaBoost classifier for training, yielding the classifier model used for face detection. Afterwards, the trained classifier model can perform face detection on a target image and thereby judge whether the target image contains a face region; the detection result includes whether the target image contains a face region and related information such as the position and size of that region. Since this part is mature existing technology and the embodiment of the invention does not modify it in step 101, the specific implementation is not described in detail here; see the technical literature on AdaBoost-based face detection algorithms.
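For concreteness only, the following minimal Python sketch obtains the face region with OpenCV's stock Haar cascade detector, which is itself trained with AdaBoost on Haar features; the patent does not prescribe any particular implementation, and the cascade file and detection parameters here are illustrative assumptions.

```python
import cv2

# A sketch of step 101 only: OpenCV's stock AdaBoost/Haar cascade face
# detector; the cascade file and parameter values are assumptions.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_region(target_image):
    gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                 # the target image is not a face image
    x, y, w, h = faces[0]           # take the first detected face region
    return gray[y:y + h, x:x + w]   # the image to be aligned
```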
It should be noted that, because the sizes of target images, and the positions and sizes of the face regions within them, all vary, an overly large face region may make the computation in subsequent processing excessive and time-consuming, while an overly small one may leave the subsequent feature points too indistinct to be distinguished and detected, degrading the success rate of feature point positioning. Therefore, to ease subsequent processing, after the face region in the target image is determined and before the face region is taken as the image to be aligned, the method may further comprise:
normalizing the face region, the normalization including but not limited to size normalization, illumination normalization, and the like.
Meanwhile, after the normalized face region is taken as the image to be aligned (i.e. between step 101 and step 102), size-related filtering and noise-suppression operations may also be performed on the image to be aligned, further reducing the computation in the subsequent steps and increasing the alignment speed.
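As a sketch only, assuming that size normalization means scaling to a fixed side length and that illumination normalization means histogram equalization (the patent leaves both open), the normalization and optional noise suppression might look like:

```python
import cv2

def normalize_face(face, side=128):
    # Size normalization: scale the face region to a fixed side length,
    # so the noise threshold and later steps see comparable inputs.
    face = cv2.resize(face, (side, side))
    # Illumination normalization: histogram equalization is one simple choice.
    face = cv2.equalizeHist(face)
    # Optional noise suppression before edge extraction in step 102.
    return cv2.GaussianBlur(face, (3, 3), 0)
```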
Step 102: extract the edge information from the image to be aligned, and obtain the edge map of the image to be aligned.
Step 102 comprises:
extracting the edge information from the image to be aligned to obtain the initial edge map of the image to be aligned;
pre-processing the initial edge map before feature point detection to obtain the final edge map used for candidate feature point detection, the pre-processing including but not limited to: noise removal, connection of broken edges, and screening and merging of edge branches.
The purpose of extracting the edge information from the image to be aligned is to determine the places in the face region where the gray level changes violently, i.e. where the gray value changes sharply from one value to a markedly different one within a very small area; these places are usually where the facial identity information is concentrated. As with step 101, any of the various existing edge extraction methods may be used, such as the commonly used Canny edge operator; the embodiment of the invention does not limit the specific implementation. Those skilled in the art may consult the literature on edge extraction algorithms, which for reasons of space is not detailed here.
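A one-call sketch of the extraction using the Canny operator named above; the two thresholds are illustrative, not values from the patent:

```python
import cv2

def initial_edge_map(aligned, low=50, high=150):
    # cv2.Canny returns a binary edge map: 255 on edge pixels, 0 elsewhere.
    return cv2.Canny(aligned, low, high)
```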
The method of pre-processing the initial edge map before feature point detection to obtain the final edge map used for candidate feature point detection depends on which treatment measures the pre-processing includes. Below, only the specific methods of the several pre-processing measures listed above are briefly introduced. It should be noted that the pre-processing step is open-ended: any treatment measure that helps reduce the amount or complexity of the edge information in the initial edge map may be included in it.
1. Noise removal
The initial edge map obtained after edge extraction inevitably contains some breakpoints and noise, such as edge information formed by shadows due to the illumination conditions, by skin wrinkles, or by physiological features on the face (moles, scars and the like). This noise needs to be removed; the specific method comprises:
counting the number of pixels contained in each connected domain; if the number of pixels contained in a connected domain is less than the set noise threshold, judging the connected domain to be noise and removing it from the initial edge map. A connected domain may be an edge curve or a closed region formed by edge curves: when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by the curve; and when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by the region.
Further, the noise threshold can be tied to the size of the face region: for example, with the face region width W and a set noise coefficient a, the noise threshold for a face region of any size is aW, so that the threshold adjusts adaptively to the size of the face region.
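A sketch of the noise removal with the adaptive threshold aW, using OpenCV's connected-component labeling. One simplification: every 8-connected component is measured by its own pixel count, which matches the rule above for edge curves; for closed curves the patent counts the enclosed area instead, which would need an extra fill step omitted here. The coefficient a = 0.1 is an assumed value.

```python
import cv2
import numpy as np

def remove_noise(edge_map, a=0.1):
    threshold = a * edge_map.shape[1]   # aW, with W the face region width
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        edge_map, connectivity=8)
    cleaned = np.zeros_like(edge_map)
    for i in range(1, n):               # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= threshold:
            cleaned[labels == i] = 255  # keep: large enough, not noise
    return cleaned
```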
2. Connection of broken edges
Among the edge information in the initial edge map, edges that are in fact continuous in the face region may break during edge extraction because of differences in edge strength or external factors such as illumination. To avoid the loss of edge information that such breaks cause, the broken edges need to be connected. It must therefore first be judged which edge curves in the edge map are broken edges; the specific method comprises:
for the initial edge map processed by noise removal, dilating each edge endpoint into its 8-neighborhood by edge dilation; if two edge curves become continuous after dilation, the two edge curves are deemed to need connecting; here an edge endpoint refers to an end point at either end of an edge curve.
After the edge curves that need connecting are selected by the above step, the broken edges are connected; the specific method comprises:
Edge curves that need connecting usually fall into several cases:
In one case, an edge is broken into two or more segments in the middle; the edges on both sides of the break have the same or similar direction, and if the nearest endpoints are connected, the direction of the connecting part is the same as or similar to that of the original broken edge. For this case, the connection is made between the two nearest endpoints, that is, the nearest endpoints of the two edge curves are directly connected.
In the other case, the break is at a corner point: the edge should have a corner at some position, but the edge was not extracted there, so the corner disappears and the edge breaks into two edges of different directions. For this case, the intersection of the two edge curves is computed; if the distance from the intersection to the nearest point on each of the two curves is less than a preset distance threshold, the two edges are connected through the intersection as a connecting point. If the distance from the intersection to the nearest point on either edge meets or exceeds this distance threshold, the two edges are deemed not to belong to the same edge, and no connection is made. The sketch after this paragraph illustrates both cases.
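The sketch below handles a pair of broken edge curves given as ordered point arrays; the direction estimate, the 15-degree and 5-pixel thresholds, and the use of end tangent lines for the intersection are all simplifying assumptions, not values from the patent.

```python
import numpy as np

def connect_broken(c1, c2, angle_thresh=15.0, dist_thresh=5.0):
    # c1, c2: (N, 2) arrays of points ordered along each edge curve, with
    # the facing break endpoints assumed to be c1[-1] and c2[-1].
    def end_dir(c):
        d = c[-1] - (c[-5] if len(c) >= 5 else c[0])
        return d / (np.linalg.norm(d) + 1e-9)

    d1, d2 = end_dir(c1), end_dir(c2)
    angle = np.degrees(np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0)))
    if angle < angle_thresh:
        # Case 1: same or similar direction, join the nearest endpoints.
        return np.vstack([c1, c2[::-1]])
    # Case 2: a missing corner; intersect the two end tangent lines.
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                     # parallel tangents, no intersection
    t = np.linalg.solve(A, c2[-1] - c1[-1])
    p = c1[-1] + t[0] * d1              # intersection point of the tangents
    if max(np.linalg.norm(p - c1[-1]), np.linalg.norm(p - c2[-1])) < dist_thresh:
        return np.vstack([c1, p[None, :], c2[::-1]])  # connect through p
    return None                         # too far from both curves: no connection
```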
3. Screening and merging of edge branches
Since more than two edge branches emanating from the same position may exist simultaneously in the edge map of a face, the screening and merging of edge branches needs to screen and merge only those branches whose mutual angle is especially small. The method of screening and merging edge branches therefore comprises:
for multiple edge branches from the same position, computing the angle between each pair of adjacent branches; if the angle is too small (for example less than 5 degrees; other angles may of course be set, and the embodiment of the invention does not limit this), the branches are merged into the same edge curve. The merging method is: if one branch is much longer than the other (for example the longer branch is more than twice the length of the shorter), the shorter branch is deleted and the longer branch is taken as the merged edge curve; otherwise, the two branches are fitted into one curve. The specific fitting may adopt any of the various existing curve fitting algorithms; the embodiment of the invention does not limit this, and since the related content likewise belongs to the routine means of those skilled in the art, no further examples are given here.
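A sketch of the merge rule under the example values just given (5 degrees, a factor of two), with a least-squares quadratic fit standing in for the unspecified curve fitting and the point count standing in for curve length:

```python
import numpy as np

def merge_branches(b1, b2, angle_deg, merge_thresh=5.0):
    # b1, b2: (N, 2) point arrays of two branches leaving the same position.
    if angle_deg >= merge_thresh:
        return b1, b2                   # angle not small enough, keep both
    if len(b1) >= 2 * len(b2):
        return (b1,)                    # the much shorter branch is deleted
    if len(b2) >= 2 * len(b1):
        return (b2,)
    # Comparable lengths: fit a single curve through the union of the points.
    pts = np.vstack([b1, b2])
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), len(pts))
    return (np.column_stack([xs, np.polyval(coeffs, xs)]),)
```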
Step 103: detect all candidate feature points in the edge map according to the preset feature point selection criterion.
The method of this step comprises:
selecting, from the points of the edge map, all points that meet the preset feature point selection criterion as candidate feature points, the criterion comprising: in the edge map, at least two edge curves of different directions emanate from the point, and the angle between the edge curves of different directions is within a set candidate angle range.
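A sketch of the criterion, assuming the edge curves leaving each point have already been traced so that only their unit direction vectors are needed; the candidate angle range of 30 to 150 degrees is an assumed setting:

```python
import numpy as np

def is_candidate(directions, low=30.0, high=150.0):
    # directions: unit vectors of the edge curves emanating from one point.
    # At least two curves of different directions must leave the point, with
    # the angle between them inside the candidate angle range.
    for i in range(len(directions)):
        for j in range(i + 1, len(directions)):
            cos = np.clip(directions[i] @ directions[j], -1.0, 1.0)
            angle = np.degrees(np.arccos(cos))
            if low <= angle <= high:
                return True
    return False
```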
Step 104: select, from the candidate feature points, the several candidate feature points that best match the set reference feature point group as the finally obtained key feature points.
To achieve feature point positioning, positions with obvious gray-level change within the face region need to be selected as key feature points. Usually at least two classes are selected from among the inner eyebrow ends/eyebrow tips, eyeballs/eye corners, nostrils/nose wings, mouth corners/lips and the like; in the embodiment of the invention, the eye corners and mouth corners are selected as the reference feature point group. As is readily understood, the eye corners and mouth corners comprise in total the 2 corners of each eye and the 2 corners of the mouth, 6 points in all. Accordingly, the reference feature point group here is constituted by the eye corners and mouth corners.
Before matching, a batch of face image training samples is selected in advance, the mouth corners and eye corners in the samples are calibrated by hand or other means, the features of the distribution of the mouth corners and eye corners within the face region are computed, and the statistical rule of the distribution of the reference feature point group within the face region is derived. For example, as shown in Fig. 2, for the reference feature point group formed by the eye corners and mouth corners, with the meanings of d1 to d5 and h as shown in Fig. 2, the statistical rule of the distribution of the eye corners and mouth corners can be derived through computation and statistics over a large number of training samples: ratios such as d1/d2, d1/d3, d4/d5, d1/h and d2/h each have a certain distribution rule and value range.
Afterwards, when step 104 is performed, the candidate feature points selected in step 103 are combined according to the positional relation of the eye corners and mouth corners; the degree of match between the candidate feature point groups so formed and the statistics of the reference feature point group derived from the training samples is then computed, and the candidate feature point group with the highest degree of match is selected; the position of each candidate feature point in that group is then a finally determined key feature point used in feature point positioning. The degree of match may be computed with any existing matching algorithm or combination thereof, and the embodiment of the invention does not limit this. One possible method is: enumerate all candidate feature point groups formed from the candidate feature points; for each group, compute the mean and variance of each of the corresponding ratios (the ratios described above: d1/d2, d1/d3, d4/d5, d1/h and d2/h); judge candidate feature points whose variance values lie within 3 times the variance range of the reference feature point group to have matched successfully; then compute, for each candidate group, the sum of the variances between the candidate ratios and those of the reference feature point group; the group in which every point has matched successfully and whose variance sum is smallest is the best matching group. If no candidate group has all 6 points matching, matching can proceed with candidate groups of 5 candidate points in the same way as with 6; and if a matching group of 5 candidate points exists, the optimal position of the 6th feature point can further be computed statistically, giving the finally obtained key feature point group. Other matching algorithms may of course also be used; for reasons of space they are not enumerated one by one.
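The enumeration-and-variance matching just described is sketched below for the 6-point case. The five ratios depend on the geometry of Fig. 2, which is not reproduced here, so the ratio measurement is passed in by the caller; the acceptance band of 3 times the reference variance follows the text, and everything else is an assumption.

```python
import numpy as np
from itertools import combinations

def best_group(candidates, ratio_fn, ref_mean, ref_var):
    # candidates: list of candidate feature points; ratio_fn maps a 6-point
    # group to the five ratios (d1/d2, d1/d3, d4/d5, d1/h, d2/h) of Fig. 2.
    # ref_mean, ref_var: per-ratio statistics from the calibrated samples.
    best, best_score = None, np.inf
    for group in combinations(candidates, 6):
        dev = (np.asarray(ratio_fn(group)) - ref_mean) ** 2
        # A group matches only if every ratio deviation lies within 3x the
        # variance observed on the reference training samples.
        if np.all(dev <= 3.0 * ref_var):
            score = dev.sum()           # variance sum used to rank matches
            if score < best_score:
                best, best_score = group, score
    return best
```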
Those of ordinary skill in the art will readily recognize that the degree of match between a candidate feature point group and the reference feature point group may be computed with any of various mature existing algorithms, which are not enumerated here one by one.
Finally, it should be emphasized that the choice of the eye corners and mouth corners as the reference feature point group above is only an example of an embodiment of the present invention; in practical applications, other combinations of key feature points may equally serve as the reference feature point group. Of the key feature points enumerated above, the positions of the eyebrow tips, inner eyebrow ends and lips may be unstable and irregular in shape owing to eyebrow plucking, lipstick, make-up and the like; at the same time, the eyeballs and lips may have a large range of movement because of the motion of the eyes or lips, so their positions are also insufficiently stable. The key feature points with more stable positions and more regular shapes therefore comprise the eye corners, nostrils/nose wings and mouth corners; and since the nostrils and nose wings are in essence regions of the face rather than single pixels, it is difficult to make their positioning results pixel-accurate, whereas the positions of the eye corners and mouth corners are relatively fixed. Hence the reference feature point group constituted by the eye corners and mouth corners should give the best results, but other choices, such as a combination of eye corners and lips or of mouth corners and eyeballs, can realize key feature point positioning as well; only in some extreme cases may the positioning be slightly worse. That is, the reference feature point group may be any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises eye corners, eyebrow tips, inner eyebrow ends and eyeballs, and the second feature group comprises nostrils, nose wings, mouth corners and lips.
On the basis of the method described above, the embodiment of the invention also provides an apparatus for positioning key feature points. Its composition, shown in Fig. 3, comprises: an image-to-be-aligned determining module 310, an edge map acquiring module 320, a candidate point determining module 330 and a key feature point determining module 340.
The image-to-be-aligned determining module 310 is configured to perform face detection on the target image and determine the face region in the target image as the image to be aligned.
The edge map acquiring module 320 is configured to extract the edge information from the image to be aligned obtained by the image-to-be-aligned determining module 310, to obtain the edge map of the image to be aligned.
The candidate point determining module 330 is configured to detect all candidate feature points, according to the preset feature point selection criterion, in the edge map obtained by the edge map acquiring module 320.
The key feature point determining module 340 is configured to select, from the candidate feature points obtained by the candidate point determining module 330, the several candidate feature points that best match the set reference feature point group as the finally obtained key feature points.
The image-to-be-aligned determining module 310 comprises a face detecting unit 311 and a face region acquiring unit 312.
The face detecting unit 311 is configured to use the pre-trained AdaBoost classifier model for face detection to judge whether the target image is a face image, and to notify the face region acquiring unit 312 when the target image is judged to be a face image.
The face region acquiring unit 312 is configured to receive the notification from the face detecting unit 311, determine the face region in the face image, and take the face region as the image to be aligned.
The image-to-be-aligned determining module 310 may further comprise a normalizing unit 313.
The normalizing unit 313 is configured to normalize the face region determined by the face region acquiring unit 312, the normalization including but not limited to size normalization and illumination normalization.
In this case, the face region acquiring unit 312 is further configured to take the normalized face region as the image to be aligned.
The edge map acquiring module 320 comprises an edge information extracting unit 321 and a pre-processing unit 322.
The edge information extracting unit 321 is configured to extract the edge information from the image to be aligned obtained by the image-to-be-aligned determining module 310, to obtain the initial edge map of the image to be aligned.
The pre-processing unit 322 is configured to pre-process, before feature point detection, the initial edge map obtained by the edge information extracting unit 321, to obtain the final edge map used for candidate feature point detection.
The pre-processing unit 322 comprises a noise removing sub-unit 3221.
The noise removing sub-unit 3221 is configured to count the number of pixels contained in each connected domain and, if the number of pixels contained in a connected domain is less than the set noise threshold, judge the connected domain to be noise and remove it from the initial edge map. A connected domain is an edge curve or a closed region formed by edge curves; when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by the curve; and when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by the region.
Preferably, the pre-processing unit 322 also comprises a broken edge connecting sub-unit 3222.
The broken edge connecting sub-unit 3222 is configured to, in the edge map obtained after the processing of the noise removing sub-unit 3221, dilate each edge endpoint into its 8-neighborhood by edge dilation; if two edge curves become continuous after dilation, the two edge curves are deemed to need connecting; an edge endpoint is an end point at either end of an edge curve.
For edge curves that need connecting, the directions of the two curves are computed; if the angle between the two directions is within the preset angle range, the nearest endpoints of the two curves are directly connected; otherwise, the intersection of the two curves is computed: if the distance from the intersection to the nearest point on each of the two curves is less than the preset distance threshold, the two endpoints are connected through the intersection as a connecting point; if the distance from the intersection to the nearest point on either curve meets or exceeds the distance threshold, no connection is made.
Preferably, the pre-processing unit 322 may also comprise an edge branch merging sub-unit 3223.
The edge branch merging sub-unit 3223 is configured to receive the edge map obtained after the processing of the broken edge connecting sub-unit 3222 and, for multiple edge curves emanating from the same pixel, compute the angle between each pair of adjacent edge curves; when the angle is less than the set merging threshold, the two edge curves are merged into one, the merging comprising: if the length of one curve is twice that of the other or more, the shorter curve is directly deleted and the longer curve is taken as the merged curve; if neither curve is twice as long as the other, the two curves are fitted into one.
The candidate point determining module 330 comprises a criterion unit 331 and a selecting unit 332.
The criterion unit 331 is configured to store the preset feature point selection criterion, the criterion comprising: in the edge map, at least two edge curves of different directions emanate from the point, and the angle between the edge curves of different directions is within a set candidate angle range.
The selecting unit 332 is configured to select, from the points of the edge map obtained by the edge map acquiring module 320, all points that meet the criterion stored in the criterion unit 331 as candidate feature points.
The key feature point determining module 340 comprises a reference group feature storing unit 341 and a key feature point selecting unit 342.
The reference group feature storing unit 341 is configured to select in advance training samples of face images in which the reference feature point group has been calibrated, and obtain by training the statistical characteristics of the distribution of the reference feature point group within the face region, the reference feature point group comprising any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises eye corners, eyebrow tips, inner eyebrow ends and eyeballs, and the second feature group comprises nostrils, nose wings, mouth corners and lips.
The key feature point selecting unit 342 is configured to compute the degree of match between the candidate feature point groups formed from the candidate feature points and the statistical distribution characteristics, stored in the reference group feature storing unit 341, of the reference feature point group within the face region, and select the candidate feature point group with the highest degree of match as the key feature points finally determined and used for feature point positioning.
It should again be noted that, in practical applications, the positions of the eyebrow tips, inner eyebrow ends and lips may be unstable and irregular in shape owing to eyebrow plucking, lipstick, make-up and the like; at the same time, the eyeballs and lips may have a large range of movement because of the motion of the eyes or lips, so their positions are also insufficiently stable. The key feature points with more stable positions and more regular shapes therefore comprise the eye corners, nostrils/nose wings and mouth corners; and since the nostrils and nose wings are in essence regions of the face rather than single pixels, it is difficult to make their positioning results pixel-accurate, whereas the positions of the eye corners and mouth corners are relatively fixed. It is therefore readily understood that the reference feature point group constituted by the eye corners and mouth corners should give the best results, but other choices can realize key feature point positioning as well; only in some extreme cases may the positioning be slightly worse.
As can be seen from the above, the method and apparatus for positioning key feature points provided by the embodiments of the invention extract edge information from the face region, select candidate feature points according to a preset feature point selection criterion, and then compare the candidate feature points with a predefined reference feature point group, selecting the best match as the final key feature points. This avoids the positioning inaccuracies that easily arise when matching relies on the distribution rule of a single key feature point, so the key feature points in the face region can be located accurately; and since the algorithm of this scheme is simple and its computational load small, the key feature points can also be located quickly.

Claims (20)

1. A method for positioning key feature points, characterized in that the method comprises:
performing face detection on a target image, and determining the face region in the target image as an image to be aligned; extracting edge information from the image to be aligned to obtain an edge map of the image to be aligned;
detecting all candidate feature points in the edge map according to a preset feature point selection criterion; and selecting, from the candidate feature points, the several candidate feature points that best match a set reference feature point group as the finally obtained key feature points.
2. The method according to claim 1, characterized in that performing face detection on the target image and determining the face region in the target image as the image to be aligned comprises:
using a pre-trained AdaBoost classifier model for face detection to judge whether the target image is a face image; if so, determining the face region in the target image and taking the face region as the image to be aligned.
3. The method according to claim 2, characterized in that, after the pre-trained AdaBoost classifier model for face detection judges the target image to be a face image and the face region in the target image is determined, and before the face region is taken as the image to be aligned, the method comprises:
normalizing the determined face region, the normalization including but not limited to size normalization and illumination normalization.
4. The method according to claim 1, characterized in that extracting the edge information from the image to be aligned and obtaining the edge map of the image to be aligned comprises:
extracting the edge information from the image to be aligned to obtain an initial edge map of the image to be aligned;
pre-processing the initial edge map before feature point detection to obtain the final edge map used for candidate feature point detection.
5. The method according to claim 4, characterized in that pre-processing the initial edge map before feature point detection comprises:
counting the number of pixels contained in each connected domain; if the number of pixels contained in a connected domain is less than a set noise threshold, judging the connected domain to be noise and removing it from the initial edge map; wherein a connected domain is an edge curve or a closed region formed by edge curves; when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by the curve; and when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by the region.
6. The method according to claim 5, characterized in that, after the noise is removed from the initial edge map, the method further comprises:
in the noise-removed initial edge map, dilating each edge endpoint into its 8-neighborhood by edge dilation; if two edge curves become continuous after dilation, deeming the two edge curves to need connecting; wherein an edge endpoint is an end point at either end of an edge curve;
for edge curves that need connecting, computing the directions of the two curves; if the angle between the two directions is within a preset angle range, directly connecting the nearest endpoints of the two curves; otherwise, computing the intersection of the two curves: if the distance from the intersection to the nearest point on each of the two curves is less than a preset distance threshold, connecting the two endpoints through the intersection as a connecting point; if the distance from the intersection to the nearest point on either curve meets or exceeds the distance threshold, making no connection.
7. The method according to claim 6, wherein, after the edge curves that need to be connected are connected, the method further comprises:
for multiple edge curves starting from the same pixel, computing the angle between each pair of adjacent edge curves; when the angle is less than a preset merging threshold, merging the two edge curves into one edge curve; the merging comprises: if one curve is at least twice as long as the other, directly deleting the shorter curve and taking the longer curve as the merged curve; if neither curve is twice as long as the other, fitting a single curve through the two curves.
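A sketch of this merge rule for two curves leaving the same pixel: drop the shorter curve when one is at least twice as long as the other, otherwise fit a single curve through the combined points. The least-squares polynomial fit (which assumes the merged curve can be written as y = f(x)) is an assumption; the claim does not specify the fitting method.

    import numpy as np

    def merge_curves(curve_a, curve_b, degree=2):
        # Rule 1: a curve at least twice as long as the other wins outright.
        if len(curve_a) >= 2 * len(curve_b):
            return np.asarray(curve_a)
        if len(curve_b) >= 2 * len(curve_a):
            return np.asarray(curve_b)
        # Rule 2: otherwise fit one curve through the combined points.
        points = np.vstack([curve_a, curve_b]).astype(float)
        xs, ys = points[:, 0], points[:, 1]
        coeffs = np.polyfit(xs, ys, degree)   # least-squares polynomial fit
        merged_x = np.unique(xs)              # sorted unique x positions
        merged_y = np.polyval(coeffs, merged_x)
        return np.column_stack([merged_x, merged_y])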
8. The method according to claim 1, wherein detecting all candidate feature points from the edge map according to a preset feature point selection criterion comprises:
selecting, from the points of the edge map, all points that meet the preset feature point selection criterion as candidate feature points; the feature point selection criterion comprising: in the edge map, at least two edge curves of different directions pass through the point, and the angle between the edge curves of different directions is within a preset candidate angle range.
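A sketch of this selection criterion, approximating the local directions of the curves through a pixel by the directions to its 8-neighborhood edge pixels; the candidate angle range is an assumed value.

    import numpy as np

    # Sketch of claim 8: a pixel qualifies as a candidate feature point if at
    # least two edge curves of sufficiently different directions pass through it.
    def is_candidate_point(edge_map, y, x, angle_range=(20.0, 90.0)):
        h, w = edge_map.shape
        directions = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and edge_map[ny, nx]:
                    # Undirected orientation of the neighbor, in [0, 180).
                    directions.append(np.degrees(np.arctan2(dy, dx)) % 180.0)
        for i in range(len(directions)):
            for j in range(i + 1, len(directions)):
                diff = abs(directions[i] - directions[j])
                diff = min(diff, 180.0 - diff)   # fold to [0, 90]
                if angle_range[0] <= diff <= angle_range[1]:
                    return True
        return False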
9. The method according to any one of claims 1 to 8, wherein selecting, from the candidate feature points, the several candidate feature points with the highest matching degree to a preset reference feature point group as the finally obtained key feature points comprises:
selecting in advance training samples of face images on which the reference feature point group has been calibrated, and obtaining through training the statistical features of the distribution of the reference feature point group within the face region;
computing the matching degree between each candidate feature point group formed from the candidate feature points and the statistical distribution of the reference feature point group within the face region, and selecting the candidate feature point group with the highest matching degree as the finally determined key feature points; the reference feature point group comprising six positions in total: the eye corners and the mouth corners.
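One plausible reading of this matching step: summarize the trained distribution of the six reference points as a mean vector and covariance of their stacked coordinates, and score each six-point candidate group by Mahalanobis distance, where a lower distance means a higher matching degree. This statistical model is an assumption; the claim only says a statistical feature of the distribution is obtained through training.

    import numpy as np
    from itertools import combinations

    # Sketch of claim 9's selection: pick the 6-point candidate group whose
    # stacked coordinates best match the trained distribution. For brevity each
    # combination is scored in the order generated; a full implementation would
    # also permute the point-to-landmark assignments.
    def best_candidate_group(candidates, mean, cov_inv, group_size=6):
        best_group, best_dist = None, np.inf
        for group in combinations(candidates, group_size):
            v = np.asarray(group, dtype=float).ravel()   # stack the (x, y) pairs
            diff = v - mean
            d = diff @ cov_inv @ diff                    # squared Mahalanobis distance
            if d < best_dist:
                best_group, best_dist = group, d
        return best_group

Exhaustive enumeration over combinations grows quickly with the number of candidates, so a practical implementation would prune the candidate set first, for example by coarse position constraints within the face region.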
10. The method according to any one of claims 1 to 8, wherein the reference feature point group comprises:
any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises the eye corners, the eyebrows, the region between the eyebrows and the eyeballs, and the second feature group comprises the nostrils, the nose wings, the mouth corners and the lips.
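The combinations this claim describes can be enumerated directly; the sketch below pairs every non-empty subset of the first feature group with every non-empty subset of the second. The English feature names are direct readings of the claim, not identifiers from the patent.

    from itertools import combinations

    FIRST_GROUP = ["eye corner", "eyebrow", "between eyebrows", "eyeball"]
    SECOND_GROUP = ["nostril", "nose wing", "mouth corner", "lip"]

    def reference_groups():
        # Yield every pairing of a non-empty subset of the first group with a
        # non-empty subset of the second, as claim 10 describes.
        for i in range(1, len(FIRST_GROUP) + 1):
            for first in combinations(FIRST_GROUP, i):
                for j in range(1, len(SECOND_GROUP) + 1):
                    for second in combinations(SECOND_GROUP, j):
                        yield first + second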
11. A device for positioning key feature points, wherein the device comprises:
an image-to-be-aligned determining module, configured to perform face detection on a target image and determine the face region in the target image as the image to be aligned;
an edge map acquiring module, configured to extract the edge information in the image to be aligned obtained by the image-to-be-aligned determining module, and obtain the edge map of the image to be aligned;
a candidate point determining module, configured to detect, according to a preset feature point selection criterion, all candidate feature points from the edge map obtained by the edge map acquiring module;
a key feature point determining module, configured to select, from the candidate feature points obtained by the candidate point determining module, the several candidate feature points with the highest matching degree to a preset reference feature point group as the finally obtained key feature points.
12. The device according to claim 11, wherein the image-to-be-aligned determining module comprises:
a face detecting unit, configured to use a pre-trained Adaboost classifier model for face detection to judge whether the target image is a face image, and to notify the face region acquiring unit when the target image is judged to be a face image;
a face region acquiring unit, configured to receive the notification from the face detecting unit, determine the face region in the face image, and take the face region as the image to be aligned.
13. The device according to claim 12, wherein the image-to-be-aligned determining module further comprises a normalization unit;
the normalization unit is configured to perform normalization on the face region determined by the face region acquiring unit, the normalization including but not limited to size normalization and illumination normalization;
in this case, the face region acquiring unit is further configured to take the normalized face region as the image to be aligned.
14. The device according to claim 11, wherein the edge map acquiring module comprises:
an edge information extracting unit, configured to extract the edge information in the image to be aligned obtained by the image-to-be-aligned determining module, and obtain the initial edge map of the image to be aligned;
a pre-processing unit, configured to pre-process, before feature point detection, the initial edge map obtained by the edge information extracting unit, and obtain the final edge map used for candidate feature point detection.
15. The device according to claim 14, wherein the pre-processing unit comprises:
a noise removing subunit, configured to count the number of pixels contained in each connected domain, and, if the number of pixels in a connected domain is less than a preset noise threshold, judge that connected domain to be noise and remove it from the initial edge map; wherein a connected domain is either an edge curve or a closed region formed by edge curves; when the connected domain is an edge curve, the number of pixels it contains is the number of pixels occupied by that curve; when the connected domain is a closed region formed by edge curves, the number of pixels it contains is the total number of pixels enclosed by that region.
16. The device according to claim 15, wherein the pre-processing unit further comprises:
a broken edge connecting subunit, configured to, on the edge map output by the noise removing subunit, dilate each edge point into its 8-neighborhood, and, if two edge curves become continuous after dilation, judge that the two edge curves need to be connected; wherein an edge point is an endpoint at either end of an edge curve;
and, for two edge curves that need to be connected, compute the directions of the two curves; if the angle between the directions of the two curves is within a preset angle range, connect the edge points along the shortest line; otherwise, compute the intersection point of the two curves: if the distance from the intersection point to the nearest point on each of the two curves is less than a preset distance threshold, connect the two edge points through the intersection point as the connecting point; if the distance from the intersection point to the nearest point on either curve reaches or exceeds the distance threshold, directly connect the endpoints of the two edges.
17. The device according to claim 16, wherein the pre-processing unit further comprises:
an edge curve merging subunit, configured to receive the edge map output by the broken edge connecting subunit, and, for multiple edge curves starting from the same pixel, compute the angle between each pair of adjacent edge curves; when the angle is less than a preset merging threshold, merge the two edge curves into one edge curve; the merging comprises: if one curve is at least twice as long as the other, directly deleting the shorter curve and taking the longer curve as the merged curve; if neither curve is twice as long as the other, fitting a single curve through the two curves.
18. The device according to claim 11, wherein the candidate point determining module comprises:
a policy unit, configured to store the preset feature point selection criterion, the criterion comprising: in the edge map, at least two edge curves of different directions pass through the point, and the angle between the edge curves of different directions is within a preset candidate angle range;
a selecting unit, configured to select, from the points of the edge map obtained by the edge map acquiring module, all points that meet the feature point selection criterion stored in the policy unit as candidate feature points.
19. The device according to any one of claims 11 to 18, wherein the key feature point determining module comprises:
a reference feature group storing unit, configured to select in advance training samples of face images on which the reference feature point group has been calibrated, and to obtain through training the statistical features of the distribution of the reference feature point group within the face region, the reference feature point group comprising six positions in total: the eye corners and the mouth corners;
a key feature point selecting unit, configured to compute the matching degree between each candidate feature point group formed from the candidate feature points and the statistical distribution, stored in the reference feature group storing unit, of the reference feature point group within the face region, and to select the candidate feature point group with the highest matching degree as the finally determined key feature points.
20. The device according to any one of claims 11 to 18, wherein the key feature point determining module comprises:
a reference feature group storing unit, configured to select in advance training samples of face images on which the reference feature point group has been calibrated, and to obtain through training the statistical features of the distribution of the reference feature point group within the face region, the reference feature point group comprising: any combination of one or more feature points from a first feature group with one or more feature points from a second feature group, wherein the first feature group comprises the eye corners, the eyebrows, the region between the eyebrows and the eyeballs, and the second feature group comprises the nostrils, the nose wings, the mouth corners and the lips;
a key feature point selecting unit, configured to compute the matching degree between each candidate feature point group formed from the candidate feature points and the statistical distribution, stored in the reference feature group storing unit, of the reference feature point group within the face region, and to select the candidate feature point group with the highest matching degree as the finally determined key feature points.
CN2009102417600A 2009-12-07 2009-12-07 Method and device for positioning key feature point Pending CN101877055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102417600A CN101877055A (en) 2009-12-07 2009-12-07 Method and device for positioning key feature point

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009102417600A CN101877055A (en) 2009-12-07 2009-12-07 Method and device for positioning key feature point

Publications (1)

Publication Number Publication Date
CN101877055A (en) 2010-11-03

Family

ID=43019608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102417600A Pending CN101877055A (en) 2009-12-07 2009-12-07 Method and device for positioning key feature point

Country Status (1)

Country Link
CN (1) CN101877055A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779267A (en) * 2011-05-12 2012-11-14 株式会社理光 Method and device for detection of specific object region in image
CN102779267B (en) * 2011-05-12 2015-08-12 株式会社理光 The method and apparatus of specific object region in detected image
CN104679011A (en) * 2015-01-30 2015-06-03 南京航空航天大学 Image matching navigation method based on stable branch characteristic point
CN104966046A (en) * 2015-05-20 2015-10-07 腾讯科技(深圳)有限公司 Method and device for evaluating face key point positioning result
US10706263B2 (en) 2015-05-20 2020-07-07 Tencent Technology (Shenzhen) Company Limited Evaluation method and evaluation device for facial key point positioning result
US10331940B2 (en) 2015-05-20 2019-06-25 Tencent Technology (Shenzhen) Company Limited Evaluation method and evaluation device for facial key point positioning result
CN107615295A (en) * 2015-05-21 2018-01-19 北京市商汤科技开发有限公司 For the apparatus and method for the facial key feature for positioning face-image
CN107615295B (en) * 2015-05-21 2020-09-25 北京市商汤科技开发有限公司 Apparatus and method for locating key features of face image
WO2016192477A1 (en) * 2015-05-29 2016-12-08 腾讯科技(深圳)有限公司 Method and terminal for locating critical point of face
US10068128B2 (en) 2015-05-29 2018-09-04 Tencent Technology (Shenzhen) Company Limited Face key point positioning method and terminal
CN105869122A (en) * 2015-11-24 2016-08-17 乐视致新电子科技(天津)有限公司 Image processing method and apparatus
WO2017088462A1 (en) * 2015-11-24 2017-06-01 乐视控股(北京)有限公司 Image processing method and device
CN105869139A (en) * 2015-11-24 2016-08-17 乐视致新电子科技(天津)有限公司 Image processing method and apparatus
CN106934812A (en) * 2015-12-25 2017-07-07 北京展讯高科通信技术有限公司 Image-signal processor and its image-signal processing method
CN108133221A (en) * 2016-12-01 2018-06-08 佳能株式会社 Object shapes detection device and method, image processing apparatus and monitoring system
CN106503697A (en) * 2016-12-05 2017-03-15 北京小米移动软件有限公司 Target identification method and device, face identification method and device
CN106919898A (en) * 2017-01-16 2017-07-04 北京龙杯信息技术有限公司 Feature modeling method in recognition of face
CN107066932A (en) * 2017-01-16 2017-08-18 北京龙杯信息技术有限公司 The detection of key feature points and localization method in recognition of face
CN108985134A (en) * 2017-06-01 2018-12-11 重庆中科云丛科技有限公司 Face In vivo detection and brush face method of commerce and system based on binocular camera
CN107943527A (en) * 2017-11-30 2018-04-20 西安科锐盛创新科技有限公司 The method and its system of electronic equipment is automatically closed in sleep
CN109558864A (en) * 2019-01-16 2019-04-02 苏州科达科技股份有限公司 Face critical point detection method, apparatus and storage medium
CN110850220A (en) * 2019-11-29 2020-02-28 苏州大学 Electrical appliance detection method, device and system
CN112700464A (en) * 2021-01-15 2021-04-23 腾讯科技(深圳)有限公司 Map information processing method and device, electronic equipment and storage medium
CN114332148A (en) * 2021-12-14 2022-04-12 成都乐信圣文科技有限责任公司 Detection method and device for unclosed line segments of wire frame graph
CN114332148B (en) * 2021-12-14 2023-04-07 成都乐信圣文科技有限责任公司 Detection method and device for unclosed line segments of wire frame graph
CN114743252A (en) * 2022-06-10 2022-07-12 中汽研汽车检验中心(天津)有限公司 Feature point screening method, device and storage medium for head model
CN114743252B (en) * 2022-06-10 2022-09-16 中汽研汽车检验中心(天津)有限公司 Feature point screening method, device and storage medium for head model

Similar Documents

Publication Publication Date Title
CN101877055A (en) Method and device for positioning key feature point
CN107358206B (en) Micro-expression detection method based on region-of-interest optical flow features
CN104361326A (en) Method for distinguishing living human face
CN111144293A (en) Human face identity authentication system with interactive living body detection and method thereof
CN104851140A (en) Face recognition-based attendance access control system
Kawulok et al. Precise multi-level face detector for advanced analysis of facial images
US20120087543A1 (en) Image-based hand detection apparatus and method
CN101447023B (en) Method and system for detecting human head
CN104573634A (en) Three-dimensional face recognition method
EP2858007A1 (en) Sift feature bag based bovine iris image recognition method
WO2019014814A1 (en) Method for quantitatively detecting forehead wrinkles on human face, and intelligent terminal
US20160171323A1 (en) Apparatus for recognizing iris and operating method thereof
CN108960156A (en) A kind of Face datection recognition methods and device
CN105844727A (en) Intelligent dynamic human face recognition attendance checking record management system
CN104008364A (en) Face recognition method
CN107862298B (en) Winking living body detection method based on infrared camera device
CN111339885A (en) User identity determination method based on iris recognition and related device
CN105404854A (en) Methods and devices for obtaining frontal human face images
CN113920591A (en) Medium-distance and long-distance identity authentication method and device based on multi-mode biological feature recognition
CN107346544B (en) Image processing method and electronic equipment
CN109961004B (en) Polarized light source face detection method and system
CN102930259A (en) Method for extracting eyebrow area
CN112149559A (en) Face recognition method and device, readable storage medium and computer equipment
CN104732202A (en) Method for eliminating influence of glasses frame during human eye detection
CN106778451A (en) A kind of eyeglass detection method of face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20101103