WO2013143390A1 - Face calibration method and system, and computer storage medium - Google Patents

Face calibration method and system, and computer storage medium

Info

Publication number
WO2013143390A1
WO2013143390A1 · PCT/CN2013/072518 · CN2013072518W
Authority
WO
WIPO (PCT)
Prior art keywords
centroid
point
face
area
corner
Prior art date
Application number
PCT/CN2013/072518
Other languages
English (en)
French (fr)
Inventor
王晖
谢晓境
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited filed Critical Tencent Technology (Shenzhen) Company Limited
Priority to CA2867365A priority Critical patent/CA2867365C/en
Priority to SG11201405684WA priority patent/SG11201405684WA/en
Priority to AP2014007969A priority patent/AP2014007969A0/xx
Priority to EP13770054.8A priority patent/EP2833288B1/en
Priority to RU2014142591/08A priority patent/RU2601185C2/ru
Priority to KR1020147029988A priority patent/KR101683704B1/ko
Publication of WO2013143390A1 publication Critical patent/WO2013143390A1/zh
Priority to PH12014501995A priority patent/PH12014501995A1/en
Priority to ZA2014/06837A priority patent/ZA201406837B/en
Priority to US14/497,191 priority patent/US9530045B2/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present invention relates to face detection technology, and in particular, to a face calibration method and system, and a computer storage medium.
  • Face calibration is widely used in various face recognition based products.
  • the accuracy of face calibration is very important for face recognition.
  • more and more portable communication devices also have face recognition functions, such as face recognition and smile photographing of digital cameras, face unlocking of mobile devices, and the like.
  • a face calibration method includes the following steps: preprocessing a picture; extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points; extracting centroids from the connected regions of the corner points; matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
  • the pre-processing includes one or more of gradation adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
  • the step of extracting corner points from the preprocessed picture is: calculating the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and extracting pixels whose brightness difference is greater than or equal to a first threshold as corner points;
  • the 3×3 template is the area formed by the current pixel at its center together with the pixels to the left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right of the current pixel.
  • the step of filtering the corner points is: identifying skin color points in the preprocessed picture, and filtering out corner points with no skin color point within a preset surrounding range; taking the center of the intersection of the YCgCr and
  • YCbCr color spaces as the skin color center, calculating the Cb, Cg, and Cr component values of each corner point, calculating the distance from these component values to the skin color center, and filtering out corner points whose distance is greater than a second threshold.
  • the step of extracting centroids from the connected regions of the corner points is: screening out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range; extracting the center point of each screened connected region as a centroid; calculating the direction of each extracted centroid, and removing centroids whose direction has a verticality within a set verticality range.
  • the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the left eye vertex and the right eye vertex.
  • the step of matching the centroids against the face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face
  • region is: traversing the centroids and, for each centroid point, performing: taking the current first centroid point as the left eye vertex of the face template, and searching for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold; searching for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold; calculating the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point; determining whether the matching probability is greater than or equal to the predetermined value, and if so, locating the area formed by the first, second, and third centroid points as a candidate face region.
  • after the step of matching the centroids against the face template, calculating the matching probability, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as a candidate
  • face region, the method further includes: dividing the candidate face region into a set number of grid cells and calculating the skin color proportion in each cell; and screening out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region.
  • a face calibration system comprising: a preprocessing module for preprocessing a picture; a corner extraction module for extracting corner points from the preprocessed picture; a corner filtering and merging module for filtering and merging the corner points to obtain connected regions of corner points; a centroid extraction module for extracting centroids from the connected regions of the corner points; and a candidate face region locating module for matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
  • the pre-processing includes one or more of gradation adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
  • the corner extraction module is configured to calculate the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and to extract pixels whose brightness difference is greater than or equal to the first threshold as corner points;
  • the 3×3 template is the area formed by the current pixel at its center together with the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right.
  • the corner filtering and merging module is configured to identify skin color points in the preprocessed picture and to filter out corner points with no skin color point within a preset surrounding range;
  • the corner filtering and merging module is further configured to take the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of each corner point, calculate the distance from these component values to the skin color center, and filter out corner points whose distance is greater than the second threshold.
  • the centroid extraction module includes: a connected region screening unit, configured to screen out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range; a centroid extraction unit, configured to extract the center point of each screened connected region as a centroid; and a centroid removal unit, configured to calculate the direction of each extracted centroid and remove centroids whose direction has a verticality within a set verticality range.
  • the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the left eye vertex and the right eye vertex.
  • the candidate face area location module includes:
  • a search unit, configured to, for each centroid point, take the current first centroid point as the left eye vertex of the face template and search for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold, and further configured to search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold;
  • a matching probability calculation unit, configured to calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point;
  • a region locating unit, configured to determine whether the matching probability is greater than or equal to a predetermined value and, if so, to locate the area formed by the first, second, and third centroid points as a candidate face region.
  • the system further includes: a region screening module, configured to divide the candidate face region into a set number of grid cells, calculate the skin color proportion in each cell, and screen out candidate
  • face regions whose skin color proportions satisfy the preset skin color proportion distribution as the final face region.
  • One or more computer storage media containing computer-executable instructions for performing a face calibration method, the method comprising the steps of: preprocessing a picture; extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points; extracting centroids from the connected regions of the corner points;
  • and matching the centroids against the face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face region.
  • in the above face calibration method and system and computer storage media, the matching probability between centroids and a face template is calculated, and the region formed by centroids whose matching probability is greater than or equal to a predetermined value is located as a candidate face region. This probabilistic face-template
  • model can be flexibly stretched and rotated to match faces more accurately, and the algorithm is efficient, so both the efficiency and the accuracy of face calibration are improved.
  • Figure 1 is a schematic flow chart of a face calibration method in an embodiment
  • Figure 2 is a histogram on the R channel of a picture
  • Figure 3 is a schematic diagram of the 3×3 template in one embodiment
  • FIG. 4 is a schematic flow chart of extracting a centroid in a connected region of a corner point in an embodiment
  • FIG. 5 is a schematic diagram of a face template in an embodiment
  • FIG. 6 is a schematic flow chart of positioning a candidate face region in an embodiment
  • FIG. 7 is a schematic diagram of matching a centroid and a face template in one embodiment
  • Figure 8 is a schematic diagram of a skin color ratio model in one embodiment
  • FIG. 9 is a structural block diagram of a face calibration system in an embodiment
  • FIG. 10 is a structural block diagram of a centroid extraction module in one embodiment
  • FIG. 11 is a structural block diagram of a candidate face area positioning module in an embodiment
  • Figure 12 is a block diagram showing the structure of a face calibration system in another embodiment.
  • a method for calibrating a face includes the following steps: Step S102: Pre-processing a picture.
  • the preprocessing of the picture includes one or more of gradation adjustment, automatic white balance, scale normalization, and image mosaic of the picture. After the image is preprocessed, the subsequent calculation amount can be effectively reduced, thereby improving the calibration efficiency.
  • Step S104 extracting corner points in the preprocessed picture, filtering and combining the corner points to obtain a connected area of the corner points.
  • the corner point refers to a point where the brightness of the surrounding area changes sharply, and the picture obtained by extracting the corner point can be regarded as a contour map.
  • not all corner points extracted from the preprocessed picture are the facial-feature corners required, so the corner points must be filtered to remove those unrelated to the facial features.
  • the filtered corner points cluster locally, for example around the eyes and mouth. Therefore, locally clustered corner points can be merged to obtain connected regions of corner points.
  • Step S106 extracting the centroid in the connected region of the corner point.
  • the centroid is the center point of the connected area of the corner point.
  • the centroid can effectively represent the main features of the face, including the eyes, nose, mouth, etc. After extracting the centroid, it can be used for subsequent face template matching.
  • Step S108 Match the centroid with the face template, calculate the matching probability of the centroid and the face template, and locate the region formed by the centroid whose matching probability is greater than or equal to the predetermined value as the candidate face region.
  • the probabilistic model of the face template can be flexibly stretched, rotated, and more accurately matched to the face, and the algorithm is efficient, so the efficiency and accuracy of the face calibration can be improved.
  • the pre-processing of the picture includes gradation adjustment of the picture, automatic white balance, scale normalization, and image mosaic.
  • the gradation adjustment of the image refers to adjusting the gradation of the picture.
  • the color scale is an index standard that indicates the brightness intensity of the picture.
  • the color fullness and fineness of the picture are determined by the color level.
  • the level adjustment can be applied to the R, G, and B channels of the picture separately. As shown in Figure 2, which is the histogram of the R channel of a picture, the shadow and highlight regions containing little data can be removed, the left and right boundaries adjusted to the interval [left, right], and the values on R then remapped to the interval [0, 255].
  • the interval [left, right] is the level information retained after removing the shadow and highlight regions.
  • the new R/G/B value can be calculated according to the following formulas:
  • Diff = right − left
  • newRGB = (oldRGB − left) × 255 / Diff
  • where newRGB is the new R/G/B value and oldRGB is the R/G/B value before the level adjustment.
  • the contrast of the picture after the level adjustment is improved and edges become clearer, which benefits subsequent skin color recognition, corner filtering, and the like.
  • the automatic white balance can be performed according to a formula in which
  • R′, G′, and B′ are the three component values of the picture after white balance, R, G, and B are the three component values of the original image,
  • and R̄, Ḡ, and B̄ are the means of the R, G, and B channels over the picture.
  • the pictures can be size-normalized, that is, scaled.
  • the picture may be scaled while keeping the original aspect ratio, or without keeping it.
  • preferably, the picture is scaled keeping the original aspect ratio.
  • for example, a picture with a height greater than 400 px can be reduced, preserving aspect ratio, to a height of 400 px, while a picture with a height less than 400 px keeps its original size and is not enlarged.
  • the picture may be mosaicked after size normalization, converting it into a mosaic image; this allows corner points to be extracted more accurately, and extracting corner points on the mosaic picture also greatly speeds up processing.
  • for example, at the 400 px scale the mosaic size can be chosen as 2×2 px,
  • and the new pixel value is the average of the four pixels.
  • the specific process of extracting corner points from the preprocessed picture in step S104 is: calculate the brightness difference between the current pixel and its surrounding pixels according to the predefined 3×3 template, and extract pixels whose brightness difference is greater than or equal to the first threshold as corner points; the 3×3 template is the area formed by the current pixel at its center together with the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right.
  • as shown in Figure 3, if the current pixel is C,
  • the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right are A1, A, B, B1, A3, B2, B3, and A2, respectively.
  • the area formed by these 9 pixels is the defined 3×3 template.
  • the brightness difference between the current pixel and its surrounding pixels is calculated according to the following formulas, where f denotes the luminance component of a pixel (0–255), e.g. fA is the luminance of the pixel A to the right of point C, and the weights may be taken as w1 = w2 = w3 = w4 = 1:
  • rA1 = w1 × (fA − fC)² + w1 × (fA1 − fC)²
  • rB1 = w2 × (fB − fC)² + w2 × (fB1 − fC)²
  • R1 = min(rA1, rB1)
  • rA2 = w3 × (fA2 − fC)² + w3 × (fA3 − fC)²
  • rB2 = w4 × (fB2 − fC)² + w4 × (fB3 − fC)²
  • R2 = min(rA2, rB2)
  • if R1 and R2 are both smaller than a given threshold T1 (e.g. T1 = 200), the current pixel is not a corner point; otherwise, with weights w5 = w6 = w7 = w8 = 1, the following are calculated:
  • B1 = w5 × (fB − fA) × (fA − fC) + w5 × (fB1 − fA1) × (fA1 − fC)
  • B2 = w6 × (fB − fA1) × (fA1 − fC) + w6 × (fB1 − fA) × (fA − fC)
  • B3 = w7 × (fB2 − fA2) × (fA2 − fC) + w7 × (fB3 − fA3) × (fA3 − fC)
  • B4 = w8 × (fB2 − fA3) × (fA3 − fC) + w8 × (fB3 − fA2) × (fA2 − fC)
  • mB1 = min(B1, B2), mB2 = min(B3, B4)
  • A1 = rB1 − rA1 − 2 × mB1, A2 = rB2 − rA2 − 2 × mB2
  • R1 = rA1 − mB1² / A1, when mB1 < 0 and A1 + mB1 > 0
  • R2 = rA2 − mB2² / A2, when mB2 < 0 and A2 + mB2 > 0
  • the final R1 and R2 are the brightness difference between the current pixel and its surrounding pixels; if both are smaller than a given threshold T2 (e.g. T2 = 700), the pixel is not a corner point, otherwise it is.
  • the step of filtering the corner points in step S104 is: identify the skin color points in the preprocessed picture and filter out corner points with no skin color point within a preset surrounding range; take the center of the intersection of the YCgCr and
  • YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of each corner point, calculate the distance from these component values to the skin color center, and filter
  • out corner points whose distance is greater than the second threshold. In this embodiment, corner points surrounded by skin color are retained and corner points farther from skin color are removed.
  • the skin color can be extracted simultaneously in both the YCbCr and YCgCr spaces, and the accuracy of the extraction is better.
  • in the YCgCr color space, the skin color range is Cg ∈ [85, 135] and Cr ∈ [−Cg + 260, −Cg + 280]; in the YCbCr color space, the skin color range is Cb ∈ [77, 127] and Cr ∈ [133, 173]; in both color spaces, Y ∈ [16, 235].
  • for each pixel of the preprocessed picture, the Y, Cb, Cr, and Cg components are calculated from the pixel's RGB values (with R, G, and B normalized to [0, 1], using the usual BT.601-style offsets):
  • Y = 16 + 65.481 × R + 128.553 × G + 24.966 × B
  • Cb = 128 − 37.797 × R − 74.203 × G + 112 × B
  • Cg = 128 − 81.085 × R + 112 × G − 30.915 × B
  • Cr = 128 + 112 × R − 93.786 × G − 18.214 × B
  • if the calculated Y, Cb, Cr, and Cg components satisfy both skin color ranges above, the pixel is a skin color pixel (a skin color point). If no skin color point lies within the preset range around an extracted corner point, that corner point is filtered out.
  • the center (Pcr, Pcg, Pcb) of the intersection of the two color spaces is taken as the skin color center; for each pixel in the preprocessed image, after its Cb, Cr, and Cg components are calculated, the Euclidean distance from the pixel to the skin color center is computed. If this distance is greater than the second threshold, the pixel cannot be skin color, and the corner point is filtered out.
  • a corner point binary map can be obtained after filtering the corner points, but because the number of extracted corner points is still relatively large at this stage, matching the face template directly on the binary map would require a large amount of computation; since many corner points cluster locally, adjacent corner points can be merged to reduce subsequent computation.
  • a distance function may be predefined, and adjacent corner points are merged when the distance between them satisfies a preset condition.
  • the merging of adjacent corner points can use conventional pixel labeling, run-length connectivity, or region growing algorithms, which are not described here. After merging the corner points, connected regions of corner points are obtained.
  • as shown in FIG. 4, in one embodiment, the specific process of step S106 is:
  • Step S116: Screen out connected regions whose area is greater than or equal to the third threshold and/or whose width-to-height ratio is within a preset range.
  • because some of the obtained connected regions of corner points may not conform to the characteristics of a face, the connected regions must be filtered.
  • specifically, connected regions whose area is smaller than the third threshold may be removed; and/or connected regions whose aspect ratio is outside the preset range.
  • for example, with the third threshold set to 450, connected regions with an area of 450 or more are retained.
  • the preset range may require the aspect ratio to be greater than 0.5 and less than 5.
  • the third threshold may be set according to the scale of the face template to facilitate subsequent matching of the face template.
  • Step S126 extracting a center point in the selected connected area as a centroid.
  • Step S136: Calculate the direction of each extracted centroid, and remove centroids whose direction has a verticality within the set verticality range.
  • a centroid is a vector; its direction depends on its position in the image and represents the edge orientation of the region it belongs to.
  • the traditional Sobel operator (an edge extraction operator) can be used to calculate the direction of a centroid,
  • and centroids whose direction is close to vertical are removed, since they are centroids extracted from vertical edges.
  • after step S136, the resulting centroids can be used for face template matching.
  • each centroid can be expressed as (P, R, D), where P is the center point of the merged connected region, R is the radius of the connected region, and D is the density of the connected region.
  • the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side that is parallel to the left eye vertex and the right eye vertex.
  • the face template is a rectangular template including at least three points, each represented by (P, w, h), where P is the two-dimensional coordinate of the point, w is the maximum lateral range allowed to the left and right of the point,
  • and h is the maximum longitudinal range allowed above and below the point.
  • as shown in FIG. 5, the left eye vertex is p0,
  • the right eye vertex is p1,
  • and p2 is the third point.
  • the dotted points in Fig. 5 indicate the positions where point p2 may lie once p0 and p1 are determined.
  • as shown in FIG. 6, in one embodiment, the specific process of step S108 is: traverse the centroid points and, for each centroid point, perform:
  • Step S118: Take the current first centroid point as the left eye vertex of the face template, and search for a second centroid point whose distance from the right eye vertex is less than or equal to the fourth threshold.
  • if no second centroid point is found, the face template does not match; if one is found, step S128 is performed. As shown in FIG. 7, width and height are the width and height of the face template, and the second centroid point found is c1.
  • Step S128: Search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and right eye vertex lie, is less than or equal to the fourth threshold.
  • if no third centroid point is found, the face template does not match; if one is found, step S138 is performed. As shown in Figure 7, the third centroid point found is c2.
  • Step S138 Calculate a matching probability according to a distance between the second centroid point and the right eye vertex, a vertical distance between the third centroid point and the other side, and a shortest distance between the third centroid point and the third point.
  • after the second centroid point is found, a first probability value may be calculated from the distance between the second centroid point and the right eye vertex. Referring to Figure 7, the first probability value can be calculated as:
  • s1 = 1 − d1 / threshold
  • where s1 is the first probability value, d1 is the distance between the second centroid point c1 and the right eye vertex, and threshold is the fourth threshold.
  • after the third centroid point is found, a second probability value may be calculated from the vertical distance between the third centroid point and the other side. Referring to Figure 7, the second probability value can be calculated as:
  • s2 = 1 − d2 / threshold
  • where s2 is the second probability value, d2 is the vertical distance between the third centroid point c2 and the other side line1, and threshold is the fourth threshold.
  • the distances between the third centroid point c2 and all the third points of the face template can be calculated to obtain the shortest distance.
  • as shown in Figure 7, the shortest distance is the distance d3 between the third centroid point c2 and the third point p4. If d3 is greater than width/5, the face template does not match; otherwise,
  • the third probability value can be calculated as:
  • s3 = 1 − d3 / (width / 5)
  • where s3 is the third probability value, d3 is the shortest distance,
  • and width is the width of the face template.
  • the matching probability is calculated from the three probability values above.
  • in one embodiment, the matching probability can be calculated as: p = 3 × s1 + s2 + s3
  • Step S148: Determine whether the matching probability is greater than or equal to the predetermined value; if so, proceed to step S158, otherwise end.
  • for example, for a 250 px × 250 px face template, the fourth threshold can be set to 50 px and the predetermined value to 0.8.
  • Step S158 Position an area formed by the first centroid point, the second centroid point, and the third centroid point as a candidate face area.
  • the area formed by the first centroid point c0, the second centroid point c1, and the third centroid point c2 is located as a candidate face region.
  • the search can be performed in a variety of ways. For example, a full search can be performed, in which every centroid is tried as the left eye vertex of the face template. To improve efficiency, a partial search can be performed instead, ignoring unqualified centroids during the search and thus speeding up the whole process. For example, a centroid surrounded by a large dark area clearly cannot be the starting left-eye position; the neighborhood of facial-feature centroids should not contain very large vertically or horizontally arranged centroids; centroids in regions close to the face template border can be ignored; and elliptical or arc-shaped centroid arrangements close to the template size can be ignored.
  • the candidate face regions may be further screened after step S108. The specific process is: divide the candidate face region into a set number of grid cells and calculate the skin color proportion in each cell; screen out candidate face regions whose skin color proportions satisfy the preset skin color proportion distribution as the final face region.
  • the face area can be divided into nine cells, and the skin color ratio in each cell is calculated separately.
  • the skin color ratio is the ratio of the skin color pixel points in the grid to all the pixel points of the grid.
  • the skin color recognition method can be used to identify the skin color pixel points, and will not be described here.
  • with the skin color proportions of the cells denoted p1–p9 and thresholds T1 and T2 set,
  • the candidate face region is taken as the final face region when:
  • p1, p3, p4, p7, p8, p9 >= T1; |p3 − p1| < T2; |p6 − p4| < T2; |p9 − p7| < T2
  • where T1 can be set to 0.5 and T2 can be set to 0.5.
  • a face calibration system includes a preprocessing module 10, a corner extraction module 20, a corner filtering and merging module 30, a centroid extraction module 40, and a candidate face region locating module 50, wherein:
  • the pre-processing module 10 is used to pre-process the picture.
  • the preprocessing performed by the preprocessing module 10 on the picture includes one or more of gradation adjustment, automatic white balance, scale normalization, and image mosaic of the picture. After the image is preprocessed, the subsequent calculation amount can be effectively reduced, thereby improving the calibration efficiency.
  • the corner extraction module 20 is for extracting corner points in the preprocessed picture.
  • the corner filtering and merging module 30 is used to filter and merge the corner points to obtain a connected region of the corner points.
  • the centroid extraction module 40 is for extracting the centroid in the connected region of the corner points.
  • the candidate face area locating module 50 is configured to match the centroid with the face template, calculate the matching probability of the centroid and the face template, and locate the area formed by the centroid whose matching probability is greater than or equal to the predetermined value as the candidate face area.
  • the probabilistic model of the face template can be flexibly stretched, rotated, and more accurately matched to the face, and the algorithm is efficient, so the efficiency and accuracy of the face calibration can be improved.
  • the corner extraction module 20 is configured to calculate the brightness difference between the current pixel and its surrounding pixels according to the predefined 3×3 template, and to extract pixels whose brightness difference is greater than or equal to the first threshold as corner points;
  • the 3×3 template is the area formed by the current pixel at its center together with the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right.
  • the corner filtering and merging module 30 is configured to identify skin color points in the preprocessed picture and to filter out corner points with no skin color point within a preset surrounding range; it is also configured to take the center of the intersection of the YCgCr and YCbCr
  • color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of each corner point, calculate the distance from these component values to the skin color center, and filter out corner points whose
  • distance is greater than the second threshold. In this embodiment, corner points surrounded by skin color are retained and corner points farther from skin color are removed.
  • a corner point binary map can be obtained after filtering the corner points, but because the number of extracted corner points is still relatively large at this stage, matching the face template directly on the binary map would require a large amount of computation; since many corner points cluster locally, adjacent corner points can be merged to reduce subsequent computation.
  • the corner filtering and combining module 30 is configured to predefine a distance function, and merge when the distance between adjacent corner points satisfies a preset condition.
  • the merging of adjacent corner points can use conventional pixel labeling, run-length connectivity, or region growing algorithms, which are not described here. After merging the corner points, connected regions of corner points are obtained.
  • the centroid extraction module 40 includes a connected region selection unit 410, a centroid extraction unit 420, and a centroid removal unit 430, where:
  • the connected area screening unit 410 is configured to filter out the connected area whose area is greater than or equal to the third threshold and/or the aspect ratio within a preset range.
  • the centroid extracting unit 420 is configured to extract the center point in the selected connected region as the centroid.
  • the centroid removal unit 430 is configured to calculate the direction of each extracted centroid and to remove centroids whose direction has a verticality within the set verticality range.
  • a centroid is a vector; its direction depends on its position in the image and represents the edge orientation of the region it belongs to.
  • the centroid removal unit 430 can use the traditional Sobel operator (an edge extraction operator) to calculate centroid directions and remove centroids whose direction is close to vertical, since they are centroids extracted from vertical edges.
  • the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side that is parallel to the left eye vertex and the right eye vertex.
  • the candidate face region locating module 50 includes a search unit 510, a matching probability calculation unit 520, and a region locating unit 530, where:
  • the search unit 510 is configured to take the current first centroid point as the left eye vertex of the face template and search for a second centroid point whose distance from the right eye vertex is less than or equal to the fourth threshold; and is further configured to search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and right eye vertex lie, is less than or equal to the fourth threshold.
  • the matching probability calculation unit 520 is configured to calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point. The region locating
  • unit 530 is configured to determine whether the matching probability is greater than or equal to the predetermined value and, if so, to locate the area formed by the first centroid point, the second centroid point, and the third centroid point as a candidate face region.
  • when searching for centroid points, the search unit can search in a variety of ways. For example, a full search can be performed, in which every centroid is tried as the left eye vertex of the face template. To improve efficiency, a partial search can be performed instead, ignoring unqualified centroids during the search and thus speeding up the whole process.
  • for example, a centroid surrounded by a large dark area clearly cannot be the starting left-eye position; the neighborhood of facial-feature centroids should not contain very large vertically or horizontally arranged centroids; centroids in regions close to the face template border can be ignored; and elliptical or arc-shaped centroid arrangements close to the template size can be ignored.
  • the face calibration system further includes a region selection module 60, wherein:
  • the area selection module 60 is configured to divide the candidate face area into a set number of grids, and calculate the skin color ratio in each grid; and select a candidate face area whose skin color ratio satisfies the preset skin color ratio distribution as the final face area.
  • the face area can be divided into nine cells, and the skin color ratio in each cell is calculated separately.
  • the skin color ratio is the ratio of the skin color pixel points in the grid to all the pixel points of the grid.
  • the skin color recognition method can be used to identify the skin color pixel points, and will not be described here.
  • the region screening module 60 may further be configured to obtain the location of the final face region in the picture and the size of the final face region, and output the same.
  • the above face calibration method and system can be used in various face recognition applications.
  • the above-mentioned face calibration method and system can accurately calibrate the face region with respect to the traditional calibration algorithm, and the execution efficiency is higher, and can adapt to massive data processing.
  • the present invention also provides one or more computer storage media containing computer-executable instructions for performing a face calibration method; the specific steps performed by the computer-executable instructions in the computer storage media are as described in the method above and are not repeated here. The above embodiments are not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make a number of variations and improvements without departing from the spirit and scope of the invention, all of which fall within its protection. Therefore, the scope of the invention should be determined by the appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

A face calibration method, comprising: preprocessing a picture; extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points; extracting centroids from the connected regions of the corner points; matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region. The above method improves the accuracy and efficiency of face calibration. A face calibration system and a computer storage medium are also provided.

Description

Face calibration method and system, and computer storage medium
[Technical Field]
The present invention relates to face detection technology, and in particular to a face calibration method and system, and a computer storage medium.
[Background]
Face calibration is widely used in products based on face recognition, and its accuracy is essential to face recognition. With the continuous development of communication technology, more and more portable communication devices also provide face recognition functions, for example face recognition and smile capture in digital cameras, and face unlocking on mobile devices.
In current face recognition, calibrating a face requires a large number of training samples and a large amount of computation; the algorithms are inefficient and not highly accurate, and cannot cope with massive data processing.
[Summary of the Invention]
Accordingly, it is necessary to provide a face calibration method that can improve efficiency and accuracy.
A face calibration method includes the following steps: preprocessing a picture; extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points; extracting centroids from the connected regions of the corner points; matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
In one embodiment, the preprocessing includes one or more of level adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
In one embodiment, the step of extracting corner points from the preprocessed picture is: calculating the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and extracting pixels whose brightness difference is greater than or equal to a first threshold as corner points; the 3×3 template is the area formed by the current pixel at its center together with the pixels to the left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right of the current pixel.
In one embodiment, the step of filtering the corner points is: identifying skin color points in the preprocessed picture, and filtering out corner points with no skin color point within a preset surrounding range; taking the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculating the Cb, Cg, and Cr component values of each corner point, calculating the distance from these component values to the skin color center, and filtering out corner points whose distance is greater than a second threshold.
In one embodiment, the step of extracting centroids from the connected regions of the corner points is: screening out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range; extracting the center point of each screened connected region as a centroid; calculating the direction of each extracted centroid, and removing centroids whose direction has a verticality within a set verticality range.
In one embodiment, the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and the right eye vertex lie.
In one embodiment, the step of matching the centroids against the face template, calculating the matching probability, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face region is: traversing the centroids and, for each centroid point, performing: taking the current first centroid point as the left eye vertex of the face template, and searching for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold; searching for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold; calculating the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point; determining whether the matching probability is greater than or equal to the predetermined value, and if so, locating the area formed by the first, second, and third centroid points as a candidate face region.
In one embodiment, after the step of matching the centroids against the face template, calculating the matching probability, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face region, the method further includes: dividing the candidate face region into a set number of grid cells and calculating the skin color proportion in each cell; and screening out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region. In addition, it is necessary to provide a face calibration system that can improve efficiency and accuracy. A face calibration system includes: a preprocessing module for preprocessing a picture; a corner extraction module for extracting corner points from the preprocessed picture; a corner filtering and merging module for filtering and merging the corner points to obtain connected regions of corner points; a centroid extraction module for extracting centroids from the connected regions of the corner points; and a candidate face region locating module for matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
In one embodiment, the preprocessing includes one or more of level adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
In one embodiment, the corner extraction module is configured to calculate the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and to extract pixels whose brightness difference is greater than or equal to a first threshold as corner points; the 3×3 template is the area formed by the current pixel at its center together with the pixels to the left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right of the current pixel.
In one embodiment, the corner filtering and merging module is configured to identify skin color points in the preprocessed picture and to filter out corner points with no skin color point within a preset surrounding range;
the corner filtering and merging module is further configured to take the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of the corner points, calculate the distance from these component values to the skin color center, and filter out corner points whose distance is greater than a second threshold.
In one embodiment, the centroid extraction module includes: a connected region screening unit, configured to screen out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range; a centroid extraction unit, configured to extract the center point of each screened connected region as a centroid; and a centroid removal unit, configured to calculate the direction of each extracted centroid and remove centroids whose direction has a verticality within a set verticality range.
In one embodiment, the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and the right eye vertex lie.
In one embodiment, the candidate face region locating module includes:
a search unit, configured to, for each centroid point, take the current first centroid point as the left eye vertex of the face template and search for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold, and further configured to search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold;
a matching probability calculation unit, configured to calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point;
a region locating unit, configured to determine whether the matching probability is greater than or equal to a predetermined value and, if so, to locate the area formed by the first, second, and third centroid points as a candidate face region.
In one embodiment, the system further includes: a region screening module, configured to divide the candidate face region into a set number of grid cells, calculate the skin color proportion in each cell, and screen out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region.
In addition, it is necessary to provide a computer storage medium that can improve efficiency and accuracy.
One or more computer storage media containing computer-executable instructions for performing a face calibration method, the method including the following steps:
preprocessing a picture;
extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points;
extracting centroids from the connected regions of the corner points;
matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region. In the above face calibration method and system and computer storage media, the matching probability between centroids and a face template is calculated, and the region formed by centroids whose matching probability is greater than or equal to a predetermined value is located as a candidate face region. This probabilistic face-template model can be robustly stretched and rotated to match faces more accurately, and the algorithm is efficient, so the efficiency and accuracy of face calibration are improved.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of a face calibration method in one embodiment; FIG. 2 is a histogram of the R channel of a picture;
FIG. 3 is a schematic diagram of the 3×3 template in one embodiment;
FIG. 4 is a schematic flowchart of extracting centroids from the connected regions of corner points in one embodiment; FIG. 5 is a schematic diagram of a face template in one embodiment;
FIG. 6 is a schematic flowchart of locating a candidate face region in one embodiment;
FIG. 7 is a schematic diagram of matching centroids against a face template in one embodiment;
FIG. 8 is a schematic diagram of a skin color proportion model in one embodiment;
FIG. 9 is a structural block diagram of a face calibration system in one embodiment;
FIG. 10 is a structural block diagram of a centroid extraction module in one embodiment;
FIG. 11 is a structural block diagram of a candidate face region locating module in one embodiment;
FIG. 12 is a structural block diagram of a face calibration system in another embodiment.
[Detailed Description]
As shown in FIG. 1, in one embodiment, a face calibration method includes the following steps: Step S102: Preprocess the picture.
Specifically, in one embodiment, the preprocessing of the picture includes one or more of level adjustment, automatic white balance, scale normalization, and image mosaic. Preprocessing effectively reduces the subsequent amount of computation and thus improves calibration efficiency.
Step S104: Extract corner points from the preprocessed picture, and filter and merge the corner points to obtain connected regions of corner points.
A corner point is a point around which the image brightness changes sharply; the picture obtained after corner extraction can be regarded as a contour map. Not all corner points extracted from the preprocessed picture are the facial-feature corners required, so the corner points must be filtered to remove those unrelated to the facial features. The filtered corner points cluster locally, for example around the eyes and mouth, so locally clustered corner points can be merged to obtain connected regions of corner points.
Step S106: Extract the centroids of the connected regions of the corner points.
A centroid is the center point of a connected region of corner points. Centroids effectively represent the main features of a face, including the eyes, nose, and mouth; once extracted, they are used for subsequent face template matching. Step S108: Match the centroids against the face template, calculate the matching probability between the centroids and the face template, and locate the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
In this embodiment, this probabilistic face-template model can be robustly stretched and rotated to match faces more accurately, and the algorithm is efficient, so the efficiency and accuracy of face calibration are improved.
In a preferred embodiment, the preprocessing of the picture includes level adjustment, automatic white balance, scale normalization, and image mosaic.
Level adjustment of an image means adjusting the color levels of the picture. The color level is an index standard representing the brightness intensity of a picture; the color fullness and fineness of the picture are determined by the color levels. Adjusting the levels adjusts the intensity of the shadows, midtones, and highlights of the image, and thus enhances its visual effect to a certain extent.
In one embodiment, the levels of the R, G, and B channels of the picture can be adjusted separately. As shown in FIG. 2, which is the histogram of the R channel of a picture, the shadow and highlight regions containing little data can be removed, the left and right boundaries adjusted to the interval [left, right], and the values on R then remapped to the interval [0, 255].
The interval [left, right] is the level information retained after removing the shadow and highlight regions.
Specifically, the new R/G/B value can be calculated according to the following formulas:
Diff = right − left
newRGB = (oldRGB − left) × 255 / Diff
where newRGB is the new R/G/B value and oldRGB is the R/G/B value before the level adjustment.
In this embodiment, the contrast of the picture after level adjustment is improved and edges become clearer, which benefits subsequent skin color recognition, corner filtering, and the like.
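As a rough illustration of the remapping above, a minimal sketch follows; the percentile used to choose left and right is an assumption, since the text only says that the sparsely populated shadow and highlight areas are removed:

```python
import numpy as np

def adjust_levels(channel: np.ndarray, clip: float = 0.01) -> np.ndarray:
    """Remap one uint8 color channel to [0, 255] after discarding sparsely
    populated shadow/highlight bins: newRGB = (old - left) * 255 / Diff."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist) / channel.size
    left = int(np.searchsorted(cdf, clip))          # assumed: clip darkest 1%
    right = int(np.searchsorted(cdf, 1.0 - clip))   # assumed: clip brightest 1%
    diff = max(right - left, 1)
    out = (channel.astype(np.float32) - left) * 255.0 / diff
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applied to each of the R, G, and B channels in turn, this reproduces the per-channel adjustment described above.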
Automatic white balance addresses color shift. In actual photographing, limitations of the shooting environment or of the camera in a mobile terminal or similar device can make exposure inaccurate, causing significant color shift or a severe lack of certain colors. Because this would affect the subsequent extraction of skin color regions, automatic white balance is required.
In one embodiment, automatic white balance can be performed according to a formula in which R′, G′, and B′ are the three component values of the picture after white balance, R, G, and B are the three component values of the original image, and R̄, Ḡ, and B̄ are the means of the R, G, and B channels over the picture.
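The white-balance formula itself did not survive extraction in the source text; the symbol definitions above are consistent with a gray-world correction, so the following sketch is written under that assumption:

```python
import numpy as np

def auto_white_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world white balance (assumed form): scale each channel so its
    mean matches the overall mean K = (R_mean + G_mean + B_mean) / 3."""
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means R̄, Ḡ, B̄
    k = means.mean()
    balanced = img * (k / means)              # e.g. R' = R * K / R̄
    return np.clip(balanced, 0, 255).astype(np.uint8)
```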
Because pictures of different sizes need different parameters in the subsequent corner extraction, centroid extraction, corner merging, and filtering, the picture can be size-normalized, that is, scaled, so that these subsequent steps use uniform parameters.
In one embodiment, the picture may be scaled while keeping the original aspect ratio, or without keeping it. Preferably, the original aspect ratio is kept; for example, a picture higher than 400 px can be reduced, preserving aspect ratio, to a height of 400 px, while a picture lower than 400 px keeps its original size and is not enlarged.
Some pictures have rather wide edges, for example wider than one pixel, and extracting corner points directly at the pixel level could miss many of the required corners. In a preferred embodiment, the picture can be mosaicked after size normalization, converting it into a mosaic image; this allows corner points to be extracted more accurately, and extracting corners on the mosaic picture also greatly speeds up processing. For example, at the 400 px scale the mosaic size can be chosen as 2×2 px, and the new pixel value is the average of those four pixels.
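A minimal sketch of the 2×2 mosaic averaging described above; even image dimensions are assumed and enforced by cropping:

```python
import numpy as np

def mosaic_2x2(img: np.ndarray) -> np.ndarray:
    """Downsample by averaging each non-overlapping 2x2 block, so every
    new pixel is the mean of four original pixels."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    blocks = img.reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)
```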
In one embodiment, the specific process of extracting corner points from the preprocessed picture in step S104 is: calculate the brightness difference between the current pixel and its surrounding pixels according to the predefined 3×3 template, and extract pixels whose brightness difference is greater than or equal to the first threshold as corner points; the 3×3 template is the area formed by the current pixel at its center together with the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right.
As shown in FIG. 3, if the current pixel is C, the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right are A1, A, B, B1, A3, B2, B3, and A2 respectively, and the area formed by these 9 pixels is the defined 3×3 template. Specifically, in one embodiment, for each pixel of the preprocessed picture (the current pixel), the brightness difference between the current pixel and its surrounding pixels is calculated as follows.
Define:
rA1 = w1 × (fA − fC)² + w1 × (fA1 − fC)²
rB1 = w2 × (fB − fC)² + w2 × (fB1 − fC)²
R1 = min(rA1, rB1)
rA2 = w3 × (fA2 − fC)² + w3 × (fA3 − fC)²
rB2 = w4 × (fB2 − fC)² + w4 × (fB3 − fC)²
R2 = min(rA2, rB2)
where f denotes the luminance component of a pixel (0–255); for example, fA is the luminance of the pixel A to the right of point C. w1, w2, w3, and w4 are weights, which may be taken as w1 = w2 = 1 and w3 = w4 = 1. If R1 and R2 are both smaller than a given threshold T1 (e.g. T1 = 200), the point (the current pixel) is not a corner; otherwise, the following are calculated:
B1 = w5 × (fB − fA) × (fA − fC) + w5 × (fB1 − fA1) × (fA1 − fC)
B2 = w6 × (fB − fA1) × (fA1 − fC) + w6 × (fB1 − fA) × (fA − fC)
B3 = w7 × (fB2 − fA2) × (fA2 − fC) + w7 × (fB3 − fA3) × (fA3 − fC)
B4 = w8 × (fB2 − fA3) × (fA3 − fC) + w8 × (fB3 − fA2) × (fA2 − fC)
mB1 = min(B1, B2)
mB2 = min(B3, B4)
A1 = rB1 − rA1 − 2 × mB1
A2 = rB2 − rA2 − 2 × mB2
R1 = rA1 − mB1² / A1, when mB1 < 0 and A1 + mB1 > 0
R2 = rA2 − mB2² / A2, when mB2 < 0 and A2 + mB2 > 0
where w5, w6, w7, and w8 are weights, which may be taken as w5 = w6 = 1 and w7 = w8 = 1; the final R1 and R2 are the brightness difference between the current pixel and its surrounding pixels. If R1 and R2 are both smaller than a given threshold T2 (e.g. T2 = 700), the point (the current pixel) is not a corner; otherwise, the brightness around the point changes sharply in the image, and the point is a corner.
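For illustration, the corner test above can be transcribed directly; pixel names follow Fig. 3, and the example weights (w1–w8 = 1) and thresholds (T1 = 200, T2 = 700) come from the text:

```python
def is_corner(f, x, y, t1=200.0, t2=700.0):
    """f: 2D luminance array (0-255), indexed f[y][x]; (x, y) an interior
    pixel C. Returns True when the response marks C as a corner point."""
    C = float(f[y][x])
    A, A1 = float(f[y][x + 1]), float(f[y][x - 1])            # right, left
    B, B1 = float(f[y - 1][x]), float(f[y + 1][x])            # top, bottom
    A2, A3 = float(f[y + 1][x + 1]), float(f[y - 1][x - 1])   # bottom-right, top-left
    B2, B3 = float(f[y - 1][x + 1]), float(f[y + 1][x - 1])   # top-right, bottom-left

    rA1 = (A - C) ** 2 + (A1 - C) ** 2        # horizontal pair
    rB1 = (B - C) ** 2 + (B1 - C) ** 2        # vertical pair
    rA2 = (A2 - C) ** 2 + (A3 - C) ** 2       # one diagonal
    rB2 = (B2 - C) ** 2 + (B3 - C) ** 2       # other diagonal
    R1, R2 = min(rA1, rB1), min(rA2, rB2)
    if R1 < t1 and R2 < t1:
        return False

    b1 = (B - A) * (A - C) + (B1 - A1) * (A1 - C)
    b2 = (B - A1) * (A1 - C) + (B1 - A) * (A - C)
    b3 = (B2 - A2) * (A2 - C) + (B3 - A3) * (A3 - C)
    b4 = (B2 - A3) * (A3 - C) + (B3 - A2) * (A2 - C)
    mB1, mB2 = min(b1, b2), min(b3, b4)
    a1 = rB1 - rA1 - 2 * mB1
    a2 = rB2 - rA2 - 2 * mB2
    if mB1 < 0 and a1 + mB1 > 0:
        R1 = rA1 - mB1 ** 2 / a1
    if mB2 < 0 and a2 + mB2 > 0:
        R2 = rA2 - mB2 ** 2 / a2
    return not (R1 < t2 and R2 < t2)
```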
In one embodiment, the step of filtering the corner points in step S104 is: identify the skin color points in the preprocessed picture and filter out corner points with no skin color point within a preset surrounding range; take the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of each corner point, calculate the distance from these component values to the skin color center, and filter out corner points whose distance is greater than the second threshold. In this embodiment, corner points surrounded by skin color are retained and corner points farther from skin color are removed.
Many color spaces can be used for skin detection, such as RGB, HSV, YCbCr, YUV, and YCgCr. In a preferred embodiment, skin color can be extracted in the YCbCr and YCgCr spaces simultaneously, which gives better extraction accuracy. In the YCgCr color space, the skin color range is Cg ∈ [85, 135] and Cr ∈ [−Cg + 260, −Cg + 280]; in the YCbCr color space, the skin color range is Cb ∈ [77, 127] and Cr ∈ [133, 173]; in both color spaces, Y ∈ [16, 235].
Specifically, for each pixel of the preprocessed picture, the Y, Cb, Cr, and Cg components are calculated from the pixel's RGB values (with R, G, and B normalized to [0, 1], using the usual BT.601-style offsets):
Y = 16 + 65.481 × R + 128.553 × G + 24.966 × B
Cb = 128 − 37.797 × R − 74.203 × G + 112 × B
Cg = 128 − 81.085 × R + 112 × G − 30.915 × B
Cr = 128 + 112 × R − 93.786 × G − 18.214 × B
If the calculated Y, Cb, Cr, and Cg components satisfy both skin color ranges above, the pixel is a skin color pixel (a skin color point). If no skin color point lies within the preset range around an extracted corner point, that corner point is filtered out.
In this embodiment, the center (Pcr, Pcg, Pcb) of the intersection of the two color spaces is taken as the skin color center; for each pixel of the preprocessed picture, after its Cb, Cr, and Cg components are calculated, the Euclidean distance from the pixel to the skin color center is calculated. If this distance is greater than the second threshold, the pixel cannot be skin color, and the corner point is filtered out.
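A sketch combining the skin color test with the distance-based corner filter; the conversion follows the matrix reconstructed above, while the skin color center and second-threshold values shown are placeholders, since the text does not fix them:

```python
import math

def ycbcrcg(r, g, b):
    """RGB in [0, 255] -> (Y, Cb, Cg, Cr), per the reconstructed matrix."""
    y  = 16  + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255.0
    cb = 128 + (-37.797 * r -  74.203 * g + 112.0   * b) / 255.0
    cg = 128 + (-81.085 * r + 112.0   * g -  30.915 * b) / 255.0
    cr = 128 + (112.0   * r -  93.786 * g -  18.214 * b) / 255.0
    return y, cb, cg, cr

def is_skin(r, g, b):
    """Pixel is skin only when it falls inside both skin-color ranges."""
    y, cb, cg, cr = ycbcrcg(r, g, b)
    in_ycgcr = 85 <= cg <= 135 and (-cg + 260) <= cr <= (-cg + 280)
    in_ycbcr = 77 <= cb <= 127 and 133 <= cr <= 173
    return 16 <= y <= 235 and in_ycgcr and in_ycbcr

def far_from_skin_center(r, g, b, center=(150.0, 110.0, 105.0), threshold=40.0):
    """Corner filter: Euclidean distance of (Cr, Cg, Cb) from the skin-color
    center (Pcr, Pcg, Pcb); both values here are illustrative assumptions."""
    _, cb, cg, cr = ycbcrcg(r, g, b)
    return math.dist((cr, cg, cb), center) > threshold
```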
In one embodiment, a corner point binary map is obtained after filtering the corner points. Because the number of extracted corner points is still relatively large at this stage, matching the face template directly on the binary map would require a large amount of computation, and many corner points cluster locally; adjacent corner points can therefore be merged to reduce subsequent computation.
Specifically, a distance function can be predefined, and adjacent corner points are merged when the distance between them satisfies a preset condition. The merging can use conventional pixel labeling, run-length connectivity, or region growing algorithms, which are not described here. After the corner points are merged, connected regions of corner points are obtained.
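The text defers the merging step to standard algorithms; as one possible shape of a distance-based merge, a small union-find sketch (the Chebyshev metric and the 2 px threshold are assumptions):

```python
def merge_corners(points, max_dist=2):
    """Group corner points whose pairwise distance satisfies the preset
    condition; returns one group (list of points) per connected region."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, (x1, y1) in enumerate(points):
        for j in range(i + 1, len(points)):
            x2, y2 = points[j]
            if max(abs(x1 - x2), abs(y1 - y2)) <= max_dist:  # Chebyshev distance
                parent[find(i)] = find(j)

    groups = {}
    for i, p in enumerate(points):
        groups.setdefault(find(i), []).append(p)
    return list(groups.values())
```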
As shown in FIG. 4, in one embodiment, the specific process of step S106 is:
Step S116: Screen out connected regions whose area is greater than or equal to the third threshold and/or whose width-to-height ratio is within a preset range.
Some of the obtained connected regions of corner points may not conform to the characteristics of a face, so the connected regions must be filtered. Specifically, in this embodiment, connected regions whose area is smaller than the third threshold and/or whose aspect ratio is outside the preset range can be removed. For example, with the third threshold set to 450, connected regions with an area of 450 or more are retained; the preset range may require the aspect ratio to be greater than 0.5 and less than 5. The third threshold can be set according to the scale of the face template, to facilitate the subsequent template matching.
Step S126: Extract the center point of each screened connected region as a centroid.
Step S136: Calculate the direction of each extracted centroid, and remove centroids whose direction has a verticality within the set verticality range.
Specifically, a centroid is a vector; its direction depends on its position in the image and represents the edge orientation of the region it belongs to. Further, the traditional Sobel operator (an edge extraction operator) can be used to calculate the direction of a centroid, and centroids whose direction is close to vertical are removed, since they are centroids extracted from vertical edges.
After step S136, the resulting centroids can be used for face template matching. Specifically, each centroid can be expressed as (P, R, D), where P is the center point of the merged connected region, R is the radius of the connected region, and D is the density of the connected region.
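A sketch of the verticality filter in step S136, assuming SciPy is available for the convolution; the 15° tolerance stands in for the unspecified verticality range:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def region_is_vertical_edge(gray, mask, tol_deg=15.0):
    """Estimate the dominant edge orientation of one connected region with
    Sobel gradients; gray is a 2D float image, mask a boolean region mask.
    A vertical edge has mostly horizontal gradients (|gx| >> |gy|)."""
    gx = convolve(gray, SOBEL_X)[mask]
    gy = convolve(gray, SOBEL_Y)[mask]
    angle = np.degrees(np.arctan2(np.abs(gy).mean(), np.abs(gx).mean()))
    return angle < tol_deg   # edge close to vertical -> drop this centroid
```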
In one embodiment, the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and right eye vertex lie.
As shown in FIG. 5, the face template is a rectangular template with at least three points, each represented by (P, w, h), where P is the two-dimensional coordinate of the point, w is the maximum lateral range allowed to the left and right of the point, and h is the maximum longitudinal range allowed above and below the point. In FIG. 5, the left eye vertex is p0, the right eye vertex is p1, and p2 is the third point; the dotted points indicate the positions where p2 may lie once p0 and p1 are determined.
As shown in FIG. 6, in one embodiment, the specific process of step S108 is: traverse the centroid points and, for each centroid point, perform:
Step S118: Take the current first centroid point as the left eye vertex of the face template, and search for a second centroid point whose distance from the right eye vertex is less than or equal to the fourth threshold.
Specifically, if no second centroid point is found, the face template does not match; if one is found, step S128 is performed. As shown in FIG. 7, width and height are the width and height of the face template, and the second centroid point found is c1.
Step S128: Search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and right eye vertex lie, is less than or equal to the fourth threshold.
Specifically, if no third centroid point is found, the face template does not match; if one is found, step S138 is performed. As shown in FIG. 7, the third centroid point found is c2.
Step S138: Calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point.
Specifically, in one embodiment, after the second centroid point is found, a first probability value can be calculated from its distance to the right eye vertex. Referring to FIG. 7, the first probability value can be calculated as:
s1 = 1 − d1 / threshold
where s1 is the first probability value, d1 is the distance between the second centroid point c1 and the right eye vertex, and threshold is the fourth threshold.
After the third centroid point is found, a second probability value can be calculated from its vertical distance to the other side. Referring to FIG. 7, the second probability value can be calculated as:
s2 = 1 − d2 / threshold
where s2 is the second probability value, d2 is the vertical distance between the third centroid point c2 and the other side line1, and threshold is the fourth threshold.
After the third centroid point c2 is found, the distances between c2 and all third points of the face template can be calculated to obtain the shortest distance; as shown in FIG. 7, the shortest distance is the distance d3 between c2 and the third point p4. If d3 is greater than width/5, the face template does not match; otherwise, a third probability value is calculated from this shortest distance:
s3 = 1 − d3 / (width / 5)
where s3 is the third probability value, d3 is the shortest distance, and width is the width of the face template.
Further, the matching probability is calculated from the three probability values above. In one embodiment, the matching probability can be calculated as:
p = 3 × s1 + s2 + s3
Step S148: Determine whether the matching probability is greater than or equal to the predetermined value; if so, proceed to step S158, otherwise end.
For example, for a 250 px × 250 px face template, the fourth threshold can be set to 50 px and the predetermined value to 0.8.
Step S158: Locate the area formed by the first centroid point, the second centroid point, and the third centroid point as a candidate face region.
As shown in FIG. 7, the area formed by the first centroid point c0, the second centroid point c1, and the third centroid point c2 is located as a candidate face region.
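Steps S118–S158 can be condensed into one probability computation per starting centroid; this sketch assumes non-empty centroid lists, and third_points is a hypothetical list of (dx, dy) template offsets for the third points:

```python
import math

def match_probability(c0, centroids, width, height, third_points,
                      threshold=50.0):
    """Treat c0 as the left eye vertex p0; find c1 near the right eye vertex
    and c2 near the opposite side, then combine p = 3*s1 + s2 + s3."""
    p1 = (c0[0] + width, c0[1])     # right eye vertex
    line_y = c0[1] + height        # the parallel other side (line1)

    # S118: second centroid closest to the right eye vertex
    d1, c1 = min(((math.dist(c, p1), c) for c in centroids), key=lambda t: t[0])
    if d1 > threshold:
        return 0.0                 # template does not match
    s1 = 1 - d1 / threshold

    # S128: third centroid with smallest vertical distance to the other side
    d2, c2 = min(((abs(c[1] - line_y), c) for c in centroids), key=lambda t: t[0])
    if d2 > threshold:
        return 0.0
    s2 = 1 - d2 / threshold

    # S138: shortest distance from c2 to any third point of the template
    d3 = min(math.dist(c2, (c0[0] + dx, c0[1] + dy)) for dx, dy in third_points)
    if d3 > width / 5:
        return 0.0
    s3 = 1 - d3 / (width / 5)

    return 3 * s1 + s2 + s3        # compare against the predetermined value
```

With a 250 px template, a candidate is kept when this value reaches the predetermined 0.8.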
It should be noted that centroid points can be searched in a variety of ways. A full search can be performed, trying every centroid as the left eye vertex of the face template. To improve efficiency, a partial search can be performed instead, ignoring unqualified centroids during the search and thus speeding up the whole process. For example, a centroid surrounded by a large dark area clearly cannot be the starting left-eye position; the neighborhood of facial-feature centroids should not contain very large vertically or horizontally arranged centroids; centroids in regions close to the face template border can be ignored; and elliptical or arc-shaped centroid arrangements close to the template size can be ignored.
In one embodiment, after step S108 the candidate face regions can be further screened. The specific process is: divide the candidate face region into a set number of grid cells and calculate the skin color proportion in each cell; screen out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region.
Some candidate face regions may not be real face regions, and screening them further improves the accuracy of face localization. In one embodiment, as shown in FIG. 8, the face region can be divided into 9 cells and the skin color proportion in each cell calculated separately. The skin color proportion is the ratio of skin color pixels in a cell to all pixels of that cell; skin color pixels can be identified with the skin color recognition method described above, which is not repeated here.
Specifically, as shown in FIG. 8, with the skin color proportions of the cells denoted p1–p9 and thresholds T1 and T2 set, a candidate face region is taken as the final face region when the proportions satisfy:
p1, p3, p4, p7, p8, p9 >= T1
|p3 − p1| < T2
|p6 − p4| < T2
|p9 − p7| < T2
where T1 can be set to 0.5 and T2 can be set to 0.5.
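A sketch of the nine-cell verification with T1 = T2 = 0.5; is_skin is the earlier skin test, and equal cell sizes are assumed:

```python
def verify_face_region(img, is_skin, t1=0.5, t2=0.5):
    """Split a candidate face region into 9 cells, compute skin proportions
    p1..p9, and apply the distribution conditions from the text."""
    h, w, _ = img.shape
    p = []
    for row in range(3):
        for col in range(3):
            cell = img[row * h // 3:(row + 1) * h // 3,
                       col * w // 3:(col + 1) * w // 3]
            pixels = cell.reshape(-1, 3)
            p.append(sum(is_skin(r, g, b) for r, g, b in pixels) / len(pixels))
    p1, p2, p3, p4, p5, p6, p7, p8, p9 = p
    if min(p1, p3, p4, p7, p8, p9) < t1:
        return False
    return abs(p3 - p1) < t2 and abs(p6 - p4) < t2 and abs(p9 - p7) < t2
```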
After the final face region is obtained, its position in the picture and the size of the final face region can further be obtained and output. As shown in FIG. 9, in one embodiment, a face calibration system includes a preprocessing module 10, a corner extraction module 20, a corner filtering and merging module 30, a centroid extraction module 40, and a candidate face region locating module 50, wherein:
The preprocessing module 10 is used to preprocess the picture.
Specifically, in one embodiment, the preprocessing performed by the preprocessing module 10 includes one or more of level adjustment, automatic white balance, scale normalization, and image mosaic. Preprocessing effectively reduces the subsequent amount of computation and thus improves calibration efficiency.
The corner extraction module 20 is used to extract corner points from the preprocessed picture.
The corner filtering and merging module 30 is used to filter and merge the corner points to obtain connected regions of corner points. The centroid extraction module 40 is used to extract the centroids of the connected regions of the corner points.
The candidate face region locating module 50 is used to match the centroids against the face template, calculate the matching probability between the centroids and the face template, and locate the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
In this embodiment, this probabilistic face-template model can be robustly stretched and rotated to match faces more accurately, and the algorithm is efficient, so the efficiency and accuracy of face calibration are improved.
In one embodiment, the corner extraction module 20 is used to calculate the brightness difference between the current pixel and its surrounding pixels according to the predefined 3×3 template, and to extract pixels whose brightness difference is greater than or equal to the first threshold as corner points; the 3×3 template is the area formed by the current pixel at its center together with the pixels to its left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right.
In one embodiment, the corner filtering and merging module 30 is used to identify skin color points in the preprocessed picture and to filter out corner points with no skin color point within a preset surrounding range; it is also used to take the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of each corner point, calculate the distance from these component values to the skin color center, and filter out corner points whose distance is greater than the second threshold. In this embodiment, corner points surrounded by skin color are retained and corner points farther from skin color are removed.
In one embodiment, a corner point binary map is obtained after filtering; because the number of extracted corner points is still relatively large, matching the face template directly on the binary map would require a large amount of computation, and many corner points cluster locally, so adjacent corner points can be merged to reduce subsequent computation.
Specifically, the corner filtering and merging module 30 is used to predefine a distance function and to merge adjacent corner points when the distance between them satisfies a preset condition. The merging can use conventional pixel labeling, run-length connectivity, or region growing algorithms, which are not described here. After merging, connected regions of corner points are obtained.
In one embodiment, as shown in FIG. 10, the centroid extraction module 40 includes a connected region screening unit 410, a centroid extraction unit 420, and a centroid removal unit 430, wherein:
The connected region screening unit 410 is used to screen out connected regions whose area is greater than or equal to the third threshold and/or whose width-to-height ratio is within a preset range.
The centroid extraction unit 420 is used to extract the center point of each screened connected region as a centroid.
The centroid removal unit 430 is used to calculate the direction of each extracted centroid and to remove centroids whose direction has a verticality within the set verticality range.
Specifically, a centroid is a vector; its direction depends on its position in the image and represents the edge orientation of the region it belongs to. Further, the centroid removal unit 430 can use the traditional Sobel operator (an edge extraction operator) to calculate centroid directions and remove centroids whose direction is close to vertical, since they are centroids extracted from vertical edges.
In one embodiment, the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and right eye vertex lie.
As shown in FIG. 11, the candidate face region locating module 50 includes a search unit 510, a matching probability calculation unit 520, and a region locating unit 530, wherein:
The search unit 510 is used to take the current first centroid point as the left eye vertex of the face template and search for a second centroid point whose distance from the right eye vertex is less than or equal to the fourth threshold, and is further used to search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and right eye vertex lie, is less than or equal to the fourth threshold. The matching probability calculation unit 520 is used to calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point. The region locating unit 530 is used to determine whether the matching probability is greater than or equal to the predetermined value and, if so, to locate the area formed by the first, second, and third centroid points as a candidate face region.
In one embodiment, when searching for centroid points, the search unit can search in a variety of ways. A full search can be performed, trying every centroid as the left eye vertex of the face template. To improve efficiency, a partial search can be performed instead, ignoring unqualified centroids during the search and thus speeding up the whole process. For example, a centroid surrounded by a large dark area clearly cannot be the starting left-eye position; the neighborhood of facial-feature centroids should not contain very large vertically or horizontally arranged centroids; centroids in regions close to the face template border can be ignored; and elliptical or arc-shaped centroid arrangements close to the template size can be ignored.
As shown in FIG. 12, in another embodiment, the face calibration system further includes a region screening module 60, wherein:
The region screening module 60 is used to divide the candidate face region into a set number of grid cells and calculate the skin color proportion in each cell, and to screen out candidate face regions whose skin color proportions satisfy the preset skin color proportion distribution as the final face region.
Some candidate face regions may not be real face regions, and screening them further improves the accuracy of face localization. In one embodiment, as shown in FIG. 8, the face region can be divided into 9 cells and the skin color proportion in each cell calculated separately. The skin color proportion is the ratio of skin color pixels in a cell to all pixels of that cell; skin color pixels can be identified with the skin color recognition method described above, which is not repeated here.
After the final face region is obtained, the region screening module 60 can further be used to obtain its position in the picture and its size, and to output them.
It should be noted that the above face calibration method and system can be used in various face recognition applications. Compared with traditional calibration algorithms, they calibrate the face region more accurately and execute more efficiently, and can adapt to massive data processing. In addition, the present invention also provides one or more computer storage media containing computer-executable instructions for performing a face calibration method; the specific steps performed by the computer-executable instructions are as described in the method above and are not repeated here. The above embodiments are not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims

CLAIMS
1. A face calibration method, comprising the following steps:
preprocessing a picture;
extracting corner points from the preprocessed picture, and filtering and merging the corner points to obtain connected regions of corner points;
extracting centroids from the connected regions of the corner points;
matching the centroids against a face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
2. The face calibration method according to claim 1, wherein the preprocessing comprises one or more of level adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
3. The face calibration method according to claim 1, wherein the step of extracting corner points from the preprocessed picture is:
calculating the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and extracting pixels whose brightness difference is greater than or equal to a first threshold as corner points;
the 3×3 template being the area formed by the current pixel at its center together with the pixels to the left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right of the current pixel.
4. The face calibration method according to claim 1, wherein the step of filtering the corner points is:
identifying skin color points in the preprocessed picture, and filtering out corner points with no skin color point within a preset surrounding range;
taking the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculating the Cb, Cg, and Cr component values of the corner points, calculating the distance from these component values to the skin color center, and filtering out corner points whose distance is greater than a second threshold.
5. The face calibration method according to claim 1, wherein the step of extracting centroids from the connected regions of the corner points is:
screening out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range;
extracting the center point of each screened connected region as a centroid;
calculating the direction of each extracted centroid, and removing centroids whose direction has a verticality within a set verticality range.
6. The face calibration method according to claim 1, wherein the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and the right eye vertex lie.
7. The face calibration method according to claim 6, wherein the step of matching the centroids against the face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face region is:
traversing the centroids and, for each centroid point, performing:
taking the current first centroid point as the left eye vertex of the face template, and searching for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold;
searching for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold;
calculating the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point;
determining whether the matching probability is greater than or equal to the predetermined value, and if so, locating the area formed by the first centroid point, the second centroid point, and the third centroid point as a candidate face region.
8. The face calibration method according to claim 1, wherein after the step of matching the centroids against the face template, calculating the matching probability between the centroids and the face template, and locating the region formed by centroids whose matching probability is greater than or equal to the predetermined value as the candidate face region, the method further comprises:
dividing the candidate face region into a set number of grid cells, and calculating the skin color proportion in each cell;
screening out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region.
9. A face calibration system, comprising:
a preprocessing module, configured to preprocess a picture;
a corner extraction module, configured to extract corner points from the preprocessed picture;
a corner filtering and merging module, configured to filter and merge the corner points to obtain connected regions of corner points;
a centroid extraction module, configured to extract centroids from the connected regions of the corner points;
a candidate face region locating module, configured to match the centroids against a face template, calculate the matching probability between the centroids and the face template, and locate the region formed by centroids whose matching probability is greater than or equal to a predetermined value as a candidate face region.
10. The face calibration system according to claim 9, wherein the preprocessing is one or more of level adjustment, automatic white balance, scale normalization, and image mosaic of the picture.
11. The face calibration system according to claim 9, wherein the corner extraction module is configured to calculate the brightness difference between the current pixel and its surrounding pixels according to a predefined 3×3 template, and to extract pixels whose brightness difference is greater than or equal to a first threshold as corner points;
the 3×3 template being the area formed by the current pixel at its center together with the pixels to the left, right, top, bottom, top-left, top-right, bottom-left, and bottom-right of the current pixel.
12. The face calibration system according to claim 9, wherein the corner filtering and merging module is configured to identify skin color points in the preprocessed picture and to filter out corner points with no skin color point within a preset surrounding range;
the corner filtering and merging module is further configured to take the center of the intersection of the YCgCr and YCbCr color spaces as the skin color center, calculate the Cb, Cg, and Cr component values of the corner points, calculate the distance from these component values to the skin color center, and filter out corner points whose distance is greater than a second threshold.
13. The face calibration system according to claim 9, wherein the centroid extraction module comprises:
a connected region screening unit, configured to screen out connected regions whose area is greater than or equal to a third threshold and/or whose width-to-height ratio is within a preset range;
a centroid extraction unit, configured to extract the center point of each screened connected region as a centroid; a centroid removal unit, configured to calculate the direction of each extracted centroid and to remove centroids whose direction has a verticality within a set verticality range.
14. The face calibration system according to claim 9, wherein the face template is a rectangular template comprising a left eye vertex, a right eye vertex, and at least one third point on the other side parallel to the side on which the left eye vertex and the right eye vertex lie.
15. The face calibration system according to claim 14, wherein the candidate face region locating module comprises:
a search unit, configured to, for each centroid point, take the current first centroid point as the left eye vertex of the face template and search for a second centroid point whose distance from the right eye vertex is less than or equal to a fourth threshold, and further configured to search for a third centroid point whose vertical distance from the other side, parallel to the side on which the left eye vertex and the right eye vertex lie, is less than or equal to the fourth threshold;
a matching probability calculation unit, configured to calculate the matching probability according to the distance between the second centroid point and the right eye vertex, the vertical distance between the third centroid point and the other side, and the shortest distance between the third centroid point and the third point;
a region locating unit, configured to determine whether the matching probability is greater than or equal to a predetermined value and, if so, to locate the area formed by the first centroid point, the second centroid point, and the third centroid point as a candidate face region.
16. The face calibration system according to claim 9, wherein the system further comprises:
a region screening module, configured to divide the candidate face region into a set number of grid cells, calculate the skin color proportion in each cell, and screen out candidate face regions whose skin color proportions satisfy a preset skin color proportion distribution as the final face region.
17、 一个或多个包含计算机可执行指令的计算机存储介质, 所述计算机 可执行指令用于执行一种人脸标定方法, 其特征在于, 所述方法包括以下步 骤: 对图片进行预处理;
提取预处理后的图片中的角点, 对所述角点进行滤波和合并, 得到角点 的连通区域;
提取所述角点的连通区域中的质心;
将所述质心与人脸模板进行匹配, 计算质心与人脸模板的匹配概率, 将 所述匹配概率大于等于预定值的质心所构成的区域定位为候选人脸区域。
18、 根据权利要求 17所述的计算机存储介质, 其特征在于, 所述预处理 包括图片的色阶调整、 自动白平衡、 尺度归一化和图像马赛克的一种以上。
19、 根据权利要求 17所述的计算机存储介质, 其特征在于, 所述提取预 处理后的图片中的角点的步骤为:
根据预先定义的 3X3模板计算当前像素点与周围像素点的亮度差异度, 提取所述亮度差异度大于等于第一阈值的像素点为角点;
所述 3X3模板为以当前像素点为中心和所述当前像素点的左、 右、 上、 下、 左上、 右上、 左下和右下的像素点所构成的区域。
20、 根据权利要求 17所述的计算机存储介质, 其特征在于, 所述对角点 进行滤波的步骤为:
识别所述预处理后的图片中的肤色点, 滤除四周预设范围内不含有肤色 点的角点;
提取 YcgCr和 YcbCr两个颜色空间的交叉部分的中心为肤色中心, 计算 所述角点的 Cb、 Cg、 Cr分量值, 并计算所述角点的 Cb、 Cg、 Cr分量值与所 述肤色中心的距离, 滤除所述距离大于第二阈值的角点。
21、 根据权利要求 17所述的计算机存储介质, 其特征在于, 所述提取角 点的连通区域中的质心的步骤为:
筛选出区域面积大于等于第三阈值和 /或宽高比例在预设范围内的连通区 域;
提取所述筛选出的连通区域中的中心点为质心;
计算所述提取出的质心的方向, 去除所述方向的垂直度在设定垂直度范 围内的质心。
22、 根据权利要求 17所述的计算机存储介质, 其特征在于, 所述人脸模 板为矩形模板, 包含左眼顶点、 右眼顶点和至少一个位于与左眼顶点和右眼 顶点所在边平行的另一边上的第三点。
23、 根据权利要求 22所述的计算机存储介质, 其特征在于, 所述将所述 质心与人脸模板进行匹配, 计算质心与人脸模板的匹配概率, 将所述匹配概 率大于等于预定值的质心所构成的区域定位为候选人脸区域的步骤为:
遍历质心, 对每个质心点, 执行:
以当前的第一质心点为人脸模板的顶点, 搜索与所述右眼顶点的距离小 于等于第四阈值的第二质心点;
搜索与所述与左眼顶点和右眼顶点所在边平行的另一边的垂直距离小于 等于第四阈值的第三质心点;
根据所述第二质心点与所述右眼顶点的距离、 所述第三质心点与所述另 一边的垂直距离、 所述第三质心点与所述第三点的最短距离计算所述匹配概 判断所述匹配概率是否大于等于预定值, 若是, 则将所述第一质心点、 第二质心点和第三质心点所构成的区域定位为候选人脸区域。
24、 根据权利要求 17所述的计算机存储介质, 其特征在于, 在所述将所 述质心与人脸模板进行匹配, 计算质心与人脸模板的匹配概率, 将所述匹配 概率大于等于预定值的质心所构成的区域定位为候选人脸区域的步骤之后, 还包括:
将候选人脸区域划分为设定数量的格子, 计算每一格中的肤色比例; 筛选出所述肤色比例满足预设肤色比例分布的候选人脸区域为最终人脸 区域。
PCT/CN2013/072518 2012-03-26 2013-03-13 Face calibration method and system, and computer storage medium WO2013143390A1 (zh)

Priority Applications (9)

Application Number Priority Date Filing Date Title
CA2867365A CA2867365C (en) 2012-03-26 2013-03-13 Method, system and computer storage medium for face detection
SG11201405684WA SG11201405684WA (en) 2012-03-26 2013-03-13 Face calibration method and system, and computer storage medium
AP2014007969A AP2014007969A0 (en) 2012-03-26 2013-03-13 Face calibration method and system, and computer storage medium
EP13770054.8A EP2833288B1 (en) 2012-03-26 2013-03-13 Face calibration method and system, and computer storage medium
RU2014142591/08A RU2601185C2 (ru) 2012-03-26 2013-03-13 Способ, система и компьютерный носитель данных для детектирования лица
KR1020147029988A KR101683704B1 (ko) 2012-03-26 2013-03-13 얼굴 보정 방법, 시스템 및 컴퓨터 저장 매체
PH12014501995A PH12014501995A1 (en) 2012-03-26 2014-09-05 Method, system and computer storage medium for face detection
ZA2014/06837A ZA201406837B (en) 2012-03-26 2014-09-18 Face calibration method and system, and computer storage medium
US14/497,191 US9530045B2 (en) 2012-03-26 2014-09-25 Method, system and non-transitory computer storage medium for face detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210082964.6A CN102663354B (zh) 2012-03-26 2012-03-26 Face calibration method and system
CN201210082964.6 2012-03-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/497,191 Continuation US9530045B2 (en) 2012-03-26 2014-09-25 Method, system and non-transitory computer storage medium for face detection

Publications (1)

Publication Number Publication Date
WO2013143390A1 true WO2013143390A1 (zh) 2013-10-03

Family

ID=46772838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/072518 WO2013143390A1 (zh) 2012-03-26 2013-03-13 Face calibration method and system, and computer storage medium

Country Status (13)

Country Link
US (1) US9530045B2 (zh)
EP (1) EP2833288B1 (zh)
KR (1) KR101683704B1 (zh)
CN (1) CN102663354B (zh)
AP (1) AP2014007969A0 (zh)
CA (1) CA2867365C (zh)
CL (1) CL2014002526A1 (zh)
MY (1) MY167554A (zh)
PH (1) PH12014501995A1 (zh)
RU (1) RU2601185C2 (zh)
SG (1) SG11201405684WA (zh)
WO (1) WO2013143390A1 (zh)
ZA (1) ZA201406837B (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724406A (zh) * 2020-07-14 2020-09-29 苏州精濑光电有限公司 Region connecting and merging method, apparatus, device and medium

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663354B (zh) * 2012-03-26 2014-02-19 腾讯科技(深圳)有限公司 Face calibration method and system
JP5878586B2 (ja) * 2013-05-09 2016-03-08 華碩電腦股▲ふん▼有限公司ASUSTeK COMPUTER INC. Image color adjustment method and electronic device thereof
CN104156717A (zh) * 2014-08-31 2014-11-19 王好贤 Image-processing-based method for recognizing a driver's illegal phone use while driving
CN104284017A (zh) * 2014-09-04 2015-01-14 广东欧珀移动通信有限公司 Information prompting method and device
CN105303551A (zh) * 2015-08-07 2016-02-03 深圳市瀚海基因生物科技有限公司 Single-molecule localization method
CN110889825A (zh) * 2015-08-07 2020-03-17 深圳市真迈生物科技有限公司 Single-molecule localization device
CN105488475B (zh) * 2015-11-30 2019-10-15 西安闻泰电子科技有限公司 Face detection method in a mobile phone
US9934397B2 (en) 2015-12-15 2018-04-03 International Business Machines Corporation Controlling privacy in a face recognition application
CN105844235B (zh) * 2016-03-22 2018-12-14 南京工程学院 Visual-saliency-based face detection method for complex environments
CN107452002A (zh) * 2016-05-31 2017-12-08 百度在线网络技术(北京)有限公司 Image segmentation method and device
US20210090545A1 (en) * 2017-04-12 2021-03-25 Hewlett-Packard Development Company, L.P. Audio setting modification based on presence detection
CN107122751B (zh) * 2017-05-03 2020-12-29 电子科技大学 Face tracking and face image capture method based on face alignment
CN107239764A (zh) * 2017-06-07 2017-10-10 成都尽知致远科技有限公司 Face recognition method with dynamic noise reduction
KR102397396B1 (ko) 2017-09-13 2022-05-12 삼성전자주식회사 Image processing method and apparatus for automatic white balance
CN108399630B (zh) * 2018-01-22 2022-07-08 北京理工雷科电子信息技术有限公司 Fast ranging method for targets in a region of interest in complex scenes
CN110415168B (zh) * 2018-04-27 2022-12-02 武汉斗鱼网络科技有限公司 Face local scaling method, storage medium, electronic device and system
US10762336B2 (en) * 2018-05-01 2020-09-01 Qualcomm Incorporated Face recognition in low light conditions for unlocking an electronic device
CN109657544B (zh) * 2018-11-10 2023-04-18 江苏网进科技股份有限公司 Face detection method and device
US10885606B2 (en) 2019-04-08 2021-01-05 Honeywell International Inc. System and method for anonymizing content to protect privacy
CN112052706B (zh) * 2019-06-06 2022-07-29 鸿富锦精密工业(武汉)有限公司 Electronic device and face recognition method
AU2020329148A1 (en) * 2019-08-09 2022-03-17 Clearview Ai, Inc. Methods for providing information about a person based on facial recognition
US11062579B2 (en) 2019-09-09 2021-07-13 Honeywell International Inc. Video monitoring system with privacy features
CN111814702A (zh) * 2020-07-13 2020-10-23 安徽兰臣信息科技有限公司 Child face recognition method based on a feature-space mapping between adult faces and childhood photos
CN112418184A (zh) * 2020-12-14 2021-02-26 杭州魔点科技有限公司 Nose-feature-based face detection method and apparatus, electronic device and medium
CN113747640B (zh) * 2021-09-03 2024-02-09 深圳时空数字科技有限公司 Intelligent central control method and system for digital exhibition hall lighting
CN114022934B (zh) * 2021-11-04 2023-06-27 清华大学 Real-time portrait clustering and archiving method, system and medium based on the majority principle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561710A (zh) * 2009-05-19 2009-10-21 重庆大学 Human-computer interaction method based on face pose estimation
CN102663354A (zh) * 2012-03-26 2012-09-12 腾讯科技(深圳)有限公司 Face calibration method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100361497B1 (ko) * 1999-01-08 2002-11-18 엘지전자 주식회사 Face region extraction method
US6526161B1 (en) * 1999-08-30 2003-02-25 Koninklijke Philips Electronics N.V. System and method for biometrics-based facial feature extraction
KR100682889B1 (ko) 2003-08-29 2007-02-15 삼성전자주식회사 Method and apparatus for image-based photorealistic 3D face modeling
JP4085959B2 (ja) * 2003-11-14 2008-05-14 コニカミノルタホールディングス株式会社 Object detection device, object detection method, and recording medium
EP1566788A3 (en) * 2004-01-23 2017-11-22 Sony United Kingdom Limited Display
JP2009104427A (ja) * 2007-10-24 2009-05-14 Fujifilm Corp Face detection method and device, and face detection program
CN100561503C (zh) * 2007-12-28 2009-11-18 北京中星微电子有限公司 Method and device for locating and tracking eye corners and mouth corners of a face
RU2382407C1 (ru) * 2008-11-21 2010-02-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method and system for face detection
KR101151435B1 (ko) * 2009-11-11 2012-06-01 한국전자통신연구원 Face recognition apparatus and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561710A (zh) * 2009-05-19 2009-10-21 重庆大学 Human-computer interaction method based on face pose estimation
CN102663354A (zh) * 2012-03-26 2012-09-12 腾讯科技(深圳)有限公司 Face calibration method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LU XUJUN ET AL.: "A Method Using Skin-Color And Template Matching For Face Detection", COMPUTER APPLICATIONS AND SOFTWARE, vol. 28, no. 7, July 2011 (2011-07-01), pages 112-114, 140, XP008174076 *
See also references of EP2833288A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111724406A (zh) * 2020-07-14 2020-09-29 苏州精濑光电有限公司 Region connecting and merging method, apparatus, device and medium
CN111724406B (zh) * 2020-07-14 2023-12-08 苏州精濑光电有限公司 Region connecting and merging method, apparatus, device and medium

Also Published As

Publication number Publication date
US9530045B2 (en) 2016-12-27
KR20140137014A (ko) 2014-12-01
KR101683704B1 (ko) 2016-12-07
CN102663354B (zh) 2014-02-19
SG11201405684WA (en) 2014-10-30
PH12014501995B1 (en) 2014-11-24
EP2833288B1 (en) 2020-05-27
RU2601185C2 (ru) 2016-10-27
CA2867365A1 (en) 2013-10-03
ZA201406837B (en) 2015-11-25
PH12014501995A1 (en) 2014-11-24
EP2833288A1 (en) 2015-02-04
RU2014142591A (ru) 2016-05-20
EP2833288A4 (en) 2015-06-10
CN102663354A (zh) 2012-09-12
MY167554A (en) 2018-09-14
AP2014007969A0 (en) 2014-09-30
CL2014002526A1 (es) 2015-04-10
CA2867365C (en) 2016-11-08
US20150016687A1 (en) 2015-01-15

Similar Documents

Publication Publication Date Title
WO2013143390A1 (zh) Face calibration method and system, and computer storage medium
WO2020107866A1 (zh) Text region acquisition method and apparatus, storage medium and terminal device
JP4868530B2 (ja) Image recognition device
CN108537782B (zh) Building image matching and fusion method based on contour extraction
US20060153450A1 (en) Integrated image processor
CN102132323A (zh) Automatic image straightening
CN111353961B (zh) Document surface correction method and device
JP6798752B2 (ja) Method for generating a corrected image, method for generating a selection image of writing or drawings drawn on one or two adjacent pages of a notebook or agenda, computer program for a PC, or mobile application for a smartphone or tablet computer
JP5974589B2 (ja) Image processing apparatus and program
JP2007272435A (ja) Facial feature extraction device and facial feature extraction method
CN106228157B (zh) Color image text paragraph segmentation and recognition method based on image recognition technology
JP2006119817A (ja) Image processing apparatus
JP6797046B2 (ja) Image processing apparatus and image processing program
WO2022160586A1 (zh) Depth detection method and apparatus, computer device and storage medium
JPH10149449A (ja) Image segmentation method, image identification method, image segmentation device and image identification device
CN110288531B (zh) Method and tool for assisting operators in producing standard ID card photos
RU2329535C2 (ru) Method for automatic cropping of photographs
KR101513931B1 (ko) Method for automatic correction of composition and imaging device equipped with this automatic composition-correction function
WO2019237560A1 (zh) Border page number scanning system
WO2023225774A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN115619636A (zh) Image stitching method, electronic device and storage medium
CN111414877B (zh) Table cropping method for removing color borders, image processing device and storage medium
WO2022056875A1 (zh) Nameplate image segmentation method and apparatus, and computer-readable storage medium
CN113096149B (zh) Shaking-table ore band segmentation method based on the three color elements
JP5423221B2 (ja) Image judgment device, image judgment program and image judgment method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13770054

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2867365

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2014002526

Country of ref document: CL

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: P1030/2014

Country of ref document: AE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112014023675

Country of ref document: BR

WWE Wipo information: entry into national phase

Ref document number: 2013770054

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: IDP00201406412

Country of ref document: ID

ENP Entry into the national phase

Ref document number: 20147029988

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2014142591

Country of ref document: RU

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 112014023675

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20140924