CN107491768A - Method, apparatus and electronic device for extracting facial features - Google Patents

Method, apparatus and electronic device for extracting facial features

Info

Publication number
CN107491768A
CN107491768A (application CN201710792163.1A)
Authority
CN
China
Prior art keywords
information
pixel
gradient
hog
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710792163.1A
Other languages
Chinese (zh)
Other versions
CN107491768B (en)
Inventor
Wang Chengbo (王成波)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201710792163.1A
Publication of CN107491768A
Application granted
Publication of CN107491768B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention belongs to the field of face recognition, and more particularly relates to a method, apparatus and electronic device for extracting facial features. In the present invention, a first face model image is obtained from a picture containing a face; the first face model image is transformed into a second face model image of a preset scale; the HOG information of each preset feature point in the second face model image is calculated; the HOG information of the preset feature points is concatenated to form complete HOG information; and finally the facial features in the picture are extracted according to the complete HOG information. Thus, by obtaining a face model image and using the HOG information of the face model image to extract the facial features in the picture, the present invention can improve the efficiency and accuracy of facial feature extraction compared with existing facial feature extraction methods.

Description

Method, apparatus and electronic device for extracting facial features
Technical field
The present invention belongs to the field of face recognition, and more particularly relates to a method, apparatus and electronic device for extracting facial features.
Background art
At present, face recognition technology is widely used in every field of life: for example, face recognition access control and attendance systems and intelligent face recognition surveillance systems in the security field; face auto-focus and smile-shutter technology in the entertainment field; and face payment systems in the financial field.
However, face recognition technology depends heavily on the extraction of facial features (such as the eyes, nose and mouth). An excellent facial feature extraction method can make a face recognition system achieve twice the result with half the effort. It can be said that facial feature extraction, as a basic and key problem of face recognition, plays a vital role in it. However, existing facial feature extraction methods generally suffer from low efficiency and low accuracy.
Summary of the invention
The present invention provides a method, apparatus and electronic device for extracting facial features, aiming to solve the problems of low efficiency and low accuracy in existing facial feature extraction methods.
A first aspect of the present invention provides a method for extracting facial features, the method comprising:
obtaining a first face model image from a picture containing a face;
transforming the first face model image into a second face model image of a preset scale;
calculating the HOG information of each preset feature point in the second face model image;
concatenating the HOG information of the preset feature points to form complete HOG information;
extracting the facial features in the picture according to the complete HOG information.
In a preferred embodiment, calculating the HOG information of each preset feature point in the second face model image comprises:
taking the upper left corner of the second face model image as the origin of coordinates, taking the direction extending horizontally to the right from the origin as the positive X-axis direction and the direction extending vertically downward from the origin as the positive Y-axis direction, establishing a rectangular coordinate system for the second face model image; wherein each preset feature point in the second face model image corresponds to one coordinate point of the rectangular coordinate system;
calculating the gradient information of each pixel according to the pixel values of the pixels in the second face model image, the gradient information including gradient magnitude and gradient direction;
taking the coordinate of the preset feature point as the centre, obtaining the gradient information of the pixels contained in a block of size N*N; wherein the N*N block contains N*N pixels and N is a positive integer greater than 1;
obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block.
In a preferred embodiment, after obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block, the method further comprises:
normalizing the HOG information of the preset feature point to obtain normalized HOG information;
raising the dimension of the normalized HOG information to obtain dimension-raised HOG information.
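The normalization step above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the patent does not specify which norm is used, so the L2 norm and the small constant `eps` are assumptions, and the dimension-raising step is left out because the patent does not define it here.

```python
import numpy as np

def normalize_hog(hog, eps=1e-6):
    # L2-normalize a feature point's HOG vector. The choice of the
    # L2 norm and of eps are assumptions; the patent only says the
    # HOG information is normalized.
    return hog / (np.linalg.norm(hog) + eps)
```
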
In a preferred embodiment, obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block comprises:
quantizing the gradient directions contained in the N*N block with a preset angle θ as the quantization step, obtaining quantized gradient directions;
taking the two perpendicular bisectors of the N*N block as dividing lines, evenly dividing the block into 4 cell units, and traversing the projection magnitudes of the gradient magnitudes of the pixels in the block over each cell unit;
according to the quantized gradient directions, accumulating the projection magnitudes in the same gradient direction to form HOG information of 4*(360/θ) dimensions; wherein the gradient direction of a pixel's projection magnitude is the same as the gradient direction of the pixel.
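As a reading aid, the block-to-HOG procedure just described (quantize directions in steps of θ, split the N*N block into 4 cell units along its perpendicular bisectors, accumulate magnitudes per quantized direction into a 4*(360/θ)-dimensional vector) can be roughly sketched as follows. This is not the patent's implementation: N=16 and θ=45 are illustrative values, and the projection of magnitudes onto the cell units is simplified to direct accumulation within each cell.

```python
import numpy as np

def hog_for_point(M, O, cx, cy, N=16, theta=45):
    # M, O: per-pixel gradient magnitude and direction (degrees),
    # indexed [y, x]; (cx, cy) is the preset feature point's coordinate.
    bins = 360 // theta                      # directions after quantization
    half = N // 2
    block_M = M[cy - half:cy + half, cx - half:cx + half]
    block_O = O[cy - half:cy + half, cx - half:cx + half] % 360
    q = (block_O // theta).astype(int)       # quantized direction index
    # The two perpendicular bisectors split the block into 4 cell units.
    cells = [(slice(0, half), slice(0, half)),
             (slice(0, half), slice(half, N)),
             (slice(half, N), slice(0, half)),
             (slice(half, N), slice(half, N))]
    hog = np.zeros(4 * bins)                 # 4 * (360 / theta) dimensions
    for c, (rs, cs) in enumerate(cells):
        for b in range(bins):
            # Accumulate magnitudes sharing the same quantized direction.
            hog[c * bins + b] = block_M[rs, cs][q[rs, cs] == b].sum()
    return hog
```

With θ = 45 this yields a 4 * 8 = 32-dimensional vector per feature point, matching the 4*(360/θ) formula above.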
A second aspect of the present invention provides an apparatus for extracting facial features, the apparatus comprising:
an acquisition module for obtaining a first face model image from a picture containing a face;
a scale transformation module for transforming the first face model image into a second face model image of a preset scale;
a calculation module for calculating the HOG information of each preset feature point in the second face model image;
a concatenation module for concatenating the HOG information of the preset feature points to form complete HOG information;
an extraction module for extracting the facial features in the picture according to the complete HOG information.
In a preferred embodiment, the calculation module comprises:
a coordinate system establishment unit for establishing a rectangular coordinate system for the second face model image, taking the upper left corner of the image as the origin of coordinates, the direction extending horizontally to the right from the origin as the positive X-axis direction, and the direction extending vertically downward from the origin as the positive Y-axis direction; wherein each preset feature point in the face model image corresponds to one coordinate point of the rectangular coordinate system;
a calculation unit for calculating the gradient information of each pixel according to the pixel values of the pixels in the second face model image, the gradient information including gradient magnitude and gradient direction;
a gradient information acquisition unit for obtaining, with the coordinate of the preset feature point as the centre, the gradient information of the pixels contained in a block of size N*N; wherein the N*N block contains N*N pixels and N is a positive integer greater than 1;
an HOG information acquisition unit for obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block.
In a preferred embodiment, the calculation module further comprises:
a normalization unit for normalizing the HOG information of the feature point to obtain normalized HOG information;
a dimension raising unit for raising the dimension of the normalized HOG information to obtain dimension-raised HOG information.
In a preferred embodiment, the HOG information acquisition unit comprises:
a gradient direction quantization subunit for quantizing the gradient directions contained in the N*N block with a preset angle θ as the quantization step, obtaining quantized gradient directions;
a projection magnitude calculation subunit for taking the two perpendicular bisectors of the N*N block as dividing lines, evenly dividing the block into 4 cell units, and traversing the projection magnitudes of the gradient magnitudes of the pixels in the block over each cell unit;
a projection magnitude accumulation subunit for accumulating, according to the quantized gradient directions, the projection magnitudes in the same gradient direction to form HOG information of 4*(360/θ) dimensions; wherein the gradient direction of a pixel's projection magnitude is the same as the gradient direction of the pixel.
A third aspect of the present invention provides an electronic device comprising a memory and a processor, the processor being configured to implement the method of any of the above embodiments when executing a computer program stored in the memory.
A fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the method of any of the above embodiments when executed by a processor.
In the present invention, a first face model image is obtained from the picture containing a face, the first face model image is transformed into a second face model image of a preset scale, the HOG information of each preset feature point in the second face model image is calculated, the HOG information of the preset feature points is concatenated to form complete HOG information, and finally the facial features in the picture are extracted according to the complete HOG information. Thus, by obtaining a face model image and using the HOG information of the face model image to extract the facial features in the picture, the present invention can improve the efficiency and accuracy of facial feature extraction compared with existing facial feature extraction methods.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of the method for extracting facial features provided by an embodiment of the present invention;
Fig. 2 is a flowchart of step S30 in the method for extracting facial features provided by an embodiment of the present invention;
Fig. 3 is another flowchart of step S30 in the method for extracting facial features provided by an embodiment of the present invention;
Fig. 4 is a flowchart of step S304 in the method for extracting facial features provided by an embodiment of the present invention;
Fig. 5 is a functional block diagram of the apparatus for extracting facial features provided by an embodiment of the present invention;
Fig. 6 is a structural block diagram of the calculation module 30 in the apparatus for extracting facial features provided by an embodiment of the present invention;
Fig. 7 is another structural block diagram of the calculation module 30 in the apparatus for extracting facial features provided by an embodiment of the present invention;
Fig. 8 is a structural block diagram of the HOG information acquisition unit 304 in the apparatus for extracting facial features provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the electronic device provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of calculating the projection magnitudes of the pixels in an N*N block over the 4 cell units, provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 shows the implementation flow of the method for extracting facial features provided by an embodiment of the present invention. According to different requirements, the order of steps in the flowchart may be changed and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Step S10: obtain a first face model image from a picture containing a face.
In this step, a first face model image needs to be obtained from a picture containing a face. The picture containing a face may be a picture in a picture database, a picture output by a video recording device or a video surveillance device, or a picture from another source; this is not specifically limited. In a preferred embodiment, the picture containing a face is a picture output by a video recording device or a video surveillance device. In addition, the size of the face in the picture and its location in the picture are not specifically limited. The first face model image is a picture extracted from the original picture that contains only the face; for example, by determining the upper, lower, left and right boundaries of the face in the picture, the face is extracted from the picture to obtain the first face model image.
In a preferred embodiment, the above step S10 of obtaining the first face model image from the picture containing a face is specifically: obtaining the first face model image from the picture containing a face using image segmentation.
Image segmentation is the process of dividing an image into several specific regions with unique properties and extracting the target of interest; it is the key step from image processing to image analysis. At present, existing image segmentation methods mainly include threshold-based methods, region-based methods, edge-based methods, histogram-based methods and methods based on specific theories, among which histogram-based segmentation is a particularly effective image segmentation method. Therefore, in this step, the face in the picture can be segmented out using image segmentation, thereby obtaining the face model image.
Step S20: transform the first face model image into a second face model image of a preset scale.
After the first face model image is obtained in step S10, in order to further improve the efficiency and accuracy of facial feature recognition, in step S20 the first face model image can be transformed into a second face model image of a preset scale. The preset scale can be set in advance according to actual conditions and is not specifically limited here.
In a preferred embodiment, transforming the first face model image into a second face model image of a preset scale is specifically: transforming the first face model image into the second face model image of the preset scale using an interpolation algorithm. In the embodiment of the present invention, interpolation can be performed on the pixels of the first face model image before transformation, so that the first face model image before transformation is transformed into the second face model image of the preset scale. In a preferred embodiment, the above interpolation algorithm is the Lagrange interpolation algorithm, the Newton interpolation algorithm, the Hermite interpolation algorithm, a piecewise interpolation algorithm, or spline interpolation.
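As an illustration of the scale transformation step, the following sketch resizes a grayscale image to a preset scale with bilinear interpolation, one simple piecewise interpolation scheme; the patent allows several interpolation algorithms and prescribes none of the names or parameters below, which are purely illustrative.

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    # Resize a 2-D grayscale image to (out_h, out_w) by mapping each
    # output pixel back to a fractional input coordinate and blending
    # its four nearest neighbours.
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]        # vertical blend weights
    wx = (xs - x0)[None, :]        # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

A first face model image of any size can be passed through such a routine to obtain the second face model image at the preset scale.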
Step S30: calculate the HOG information of each preset feature point in the second face model image.
In step S30, for the second face model image of the preset scale, the HOG information of each preset feature point in it is calculated. The preset feature points can be set in advance, and combinations of multiple feature points constitute the features of the face, for example the eyebrows, eyes, nose, mouth and facial contour. There can be multiple preset feature points; in a preferred embodiment, the number of preset feature points is 74, and the 74 preset feature points together constitute the eyebrows, eyes, nose, mouth and facial contour of the face. HOG (Histogram of Oriented Gradients) information is a feature descriptor used for object detection in computer vision and image processing; it constitutes features by calculating and counting the gradient histograms of local regions of an image, and is widely adopted because of its good characteristics and extraction effect. In step S30, the HOG information of each preset feature point needs to be calculated separately, and the method of calculating the HOG information is the same for every preset feature point.
Step S40: concatenate the HOG information of the preset feature points to form complete HOG information.
After the HOG information of each preset feature point is obtained in step S30, in step S40 the obtained HOG information of the preset feature points can be concatenated to form the complete HOG information of the second face model image. The HOG information is described in the form of a feature vector.
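The concatenation in step S40 amounts to joining the per-point feature vectors end to end. A minimal sketch, where 74 is the number of preset feature points from the preferred embodiment above and the per-point HOG dimension of 32 is merely illustrative:

```python
import numpy as np

# One HOG vector per preset feature point; 74 points as in the
# preferred embodiment, 32 dimensions per point is illustrative.
point_hogs = [np.random.rand(32) for _ in range(74)]

# Step S40: concatenate into the complete HOG feature vector.
complete_hog = np.concatenate(point_hogs)
print(complete_hog.shape)  # (2368,)
```
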
Step S50: extract the facial features in the picture according to the complete HOG information.
After the complete HOG information of the second face model image is obtained in step S40, the facial features in the picture can be extracted in this step according to the complete HOG information.
In a preferred embodiment, extracting the facial features in the picture according to the complete HOG information is specifically: extracting the facial features in the picture using the SDM algorithm according to the complete HOG information. In the embodiment of the present invention, the feature points of the first face model image before transformation can be calculated and mapped out using the SDM algorithm according to the above complete HOG information, so as to identify the facial features in the picture containing a face. The SDM (Supervised Descent Method, i.e. face alignment) algorithm automatically finds the positions of landmark features such as the eyebrows, eyes, nose, mouth and facial contour on the basis of an already detected face.
In the embodiment of the present invention, a first face model image is obtained from the picture containing a face, the first face model image is transformed into a second face model image of a preset scale, the HOG information of each preset feature point in the second face model image is calculated, the HOG information of the preset feature points is concatenated to form complete HOG information, and finally the facial features in the picture are extracted according to the complete HOG information. Thus, by obtaining a face model image and using the HOG information of the first face model image or the second face model image to extract the facial features in the picture, the embodiment of the present invention can improve the efficiency and accuracy of facial feature extraction compared with existing facial feature extraction methods.
Fig. 2 shows the specific implementation flow of step S30 in the method for extracting facial features provided by an embodiment of the present invention. According to different requirements, the order of steps in the flowchart may be changed and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Step S301: taking the upper left corner of the second face model image as the origin of coordinates, the direction extending horizontally to the right from the origin as the positive X-axis direction and the direction extending vertically downward from the origin as the positive Y-axis direction, establish a rectangular coordinate system for the face model image; wherein each preset feature point in the face model image corresponds to one coordinate point of the rectangular coordinate system.
In order to calculate the gradient information of the pixels in the second face model image, the rectangular coordinate system of the second face model image must first be established.
Step S302: calculate the gradient information of each pixel according to the pixel values of the pixels in the second face model image; the gradient information includes gradient magnitude and gradient direction.
After the rectangular coordinate system of the second face model image is established, the gradient information of each pixel is calculated according to the pixel values of the pixels in the second face model image, in preparation for the subsequent calculation of the HOG information of the second face model image. A pixel is the basic encoding of the primary colours and their grey levels; it is the elementary unit that makes up an image, the smallest unit in an image represented by a digital sequence. The pixel value of a pixel is represented by its RGB value in the image.
In a preferred embodiment, the gradient information of each pixel in the face model image can be calculated according to the following formulas; the gradient information includes gradient magnitude and gradient direction.
1. Calculating the gradient magnitude: M(i, j) = sqrt(dx² + dy²);
where M(i, j) denotes the gradient magnitude of the pixel at coordinate (i, j), dx denotes the gradient magnitude of the pixel at (i, j) in the X-axis direction, and dy denotes the gradient magnitude of the pixel at (i, j) in the Y-axis direction.
(1) If the pixel at coordinate (i, j) is not a boundary pixel:
then its gradient magnitude in the X-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the X-axis direction, i.e.:
dx = [I(i+1, j) - I(i-1, j)] / 2;
and its gradient magnitude in the Y-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the Y-axis direction, i.e.:
dy = [I(i, j+1) - I(i, j-1)] / 2;
where I denotes the pixel value of an image pixel.
(2) If the pixel at coordinate (i, j) is a boundary pixel (excluding the pixels at the lower left, upper left, upper right and lower right corners of the image):
① If the pixel at (i, j) lies on the left border of the image:
then its gradient magnitude in the X-axis direction is the difference between the pixel value of the pixel adjacent to it in the X-axis direction and its own pixel value, i.e.:
dx = I(i+1, j) - I(i, j);
and its gradient magnitude in the Y-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the Y-axis direction, i.e.:
dy = [I(i, j+1) - I(i, j-1)] / 2;
② If the pixel at (i, j) lies on the right border of the image:
then its gradient magnitude in the X-axis direction is the difference between its own pixel value and the pixel value of the pixel adjacent to it in the X-axis direction, i.e.:
dx = I(i, j) - I(i-1, j);
and its gradient magnitude in the Y-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the Y-axis direction, i.e.:
dy = [I(i, j+1) - I(i, j-1)] / 2;
③ If the pixel at (i, j) lies on the top border of the image:
then its gradient magnitude in the Y-axis direction is the difference between the pixel value of the pixel adjacent to it in the Y-axis direction and its own pixel value, i.e.:
dy = I(i, j+1) - I(i, j);
and its gradient magnitude in the X-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the X-axis direction, i.e.:
dx = [I(i+1, j) - I(i-1, j)] / 2;
④ If the pixel at (i, j) lies on the bottom border of the image:
then its gradient magnitude in the Y-axis direction is the difference between its own pixel value and the pixel value of the pixel adjacent to it in the Y-axis direction, i.e.:
dy = I(i, j) - I(i, j-1);
and its gradient magnitude in the X-axis direction is half the difference of the pixel values of the two pixels adjacent to it in the X-axis direction, i.e.:
dx = [I(i+1, j) - I(i-1, j)] / 2;
(3) If the pixel at (i, j) is the pixel at the lower left, upper left, upper right or lower right corner of the image:
① If the pixel at (i, j) is the pixel at the lower left corner of the image:
then the pixel at (i, j) lies on both the left border and the bottom border of the image; its gradient magnitude in the X-axis direction is the difference between the pixel value of the pixel adjacent to it in the X-axis direction and its own pixel value, i.e.:
dx = I(i+1, j) - I(i, j);
and its gradient magnitude in the Y-axis direction is the difference between its own pixel value and the pixel value of the pixel adjacent to it in the Y-axis direction, i.e.:
dy = I(i, j) - I(i, j-1);
② If the pixel at (i, j) is the pixel at the upper left corner of the image:
then the pixel at (i, j) lies on both the left border and the top border of the image; its gradient magnitude in the X-axis direction is the difference between the pixel value of the pixel adjacent to it in the X-axis direction and its own pixel value, i.e.:
dx = I(i+1, j) - I(i, j);
and its gradient magnitude in the Y-axis direction is the difference between the pixel value of the pixel adjacent to it in the Y-axis direction and its own pixel value, i.e.:
dy = I(i, j+1) - I(i, j);
3. it is the pixel in the image upper right corner that if coordinate, which is the pixel of (i, j),:
Illustrate coordinate for the pixel that the pixel of (i, j) is both image right side boundary, be the pixel of image upper bound again, then Changing coordinates be the pixel that the gradient magnitude of the pixel of (i, j) in the X-axis direction is changing coordinates pixel value with X-direction The difference of the pixel value of the upper pixel adjacent with changing coordinates, i.e.,:
Dx=I (i, j)-I (i-1, j);
Changing coordinates are that the gradient magnitude of the pixel of (i, j) in the Y-axis direction is adjacent with changing coordinates in the Y-axis direction Pixel pixel value and changing coordinates pixel pixel value difference, i.e.,:
Dy=I (i, j+1)-I (i, j);
4. it is the pixel in the image lower right corner that if coordinate, which is the pixel of (i, j),:
Illustrate coordinate for the pixel that the pixel of (i, j) is both image right side boundary, be the pixel of image lower limits again, then Changing coordinates be the pixel that the gradient magnitude of the pixel of (i, j) in the X-axis direction is changing coordinates pixel value with X-direction The difference of the pixel value of the upper pixel adjacent with changing coordinates, i.e.,:
Dx=I (i, j)-I (i-1, j);
Changing coordinates be the pixel that the gradient magnitude of the pixel of (i, j) in the Y-axis direction is changing coordinates pixel value with The difference of the pixel value of the pixel of the coordinate adjacent with changing coordinates in the Y-axis direction, i.e.,:
Dy=I (i, j)-I (i, j-1);
2. Calculating the gradient direction: O (i, j)=arctan (dy/dx);
where O (i, j) is the gradient direction of the pixel at (i, j), dx denotes the gradient amplitude of the pixel at (i, j) in the X-axis direction, and dy denotes the gradient amplitude of the pixel at (i, j) in the Y-axis direction.
When calculating the gradient direction, the dx and dy already computed in step 1 above can be reused: the gradient direction of the pixel at (i, j) is obtained directly as O (i, j)=arctan (dy/dx).
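The border rules of the cases above and the direction formula can be sketched in one helper. This is an illustrative reading, not the patent's own code: the amplitude M(i, j)=sqrt(dx^2+dy^2) and the use of the signed arctan2 (so the direction covers the full 0 to 360 degree range needed by the 18-bin quantization of step S3041) are assumptions, since the text itself writes arctan(dy/dx).

```python
import numpy as np

def pixel_gradient(img, i, j):
    """Gradient amplitude and direction of the pixel at (i, j).

    img is assumed indexed [x, y] to follow the patent's I(i, j) notation.
    Central differences are used in the interior, one-sided differences on
    the borders and corners, as described in cases (1)-(3) above."""
    n_x, n_y = img.shape
    # X direction
    if i == 0:                                     # left border: forward difference
        dx = float(img[i + 1, j]) - float(img[i, j])
    elif i == n_x - 1:                             # right border: backward difference
        dx = float(img[i, j]) - float(img[i - 1, j])
    else:                                          # interior: central difference
        dx = (float(img[i + 1, j]) - float(img[i - 1, j])) / 2.0
    # Y direction
    if j == 0:                                     # upper border: forward difference
        dy = float(img[i, j + 1]) - float(img[i, j])
    elif j == n_y - 1:                             # lower border: backward difference
        dy = float(img[i, j]) - float(img[i, j - 1])
    else:                                          # interior: central difference
        dy = (float(img[i, j + 1]) - float(img[i, j - 1])) / 2.0
    m = np.hypot(dx, dy)                           # gradient amplitude (assumed sqrt(dx^2+dy^2))
    o = np.degrees(np.arctan2(dy, dx)) % 360.0     # gradient direction in [0, 360)
    return m, o
```

A pixel at an interior coordinate uses both halved central differences; a corner pixel falls back to the two one-sided differences listed in case (3).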
Step S303, taking the coordinate of a preset feature point as the centre, acquire the gradient information of the pixels contained in a block of size N*N; the N*N block contains N*N pixels, N being a positive integer greater than 1.
After the gradient information of every pixel in the second face model pattern has been obtained, note that each preset feature point in the second face model pattern corresponds to one coordinate in the rectangular coordinate system. Taking the coordinate of the preset feature point as the centre, the gradient information of the pixels contained in an N*N block (English name: block) is taken. An N*N block is a block composed of N*N pixels, and N is a positive integer greater than 1. In a preferred embodiment, the block contains 32*32 pixels, i.e. N is 32.
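Step S303 amounts to a window slice around the feature point. A minimal sketch, assuming the image is a NumPy array indexed [x, y] and that blocks near the image border are clamped to stay inside the image (the text does not specify out-of-range handling, so the clamping is an assumption):

```python
import numpy as np

def block_around(img, cx, cy, n=32):
    """Returns the N*N block of pixels centred (as nearly as possible)
    on the preset feature point at coordinate (cx, cy).

    The block is shifted inward when the feature point is closer than
    N/2 to a border, so the result is always exactly n x n pixels."""
    half = n // 2
    x0 = min(max(cx - half, 0), img.shape[0] - n)  # clamp left/right
    y0 = min(max(cy - half, 0), img.shape[1] - n)  # clamp top/bottom
    return img[x0:x0 + n, y0:y0 + n]
```

In the preferred embodiment (N = 32) each feature point therefore contributes the gradient information of 1024 pixels to the following steps.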
Step S304, obtain the HOG information of the preset feature point from the gradient information of the pixels contained in the N*N block.
Once the gradient information of the pixels contained in the N*N block has been acquired in step S303, the HOG information of the preset feature point can be obtained from it.
Fig. 3 shows another implementation flow of step S30 in the method for extracting face features provided by an embodiment of the present invention. Depending on requirements, the order of the steps in the flow chart may change and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
In order to weaken or even eliminate the sensitivity of the feature points to picture noise and to light and shade, and to obtain more reliable, better-performing feature point data, on the basis of Fig. 2 described above, step S30 further includes, as shown in Fig. 3:
Step S305, normalizing the HOG information of the preset feature point to obtain normalized HOG information.
Since HOG information is expressed as a feature vector, suppose the HOG information obtained in step S304 is 4*18-dimensional. Specifically, the gradient amplitude of every dimension is first squared, and the squares of all the gradient amplitudes are accumulated into one total. The gradient amplitude of each dimension is then divided by this total, which completes the normalization and yields the normalized HOG information. Normalization does not change the dimensionality: the result is still 4*18-dimensional HOG information.
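The normalization described above (square every amplitude, accumulate the squares into one total, divide each amplitude by that total) can be sketched as follows. The epsilon guard against an all-zero vector is an addition of this sketch, not something the text specifies:

```python
import numpy as np

def normalize_hog(hog):
    """Step S305: divide each gradient amplitude by the accumulated
    sum of squared amplitudes. The dimensionality is unchanged."""
    hog = np.asarray(hog, dtype=float)
    total = np.sum(np.square(hog))       # accumulated squares of all dimensions
    return hog / (total + 1e-12)         # epsilon guards an all-zero block
```

For a 4*18-dimensional input the output is again 4*18-dimensional, as stated above.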
Step S306, performing dimension-increase processing on the normalized HOG information to obtain dimension-increased HOG information.
After the normalized HOG information has been obtained, the sensitivity of the features to picture noise and to light and shade can be weakened or even eliminated further by performing dimension-increase processing on the normalized HOG information, so as to obtain more reliable, better-performing feature point data. Specifically, the normalized 4*18-dimensional HOG information is first merged symmetrically according to gradient direction (directions 180 degrees apart are combined), giving 4*9-dimensional HOG information. The merged gradient amplitudes of the 9 directions of each cell unit are then accumulated into one energy value per cell, and the 4 cell energies are appended to the descriptor of each cell, forming 4*4 additional dimensions. Finally, the 4*18-dimensional HOG information obtained in step S304, the 4*9-dimensional HOG information above and the 4*4 energy dimensions are gathered together, forming HOG information of 4*18+4*9+4*4=4*(18+9+4)=4*31=124 dimensions. Thus, for every preset feature point, 124-dimensional HOG information is obtained.
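The dimension-increase step can be sketched as below. Note that the source's arithmetic is ambiguous (it writes 4*18+4*9+4 yet factors it as 4*(18+9+4)=124); the layout chosen here, in which each of the 4 cells contributes 18 signed bins, 9 merged bins and the 4 cell energies, is an assumption made so that the total comes out to the stated 4*31=124 dimensions, in the style of the well-known 31-dimensional HOG variant.

```python
import numpy as np

def augment_hog(hog18):
    """Step S306 sketch. hog18: normalized (4, 18) array, one row per
    cell unit, 18 orientation bins per row.

    The symmetric merge adds bin k to bin k+9 (directions 180 degrees
    apart), giving 9 merged bins per cell; the per-cell energies are the
    sums of the merged bins (an assumption, see the lead-in above)."""
    hog18 = np.asarray(hog18, dtype=float).reshape(4, 18)
    hog9 = hog18[:, :9] + hog18[:, 9:]     # (4, 9) symmetric merge
    energy = hog9.sum(axis=1)              # one energy value per cell, (4,)
    # each cell: its 18 bins, its 9 merged bins, the 4 cell energies
    parts = [np.concatenate([hog18[c], hog9[c], energy]) for c in range(4)]
    return np.concatenate(parts)           # 4 * (18 + 9 + 4) = 124 dimensions
```

With all-ones input, each merged bin is 2 and each cell energy is 18, which makes the layout easy to verify.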
In an embodiment of the present invention, when the HOG information of each preset feature point is calculated, a coordinate system is first established for the scale-transformed second face model pattern; the gradient information of every pixel in the second face model pattern is then calculated, in preparation for the subsequent HOG computation and face feature extraction; next, with the coordinate of the preset feature point as the centre, the gradient information of the pixels contained in an N*N block is taken; finally, the HOG information of the preset feature point is obtained from the gradient information contained in the N*N block. On this basis, the HOG information of the preset feature point is normalized, and dimension-increase processing is then applied to the normalized HOG information. The method for extracting face features in the embodiment of the present invention can therefore weaken or even eliminate the sensitivity of the feature points to picture noise and to light and shade, obtain more reliable, better-performing feature point data, and further improve the efficiency and accuracy of the face feature extraction method.
Fig. 4 shows the specific implementation flow of step S304 in the method for extracting face features provided by an embodiment of the present invention. Depending on requirements, the order of the steps in the flow chart may change and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Step S3041, quantizing the gradient directions contained in the N*N block with a preset angle θ as the quantization step, to obtain quantized gradient directions.
In an embodiment of the present invention, the preset angle θ may be set to 10 degrees, or to another value according to the actual situation; it is not specifically limited here. In a preferred embodiment, the preset angle θ is set to 20 degrees. In step S3041, the gradient directions contained in the N*N block are quantized with the preset angle θ as the quantization step, giving the quantized gradient directions. As shown in Table 1, if the preset angle θ is 20 degrees there are 360 degrees/20 degrees=18 quantized gradient directions, corresponding respectively to the 18 integers 0 to 17: [0, 20) corresponds to the integer 0, [20, 40) to the integer 1, and so on; [320, 340) corresponds to the integer 16 and [340, 360) to the integer 17.
Angular interval:     [0,20)  [20,40)  [40,60)  ···  [300,320)  [320,340)  [340,360)
Quantized direction:  0       1        2        ···  15         16         17
Table 1
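Table 1's mapping is a single integer division. A sketch, assuming the gradient directions are given in degrees in [0, 360):

```python
def quantize_direction(o, theta=20.0):
    """Step S3041: maps a gradient direction o in [0, 360) to one of
    360/theta integer bins; theta = 20 degrees gives bins 0..17 as in
    Table 1 ([0,20) -> 0, [20,40) -> 1, ..., [340,360) -> 17)."""
    return int(o // theta) % int(360 // theta)
```

The final modulo only matters if a direction of exactly 360 degrees is passed in; directions already reduced to [0, 360) are unaffected by it.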
Step S3042, taking the two perpendicular bisectors of the N*N block as dividing lines, divide the N*N block evenly into 4 cell units, and calculate by traversal the projection amplitude of the gradient amplitude of each pixel in the N*N block on each cell unit.
Taking the two perpendicular bisectors of the N*N block as dividing lines, the N*N block is divided evenly into 4 cell units (English name: cell), and the projection amplitudes of the gradient amplitudes of the pixels in the N*N block on each cell unit are calculated by traversal. Supposing the N*N block contains 32*32 pixels, the projection amplitudes of those 32*32 pixels on each cell unit must be calculated by traversal.
Specifically, the projection amplitude of each pixel on the 4 cell units can be calculated according to bilinear interpolation, as shown in Fig. 10.
(x0, y0), (x1, y1), (x2, y2) and (x3, y3) are the coordinates of the centre points of cell unit 0, cell unit 1, cell unit 2 and cell unit 3 respectively. The projection amplitude of the gradient amplitude of the pixel at (i, j) on each cell unit is calculated as follows:
M (cell0, i, j)=M (i, j) * (y2-j) * (x1-i),
M (cell1, i, j)=M (i, j) * (y2-j) * (i-x1),
M (cell2, i, j)=M (i, j) * (j-y0) * (x1-i),
M (cell3, i, j)=M (i, j) * (j-y0) * (i-x1);
where M (cell0, i, j), M (cell1, i, j), M (cell2, i, j) and M (cell3, i, j) are the projection amplitudes of the gradient amplitude of the pixel at (i, j) on cell unit 0, cell unit 1, cell unit 2 and cell unit 3 respectively. Using the above formulas, the projection amplitudes on each cell unit of the gradient amplitudes of the pixels contained in the N*N block are calculated by traversal.
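The four formulas can be transcribed directly. Note that, taken verbatim, they can produce negative weights for pixels on the far side of a cell centre; the text does not mention clamping or normalizing the weights, so none is added here:

```python
def project_amplitude(m, i, j, x1, y0, y2):
    """Projection amplitudes of one pixel's gradient amplitude m onto the
    4 cell units, transcribed verbatim from the formulas above. x1 is the
    x-coordinate of the centres of cell units 1 and 3, y0 and y2 the
    y-coordinates of the centres of cell units 0/1 and 2/3 (per Fig. 10)."""
    return (m * (y2 - j) * (x1 - i),   # M(cell0, i, j)
            m * (y2 - j) * (i - x1),   # M(cell1, i, j)
            m * (j - y0) * (x1 - i),   # M(cell2, i, j)
            m * (j - y0) * (i - x1))   # M(cell3, i, j)
```

Standard bilinear interpolation would additionally clamp negative weights to zero and normalize by the cell spacing; this sketch deliberately follows the patent's formulas as written.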
Step S3043, according to the quantized gradient directions, accumulate the projection amplitudes that lie on the same gradient direction, forming HOG information of 4*(360/θ) dimensions; the gradient direction of a pixel's projection amplitude is the same as the gradient direction of the pixel.
After the projection amplitudes on each cell unit of all the pixels contained in the N*N block have been calculated in step S3042, the projection amplitudes within each cell unit that lie on the same quantized gradient direction (from step S3041) are accumulated together; the gradient direction of a pixel's projection amplitude is the same as the gradient direction of the pixel itself. Each cell unit thus forms HOG information of 360/θ dimensions, and the 4 cell units together form HOG information of 4*(360/θ) dimensions. Supposing the preset angle θ is 20 degrees, the 4 cell units form 4*18-dimensional HOG information in total through step S3043.
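Step S3043 can be sketched as follows, assuming the per-pixel projection amplitudes from step S3042 are already available as a (4, N, N) array, one layer per cell unit:

```python
import numpy as np

def accumulate_hog(proj, directions, theta=20.0):
    """Step S3043: proj is a (4, N, N) array of projection amplitudes of
    every pixel on each of the 4 cell units; directions is an (N, N)
    array of gradient directions in degrees in [0, 360).

    Amplitudes sharing a quantized direction are accumulated into one
    bin, yielding a (4, 360/theta) HOG: 4*18 dimensions for theta=20."""
    bins = int(360 // theta)
    q = (directions // theta).astype(int) % bins   # quantized direction per pixel
    hog = np.zeros((4, bins))
    for c in range(4):                             # traverse the 4 cell units
        for b in range(bins):
            hog[c, b] = proj[c][q == b].sum()      # accumulate same-direction amplitudes
    return hog
```

Flattening the (4, 18) result row by row gives the 4*18-dimensional feature vector that step S305 then normalizes.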
In an embodiment of the present invention, when the HOG information of a preset feature point is obtained from the gradient information of the pixels contained in the N*N block, the gradient directions in the N*N block are first quantized; the N*N block is then divided evenly into 4 cell units and the projection amplitudes of the pixels' gradient amplitudes on each cell unit are calculated by traversal; finally the projection amplitudes on the same gradient direction are accumulated together, which yields the HOG information of the preset feature point. By obtaining the HOG information of the preset feature point in this way, the method of the embodiment of the present invention can further improve the efficiency and accuracy of the face feature extraction method.
Fig. 5 shows a functional block diagram of the device for extracting face features provided by an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Referring to Fig. 5, the modules contained in the device for extracting face features are used to perform the steps of the embodiment corresponding to Fig. 1; see Fig. 1 and the related description of that embodiment for details, which are not repeated here. In the embodiment of the present invention, the device for extracting face features includes an acquisition module 10, a scale transformation module 20, a calculation module 30, a concatenation module 40 and an extraction module 50.
The acquisition module 10 is used to obtain a first face model pattern from a picture containing a face.
The scale transformation module 20 is used to transform the first face model pattern into a second face model pattern of a preset scale.
In a preferred embodiment, the scale transformation module 20 is specifically used to transform the first face model pattern into the second face model pattern of the preset scale by means of an interpolation algorithm.
The calculation module 30 is used to calculate the HOG information of each preset feature point in the second face model pattern.
The concatenation module 40 is used to concatenate the HOG information of each preset feature point into complete HOG information.
The extraction module 50 is used to extract the face features in the picture according to the complete HOG information.
In a preferred embodiment, the extraction module 50 is specifically used to extract the face features in the picture according to the complete HOG information by means of the SDM algorithm.
In the embodiment of the present invention, the acquisition module 10 obtains the first face model pattern from the picture containing the face; the scale transformation module 20 transforms the first face model pattern into the second face model pattern of the preset scale; the calculation module 30 calculates the HOG information of each preset feature point in the second face model pattern; the concatenation module 40 concatenates the HOG information of each preset feature point into complete HOG information; and finally the extraction module 50 extracts the face features in the picture according to the complete HOG information. The embodiment of the present invention thus obtains the face model pattern through the acquisition module 10, and the extraction module 50 extracts the face features in the picture using the HOG information of the second face model pattern; compared with existing methods of face feature extraction, the device for extracting face features in the embodiment of the present invention can improve the efficiency and accuracy of the face feature extraction method.
Fig. 6 shows a structural block diagram of the calculation module 30 in the device for extracting face features provided by an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Referring to Fig. 6, the units contained in the calculation module 30 are used to perform the steps of the embodiment corresponding to Fig. 2; see Fig. 2 and the related description of that embodiment for details, which are not repeated here. In the embodiment of the present invention, the calculation module 30 includes a coordinate system establishment unit 301, a calculation unit 302, a gradient information acquisition unit 303 and a HOG information acquisition unit 304.
The coordinate system establishment unit 301 is used to establish the rectangular coordinate system of the second face model pattern, with the upper-left corner of the second face model pattern as the coordinate origin, the direction extending rightwards from the coordinate origin as the positive X-axis direction, and the direction extending downwards from the coordinate origin as the positive Y-axis direction; each preset feature point in the face model pattern corresponds to one coordinate point of the rectangular coordinate system.
The calculation unit 302 is used to calculate the gradient information of each pixel from the pixel value of each pixel in the second face model pattern; the gradient information includes gradient amplitude and gradient direction.
The gradient information acquisition unit 303 is used to acquire, with the coordinate of a preset feature point as the centre, the gradient information of the pixels contained in a block of size N*N.
The HOG information acquisition unit 304 is used to obtain the HOG information of the preset feature point from the gradient information of the pixels contained in the N*N block.
Fig. 7 shows another structural block diagram of the calculation module 30 in the device for extracting face features provided by an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Referring to Fig. 7, the units contained in the calculation module 30 are used to perform the steps of the embodiment corresponding to Fig. 3; see Fig. 3 and the related description of that embodiment for details, which are not repeated here. As shown in Fig. 7, on the basis of Fig. 6 described above, the calculation module 30 further includes a normalization unit 305 and a dimension-increase unit 306.
The normalization unit 305 is used to normalize the HOG information of the preset feature point to obtain normalized HOG information.
The dimension-increase unit 306 is used to perform dimension-increase processing on the normalized HOG information to obtain dimension-increased HOG information.
In the embodiment of the present invention, when the calculation module 30 calculates the HOG information of each preset feature point, the coordinate system establishment unit 301 first establishes a coordinate system for the scale-transformed second face model pattern; the calculation unit 302 then calculates the gradient information of each pixel in the second face model pattern, in preparation for the subsequent HOG computation and face feature extraction; next the gradient information acquisition unit 303 takes, with the coordinate of the preset feature point as the centre, the gradient information of the pixels contained in an N*N block; finally the HOG information acquisition unit 304 obtains the HOG information of the preset feature point from the gradient information contained in the N*N block. On this basis, the normalization unit 305 normalizes the HOG information of the preset feature point and the dimension-increase unit 306 performs dimension-increase processing on the normalized HOG information. The device for extracting face features in the embodiment of the present invention can therefore weaken or even eliminate the sensitivity of the feature points to picture noise and to light and shade, obtain more reliable, better-performing feature point data, and further improve the efficiency and accuracy of the face feature extraction method.
Fig. 8 shows a structural block diagram of the HOG information acquisition unit 304 in the device for extracting face features provided by an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
Referring to Fig. 8, the subunits contained in the HOG information acquisition unit 304 are used to perform the steps of the embodiment corresponding to Fig. 4; see Fig. 4 and the related description of that embodiment for details, which are not repeated here. In the embodiment of the present invention, the HOG information acquisition unit 304 includes a gradient direction quantization subunit 3041, a projection amplitude calculation subunit 3042 and a projection amplitude accumulation subunit 3043.
The gradient direction quantization subunit 3041 is used to quantize the gradient directions contained in the N*N block with the preset angle θ as the quantization step, to obtain quantized gradient directions.
The projection amplitude calculation subunit 3042 is used to divide the N*N block evenly into 4 cell units, taking the two perpendicular bisectors of the N*N block as dividing lines, and to calculate by traversal the projection amplitudes of the gradient amplitudes of the pixels in the block on each cell unit.
The projection amplitude accumulation subunit 3043 is used to accumulate, according to the quantized gradient directions, the projection amplitudes on the same gradient direction, forming HOG information of 4*(360/θ) dimensions; the gradient direction of a pixel's projection amplitude is the same as the gradient direction of the pixel.
In the embodiment of the present invention, when the HOG information acquisition unit 304 obtains the HOG information of a feature point from the gradient information of the pixels contained in each N*N block, the gradient direction quantization subunit 3041 first quantizes the gradient directions in the N*N block; the projection amplitude calculation subunit 3042 then divides the block evenly into 4 cell units and calculates by traversal the projection amplitudes of the pixels' gradient amplitudes in the N*N block on each cell unit; finally the projection amplitude accumulation subunit 3043 accumulates the projection amplitudes on the same gradient direction, which yields the HOG information of the feature point. By obtaining the HOG information of the feature point through the above units, the embodiment of the present invention can further improve the efficiency and accuracy of the face feature extraction method.
Given that the device for extracting face features described above has the advantage of improving the efficiency and accuracy of face feature extraction, an embodiment of the present invention also provides a face recognition system, and the face recognition system includes the device for extracting face features described in any of the above embodiments.
Fig. 9 is a structural schematic diagram of an electronic device of a preferred embodiment for realizing the method for extracting face features provided by an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:
The electronic device 1 includes, but is not limited to, any electronic product capable of human-computer interaction with a user by means of a keyboard, mouse, remote control, touch pad, voice-operated device or the like, for example a personal computer, tablet computer, smartphone, personal digital assistant (Personal Digital Assistant, PDA), game machine, Internet Protocol television (Internet Protocol Television, IPTV) or intelligent wearable device. The network in which the electronic device 1 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network or a virtual private network (Virtual Private Network, VPN).
As shown in Fig. 9, the electronic device 1 includes a memory 2, a processor 3 and an input/output device 4.
The memory 2 is used to store the program of the method for extracting face features and various data, and to enable high-speed, automatic access to programs or data while the electronic device 1 is running. The memory 2 may be an external storage device and/or an internal storage device of the electronic device 1. Further, the memory 2 may be a circuit with a storage function that has no physical form within an integrated circuit, such as a RAM (Random-Access Memory) or FIFO (First In First Out), or it may be a storage device with a physical form, such as a memory stick or a TF card (Trans-flash Card).
The processor 3 may be a central processing unit (CPU, Central Processing Unit). A CPU is a very-large-scale integrated circuit and is the arithmetic core (Core) and control core (Control Unit) of the electronic device 1. The processor 3 can execute the operating system of the electronic device 1 and the installed application programs and program code, for example execute the modules or units in the device for extracting face features so as to realize the method for extracting face features.
The input/output device 4 is mainly used to realize the input/output function of the electronic device 1, for example to receive and send input digits or character information, or to display information input by the user or supplied to the user, together with the various menus of the electronic device 1.
If the integrated modules/units of the electronic device 1 are realized in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the present invention may realize all or part of the flow of the above embodiment methods by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor it can realize the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, a computer-readable medium does not include electric carrier signals or telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, device and method may be realized in other ways. For example, the device embodiments described above are merely schematic; the division into modules is only a division by logical function, and other divisions are possible in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units: they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the scheme of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist separately and physically, or two or more units may be integrated in one unit. The above integrated unit may be realized in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to a person skilled in the art that the invention is not restricted to the details of the above exemplary embodiments and that, without departing from the spirit or essential attributes of the invention, the present invention can be realized in other specific forms. Therefore, from whichever point of view, the embodiments should all be regarded as exemplary and non-restrictive; the scope of the present invention is limited by the appended claims rather than by the above description, and it is intended that all changes falling within the meaning and scope of equivalency of the claims be included in the present invention. No reference sign in a claim should be regarded as limiting the claim concerned. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple modules or devices stated in a system claim may also be realized by one module or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any specific order.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to restrict, the technical scheme of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical scheme of the present invention may be modified or equivalently substituted without departing from the spirit and scope of the technical scheme of the present invention.

Claims (10)

  1. A method for extracting face features, characterized in that the method includes:
    obtaining a first face model pattern from a picture containing a face;
    transforming the first face model pattern into a second face model pattern of a preset scale;
    calculating the HOG information of each preset feature point in the second face model pattern;
    concatenating the HOG information of each preset feature point into complete HOG information;
    extracting the face features in the picture according to the complete HOG information.
  2. The method as claimed in claim 1, characterized in that calculating the HOG information of each preset feature point in the second face model image comprises:
    establishing a rectangular coordinate system for the second face model image, with the upper-left corner of the second face model image as the coordinate origin, the direction extending rightward along the horizontal line through the origin as the positive X-axis, and the direction extending downward along the vertical line through the origin as the positive Y-axis; wherein each preset feature point in the second face model image corresponds to one coordinate point of the rectangular coordinate system;
    calculating the gradient information of each pixel according to the pixel value of each pixel in the second face model image, the gradient information comprising a gradient magnitude and a gradient direction;
    obtaining, centered on the coordinates of the preset feature point, the gradient information of the pixels contained in a block of size N*N; wherein the number of pixels contained in the N*N block is N*N, and N is a positive integer greater than 1;
    obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block.
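The per-pixel gradient step of claim 2 can be illustrated with central differences. The claim does not fix a particular difference operator, so the kernel below is an assumption made for the sketch:

```python
import numpy as np

def pixel_gradients(img):
    """Per-pixel gradient magnitude and direction (degrees in [0, 360))."""
    img = img.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Central differences along x (columns) and y (rows); borders stay zero.
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 360.0
    return magnitude, direction
```

The image coordinate convention matches the claim: x grows rightward along columns, y grows downward along rows, with the origin at the upper-left corner.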
  3. The method as claimed in claim 2, characterized in that, after obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block, the method further comprises:
    normalizing the HOG information of the preset feature point to obtain normalized HOG information;
    performing dimension-increasing processing on the normalized HOG information to obtain dimension-increased HOG information.
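Claim 3 does not specify which norm or which dimension-increasing mapping is used. The sketch below assumes L2 normalization (a common choice for HOG descriptors) and, purely for illustration, a quadratic lift that appends pairwise products:

```python
import numpy as np

def normalize_hog(hog, eps=1e-6):
    # L2 normalization; eps guards against an all-zero descriptor.
    return hog / (np.linalg.norm(hog) + eps)

def increase_dimension(hog):
    # One possible dimension lift: append the upper triangle of the
    # outer product, turning a d-dim vector into d + d*(d+1)/2 dims.
    pairs = np.outer(hog, hog)[np.triu_indices(len(hog))]
    return np.concatenate([hog, pairs])
```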
  4. The method as claimed in claim 2, characterized in that obtaining the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block comprises:
    quantizing the gradient directions contained in the N*N block using a preset angle θ as the quantization step, to obtain quantized gradient directions;
    dividing the N*N block evenly into 4 cell units along its two perpendicular bisectors, and traversing the pixels in the N*N block to compute the projected magnitude value of each pixel's gradient magnitude on each cell unit;
    accumulating, according to the quantized gradient directions, the projected magnitude values on the same gradient direction, to form HOG information of 4*(360/θ) dimensions; wherein the gradient direction of a pixel's projected magnitude value is the same as the gradient direction of the pixel.
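A minimal sketch of claim 4, assuming θ = 45° (giving 4 × 360/45 = 32 dimensions) and hard assignment of each pixel's magnitude to the cell that contains it; the claim's "projection" of magnitudes onto cells may involve interpolation across cells, which this sketch omits:

```python
import numpy as np

def block_hog(magnitude, direction, theta=45.0):
    """HOG of one N*N block: 4 cells x (360/theta) orientation bins."""
    n = magnitude.shape[0]
    n_bins = int(360.0 / theta)
    hist = np.zeros((4, n_bins))
    half = n // 2
    for y in range(n):
        for x in range(n):
            # Quantize the gradient direction with step theta.
            b = int(direction[y, x] // theta) % n_bins
            # The two perpendicular bisectors split the block into 4 cells.
            cell = (y >= half) * 2 + (x >= half)
            # Accumulate magnitudes sharing the same quantized direction.
            hist[cell, b] += magnitude[y, x]
    return hist.ravel()  # 4 * (360/theta) dimensions
```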
  5. A device for extracting facial features, characterized in that the device comprises:
    an acquisition module, configured to obtain a first face model image sample from a picture containing a face;
    a scale transformation module, configured to transform the first face model image sample into a second face model image of a preset scale;
    a calculation module, configured to calculate the HOG information of each preset feature point in the second face model image;
    a concatenation module, configured to concatenate the HOG information of each preset feature point to form complete HOG information;
    an extraction module, configured to extract the facial features in the picture according to the complete HOG information.
  6. The device as claimed in claim 5, characterized in that the calculation module comprises:
    a coordinate system establishment unit, configured to establish a rectangular coordinate system for the second face model image, with the upper-left corner of the second face model image as the coordinate origin, the direction extending rightward along the horizontal line through the origin as the positive X-axis, and the direction extending downward along the vertical line through the origin as the positive Y-axis; wherein each preset feature point in the second face model image corresponds to one coordinate point of the rectangular coordinate system;
    a calculation unit, configured to calculate the gradient information of each pixel according to the pixel value of each pixel in the second face model image, the gradient information comprising a gradient magnitude and a gradient direction;
    a gradient information acquisition unit, configured to obtain, centered on the coordinates of the preset feature point, the gradient information of the pixels contained in a block of size N*N; wherein the number of pixels contained in the N*N block is N*N, and N is a positive integer greater than 1;
    a HOG information acquisition unit, configured to obtain the HOG information of the preset feature point according to the gradient information of the pixels contained in the N*N block.
  7. The device as claimed in claim 6, characterized in that the calculation module further comprises:
    a normalization unit, configured to normalize the HOG information of the preset feature point to obtain normalized HOG information;
    a dimension-increasing unit, configured to perform dimension-increasing processing on the normalized HOG information to obtain dimension-increased HOG information.
  8. The device as claimed in claim 6, characterized in that the HOG information acquisition unit comprises:
    a gradient direction quantization subunit, configured to quantize the gradient directions contained in the N*N block using a preset angle θ as the quantization step, to obtain quantized gradient directions;
    a projected magnitude calculation subunit, configured to divide the N*N block evenly into 4 cell units along its two perpendicular bisectors, and traverse the pixels in the block to compute the projected magnitude value of each pixel's gradient magnitude on each cell unit;
    a projected magnitude accumulation subunit, configured to accumulate, according to the quantized gradient directions, the projected magnitude values on the same gradient direction, to form HOG information of 4*(360/θ) dimensions; wherein the gradient direction of a pixel's projected magnitude value is the same as the gradient direction of the pixel.
  9. An electronic device, characterized in that the electronic device comprises a memory and a processor, wherein the processor is configured to implement the method of any one of claims 1-4 when executing a computer program stored in the memory.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1-4.
CN201710792163.1A 2017-09-05 2017-09-05 Method, apparatus and electronic device for extracting facial features Active CN107491768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710792163.1A CN107491768B (en) 2017-09-05 2017-09-05 Method, apparatus and electronic device for extracting facial features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710792163.1A CN107491768B (en) 2017-09-05 2017-09-05 Method, apparatus and electronic device for extracting facial features

Publications (2)

Publication Number Publication Date
CN107491768A true CN107491768A (en) 2017-12-19
CN107491768B CN107491768B (en) 2018-09-21

Family

ID=60652153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710792163.1A Active CN107491768B (en) Method, apparatus and electronic device for extracting facial features

Country Status (1)

Country Link
CN (1) CN107491768B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463186A (en) * 2013-09-16 2015-03-25 深圳市迈瑞思智能技术有限公司 Target feature detection method and device
US20160275339A1 (en) * 2014-01-13 2016-09-22 Carnegie Mellon University System and Method for Detecting and Tracking Facial Features In Images
CN106980809A (en) * 2016-01-19 2017-07-25 深圳市朗驰欣创科技股份有限公司 A facial feature point detection method based on ASM
CN106991356A (en) * 2016-01-20 2017-07-28 上海慧体网络科技有限公司 An algorithm for tracking players in ball game videos
CN107066958A (en) * 2017-03-29 2017-08-18 Nanjing University of Posts and Telecommunications A face recognition method based on HOG features and SVM multi-classifiers


Also Published As

Publication number Publication date
CN107491768B (en) 2018-09-21

Similar Documents

Publication Publication Date Title
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
US10713532B2 (en) Image recognition method and apparatus
CN114913565B (en) Face image detection method, model training method, device and storage medium
CN111241989B (en) Image recognition method and device and electronic equipment
WO2016023264A1 (en) Fingerprint identification method and fingerprint identification device
CN102682428B (en) Fingerprint image computer automatic mending method based on direction fields
CN106709404A (en) Image processing device and image processing method
CN110082135A Equipment fault recognition method, device and terminal device
CN114187633A (en) Image processing method and device, and training method and device of image generation model
CN111898538A (en) Certificate authentication method and device, electronic equipment and storage medium
CN113591566A (en) Training method and device of image recognition model, electronic equipment and storage medium
CN111680544B (en) Face recognition method, device, system, equipment and medium
CN113177432A (en) Head pose estimation method, system, device and medium based on multi-scale lightweight network
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
CN114037838A (en) Neural network training method, electronic device and computer program product
CN113255561B (en) Hair information identification method, device, equipment and storage medium
CN113705469A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN109815772A Fingerprint enhancement and recognition method and device, and fingerprint enhancement and recognition system
CN107491768B (en) Method, apparatus and electronic device for extracting facial features
CN113887408B (en) Method, device, equipment and storage medium for detecting activated face video
CN116052175A (en) Text detection method, electronic device, storage medium and computer program product
CN111612712B (en) Face correction degree determination method, device, equipment and medium
CN112541436B (en) Concentration analysis method and device, electronic equipment and computer storage medium
CN115311518A (en) Method, device, medium and electronic equipment for acquiring visual attribute information
CN104050457A (en) Human face gender identification method based on small sample training library

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant