CN105469081B - Face keypoint localization method and system for facial beautification - Google Patents

Face keypoint localization method and system for facial beautification

Info

Publication number
CN105469081B
CN105469081B · CN201610024890.9A · CN201610024890A
Authority
CN
China
Prior art keywords
face
input picture
gray scale
unit
original shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610024890.9A
Other languages
Chinese (zh)
Other versions
CN105469081A (en)
Inventor
陈丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pinguo Technology Co Ltd
Original Assignee
Chengdu Pinguo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pinguo Technology Co Ltd
Priority to CN201610024890.9A
Publication of CN105469081A
Application granted
Publication of CN105469081B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; face representation
    • G06V 40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face keypoint localization method and system for facial beautification. The image model training comprises the steps of: obtaining sample images on a server and converting them to gray-scale sample images; detecting the faces in the gray-scale sample images and building a feature point model from the statistics of the facial feature point coordinates; extracting an initial shape group using random shapes with added random perturbations; iteratively training an image model with random forests and linear regression; and storing the feature point model and the image model as the training model. The face keypoint extraction comprises the steps of: loading the training model; obtaining an input image and converting it to a gray-scale input image; detecting the face in the gray-scale input image; extracting an initial shape group using random shapes with added random perturbations; iterating the initial shape group through the training model to obtain a new shape group; and taking the median of the new shape group as the face keypoints. The invention depends only weakly on the shape initialization and locates face keypoints with high accuracy.

Description

Face keypoint localization method and system for facial beautification
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a face keypoint localization method and system for facial beautification.
Background technique
In recent years, with improving mobile phone camera resolution, the number of people taking selfies has grown steadily and the demands on photo processing have diversified; personalized beautification of selfies has therefore become a vibrant research field. In personalized beautification, the localization of face keypoints is a crucial step, and its accuracy determines the quality of the beautified face in the image.
Many face keypoint localization algorithms already exist for image beautification, such as the early AAM and ASM algorithms and their variants, and, more recently, regression-based face alignment methods.
However, the algorithms in current use depend too heavily on the initial face shape in the image, which often traps the keypoint localization in a local optimum. The final localization is then unsatisfactory and the keypoint accuracy low, the beautification quality degrades, and further manual adjustment is needed, which is inconvenient for the user.
Summary of the invention
To solve the above problems, the invention proposes a face keypoint localization method and system for facial beautification; the method depends only weakly on the shape initialization and locates face keypoints with high accuracy.
To achieve this, the technical solution adopted by the invention is a face keypoint localization method for facial beautification, comprising an image model training method and a face keypoint extraction method.
The image model training method comprises the steps of:
(1) obtaining sample images on a server;
(2) converting the sample images to gray-scale sample images;
(3) detecting the face information in the gray-scale sample images and recording the face rectangle positions, while detecting and reading the feature point information corresponding to each face to form the ground-truth shapes, and building a feature point model from the statistics of the facial feature point coordinates;
(4) extracting the initial shape group of the faces in the gray-scale sample images by using random shapes with added random perturbations, based on the face information obtained in step (3);
(5) iteratively training an image model from the ground-truth shapes and the initial shape group, using random forests and linear regression;
(6) storing the feature point model and the image model as the training model.
The face keypoint extraction method comprises the steps of:
(7) loading the training model into a mobile client;
(8) obtaining an input image with the mobile client;
(9) converting the input image to a gray-scale input image;
(10) detecting the face information in the gray-scale input image and recording the face rectangle position;
(11) extracting the initial shape group of the face in the gray-scale input image by using random shapes with added random perturbations, based on the face information obtained in step (10);
(12) iterating the initial shape group of the face in the gray-scale input image through the training model to obtain a new shape group;
(13) taking the median of the new shape group as the final shape; the final shape gives the face keypoints.
Further, the AdaBoost algorithm is used to detect the face information in the gray-scale sample images in step (3), and to detect the face information in the gray-scale input image in step (10).
Further, the extraction of the initial shape group of the face in steps (4) and (11) comprises the steps of: randomly selecting several initial shapes; applying random offsets to the center and scale of the face rectangle to obtain a new face rectangle; and then scaling and offsetting the initial shapes so that they fall inside the new face rectangle, thereby obtaining the initial shape group.
Further, the iterative training of the image model in step (5) comprises the steps of:
(5.1) computing the shape residual of each gray-scale sample image from the ground-truth shape and the initial shape group;
(5.2) training a random forest with the objective of minimizing the residual impurity at the leaf nodes;
(5.3) extracting the pixel-difference features of the gray-scale sample images with the random forest;
(5.4) computing, by linear regression, the mapping matrix from the pixel-difference features to the shape residuals;
(5.5) computing the shape increment of each gray-scale sample image from the pixel-difference features and the mapping matrix, and updating the initial shape group with the shape increments;
(5.6) computing the spread of the initial shape group: if it exceeds a preset threshold, returning to step (4); otherwise returning to step (5.1) to continue iterating until the preset iteration count is reached, then outputting the final random forests and mapping matrices as the image model.
Further, the iterative computation of the new shape group in step (12) comprises the steps of:
(12.1) extracting pixel-difference features from the initial shape group of the face in the input image with the random forests in the training model;
(12.2) computing the shape increments from the pixel-difference features and the mapping matrices in the training model, and updating the initial shape group of the face in the input image with the shape increments to obtain a new shape group;
(12.3) computing the spread of the new shape group: if it exceeds a set threshold, returning to step (11); otherwise returning to step (12.1) to continue iterating until the preset iteration count is reached, then outputting the final new shape group.
Further, the preset iteration count is 5 to 9; the preferred value is 7.
In another aspect, the invention also provides a face keypoint localization system for facial beautification, comprising a server and a mobile client that communicate wirelessly. The server comprises a model training module for forming a training model from sample images by training; the mobile client comprises a face keypoint extraction module for extracting the face keypoints of an input image using the model from the model training module.
Further, the model training module comprises a sample acquisition unit, an image conversion unit, a face detection and reading unit, an initial shape group acquisition unit, a model training unit and a model aggregation unit. The sample acquisition unit obtains the sample images, which the image conversion unit converts to gray-scale sample images; the face detection and reading unit fetches the gray-scale sample images from the image conversion unit, detects the face information to obtain the ground-truth shapes, and builds the feature point model from the statistics of the facial feature point coordinates; the initial shape group acquisition unit obtains the face information from the face detection and reading unit and fetches the gray-scale sample images from the image conversion unit to extract the initial shape group of the faces in the gray-scale sample images; the model training unit fetches the ground-truth shapes from the face detection and reading unit and the initial shape group from the initial shape group acquisition unit to train the image model; and the model aggregation unit combines the feature point model of the face detection and reading unit with the image model of the model training unit to form the training model.
Further, the face keypoint extraction module comprises a model acquisition unit, an image acquisition unit, an input image conversion unit, an input image face detection unit, an input image initial shape group acquisition unit and a keypoint extraction unit. The model acquisition unit connects to the server; the image acquisition unit obtains the input image on the mobile client; the input image conversion unit fetches the input image from the image acquisition unit and converts it to a gray-scale input image; the input image face detection unit fetches the gray-scale input image from the input image conversion unit and detects the face information in it; the input image initial shape group acquisition unit obtains the face information from the input image face detection unit and fetches the gray-scale input image from the input image conversion unit to extract the initial shape group of the face in the gray-scale input image; and the keypoint extraction unit computes the face keypoints from the training model in the model acquisition unit and the initial shape group of the face in the gray-scale input image in the input image initial shape group acquisition unit.
Further, the mobile client is a mobile phone, a tablet computer or a digital camera.
The technical solution provides the following benefits:
(1) The initial shape group is obtained using random shapes with added random perturbations, making the initial shapes more diverse and reducing the dependence of face keypoint localization on the shape initialization.
(2) The random forests are trained with the objective of minimizing the residual impurity at the leaf nodes, so that the extracted image features correlate more strongly with the shape residuals and better refine the current shapes, improving keypoint localization accuracy.
(3) Linear regression is performed over several initial shape groups, and the regression progress of the groups is monitored throughout; when the results diverge too far, the iteration is abandoned and restarted, reducing the error caused by falling into a local optimum.
(4) The proposed face keypoint localization system for facial beautification supports the realization and application of the proposed method.
Description of the drawings
Fig. 1 is a flowchart of the face keypoint localization method for facial beautification of the invention;
Fig. 2 is a flowchart of the image model training method in the embodiment of the invention;
Fig. 3 is a flowchart of the face keypoint extraction method in the embodiment of the invention;
Fig. 4 is a structural diagram of the model training module in the embodiment of the invention;
Fig. 5 is a structural diagram of the face keypoint extraction module in the embodiment of the invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further explained below with reference to the accompanying drawings.
In this embodiment, referring to Fig. 1, the invention proposes a face keypoint localization method for facial beautification, comprising an image model training method and a face keypoint extraction method; the image model training method comprises steps (1)-(6), as shown in Fig. 2; the face keypoint extraction method comprises steps (7)-(13), as shown in Fig. 3.
The implementation process is as follows:
(1) Obtain the sample images on a server.
(2) Convert the sample images to gray-scale sample images with the following formula:
Gray = 0.299 × R + 0.587 × G + 0.114 × B
where R, G and B are the values of the three color channels of the sample image and Gray is the gray-scale value of the sample image.
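As a minimal illustration of the conversion formula above (using NumPy; the function name and array layout are assumptions for illustration, not from the patent):

```python
import numpy as np

def to_gray(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 image to gray scale with the weights
    0.299, 0.587 and 0.114 given in the description."""
    r = image_rgb[..., 0]
    g = image_rgb[..., 1]
    b = image_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A pure-white pixel maps to full intensity, since the weights sum to 1,
# and a pure-red pixel maps to 0.299 * 255 = 76.245.
white = np.full((1, 1, 3), 255.0)
red = np.array([[[255.0, 0.0, 0.0]]])
assert np.isclose(to_gray(white)[0, 0], 255.0)
assert np.isclose(to_gray(red)[0, 0], 76.245)
```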
(3) Detect the face information in the gray-scale sample images and record the face rectangle positions; at the same time, detect and read the feature point information corresponding to each face to form the ground-truth shapes, and build the feature point model from the statistics of the facial feature point coordinates.
The face information in the gray-scale sample images is detected with the AdaBoost algorithm, yielding the face rectangles and the corresponding facial feature points; AdaBoost is an existing algorithm for detecting facial feature points.
In a specific implementation, the face rectangles {Rect_i} are recorded while the feature point information corresponding to each face is read; this feature point information forms the ground-truth shapes {TS_i}. The feature point model {DS_k} is built from the statistics of the facial feature point coordinates, where i is the sample image index.
(4) Extract the initial shape group of the faces in the gray-scale sample images by using random shapes with added random perturbations, based on the face information obtained in step (3).
The initial shape group of a face is extracted as follows: randomly select several initial shapes; apply random offsets to the center and scale of the face rectangle {Rect_i} to obtain a new face rectangle; then scale and offset the initial shapes so that they fall inside the new face rectangle, yielding the initial shape group.
In a specific implementation, several initial shapes {S_ij} are randomly selected for each sample image, where i is the sample image index and j the initial shape index; random offsets are applied to the center and scale of the detected face rectangle, giving a new face rectangle {Rect_new_i}; then {S_ij} is scaled and offset so that it falls exactly inside {Rect_new_i}, giving the initial shape group {S_new_ij}. The scaling and offset equation is:
S_new_ij = S_ij × scale + offset
where scale is the scaling factor and offset is the offset.
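The initial-shape extraction described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: landmark shapes are (K, 2) arrays normalized to the unit square, the face rectangle is (x, y, w, h), and the perturbation ranges and all names are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_initial_shapes(mean_shapes, face_rect, n_shapes=5,
                        max_center_shift=0.05, max_scale_shift=0.1):
    """Pick random base shapes and place each one into a randomly
    perturbed copy of the detected face rectangle (steps (4)/(11))."""
    x, y, w, h = face_rect
    group = []
    for _ in range(n_shapes):
        base = mean_shapes[rng.integers(len(mean_shapes))]
        # Random offset applied to the rectangle's scale and center.
        scale = 1.0 + rng.uniform(-max_scale_shift, max_scale_shift)
        dx = rng.uniform(-max_center_shift, max_center_shift) * w
        dy = rng.uniform(-max_center_shift, max_center_shift) * h
        # S_new = S * scale + offset: map the unit-square shape into
        # the new (perturbed) face rectangle.
        shape = base * np.array([w, h]) * scale + np.array([x + dx, y + dy])
        group.append(shape)
    return group

# One base shape with all 68 landmarks at the rectangle center.
shapes = make_initial_shapes([np.full((68, 2), 0.5)], (10, 20, 100, 100))
```

Each of the five shapes lands near the center of the detected box but with a slightly different placement, which is the diversity the method relies on.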
(5) Iteratively train the image model from the ground-truth shapes and the initial shape group, using random forests and linear regression.
(5.1) Compute the shape residual of each gray-scale sample image from the ground-truth shape and the initial shape group;
(5.2) train a random forest with the objective of minimizing the residual impurity at the leaf nodes;
(5.3) extract the pixel-difference features of the gray-scale sample images with the random forest;
(5.4) compute, by linear regression, the mapping matrix from the pixel-difference features to the shape residuals;
(5.5) compute the shape increment of each gray-scale sample image from the pixel-difference features and the mapping matrix, and update the initial shape group with the shape increments;
(5.6) compute the spread of the initial shape group: if it exceeds a preset threshold, return to step (4); otherwise return to step (5.1) to continue iterating until the preset iteration count is reached, then output the final random forests and mapping matrices as the image model.
Specific implementation process:
for t = 0 to 7:
    compute the shape residual {RS_ij} of each sample from {TS_i} and {S_new_ij};
    train a random forest RF_t with the objective of minimizing the residual impurity at the leaf nodes;
    extract the pixel-difference features {Φ_ij} of each sample with the random forest;
    compute, by linear regression, the mapping {W_t} from {Φ_ij} to {RS_ij};
    compute the shape increment of each sample from {Φ_ij} and {W_t}, and update {S_new_ij};
    compute the spread of {S_new_ij}; if it exceeds a threshold, return to step (4);
end for
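The training loop above can be sketched as follows. One deliberate simplification: the forest-based pixel-difference feature extraction (steps corresponding to RF_t and {Φ_ij}) is replaced here by fixed, precomputed per-sample feature vectors, so only the residual-and-linear-regression half of each stage is shown; all names are illustrative, not from the patent.

```python
import numpy as np

def train_cascade(true_shapes, init_shapes, features, n_stages=7):
    """Simplified sketch of the per-stage training loop: compute shape
    residuals, fit a linear mapping W_t from features to residuals, and
    apply the predicted shape increments."""
    shapes = [s.copy() for s in init_shapes]
    X = np.stack(features)                      # (N, F) feature matrix
    mappings = []
    for t in range(n_stages):
        # Shape residual between ground truth and current estimate.
        R = np.stack([ts - s for ts, s in zip(true_shapes, shapes)])
        R = R.reshape(len(shapes), -1)          # (N, 2K)
        # Linear regression: mapping W_t from features to residuals.
        W, *_ = np.linalg.lstsq(X, R, rcond=None)
        mappings.append(W)
        # Shape increment predicted by W_t, applied to every sample.
        delta = (X @ W).reshape(len(shapes), -1, 2)
        shapes = [s + d for s, d in zip(shapes, delta)]
    return mappings, shapes

# Toy data: 4 samples, 3 landmarks each, 4-dimensional features.
rng = np.random.default_rng(1)
true = [rng.normal(size=(3, 2)) for _ in range(4)]
init = [np.zeros((3, 2)) for _ in range(4)]
feats = [rng.normal(size=4) for _ in range(4)]
mappings, fitted = train_cascade(true, init, feats)
assert all(np.allclose(f, t) for f, t in zip(fitted, true))
```

In this toy setup the feature matrix is square and invertible, so a single linear stage already fits the residuals exactly; in the patented method the features change at every stage because they are re-extracted by a freshly trained forest at the updated shapes.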
(6) Store the feature point model and the image model as the training model.
{DS_k}, {RF_t} and {W_t} are stored as the training model, completing the training.
(7) Load the training model into the mobile client.
The pre-trained training model {DS_k}, {RF_t} and {W_t} is loaded.
(8) Obtain the input image with the mobile client.
(9) Convert the input image to a gray-scale input image with the following formula:
Gray = 0.299 × R + 0.587 × G + 0.114 × B
where R, G and B are the values of the three color channels of the input image and Gray is the gray-scale value of the input image.
(10) Detect the face information in the gray-scale input image and record the face rectangle position.
The face information in the gray-scale input image is detected with the AdaBoost algorithm, yielding the face rectangles {Rect1_i}.
In the implementation, only faces larger than 200 × 200 pixels need be considered: given the resolution of phone selfies, a face smaller than 200 × 200 is too small and is ignored.
(11) Extract the initial shape group of the face in the gray-scale input image by using random shapes with added random perturbations, based on the face information obtained in step (10).
The initial shape group of the face is extracted as follows: randomly select several initial shapes; apply random offsets to the center and scale of the face rectangle to obtain a new face rectangle; then scale and offset the initial shapes so that they fall inside the new face rectangle, yielding the initial shape group.
In a specific implementation, several initial shapes {S1_ij} are randomly selected for the input image; random offsets are applied to the center and scale of the detected face rectangle {Rect1_i}, giving a new face rectangle {Rect1_new_i}; then {S1_ij} is scaled and offset so that it falls exactly inside {Rect1_new_i}, giving the initial shape group {S1_new_ij}. The scaling and offset equation is:
S1_new_ij = S1_ij × scale + offset
where scale is the scaling factor and offset is the offset.
(12) Iterate the initial shape group of the face in the gray-scale input image through the training model to obtain a new shape group.
The iterative computation of the new shape group comprises the steps of:
(12.1) extracting pixel-difference features from the initial shape group of the face in the input image with the random forests in the training model;
(12.2) computing the shape increments from the pixel-difference features and the mapping matrices in the training model, and updating the initial shape group of the face in the input image with the shape increments to obtain a new shape group;
(12.3) computing the spread of the new shape group: if it exceeds a set threshold, returning to step (11); otherwise returning to step (12.1) to continue iterating until the preset iteration count is reached, then outputting the final new shape group.
Specific implementation process:
for t = 0 to 7:
    extract the pixel-difference features from {S1_new_ij} and {RF_t};
    compute the shape increment from the pixel-difference features and {W_t}, and update {S1_new_ij};
    compute the spread of {S1_new_ij}; if it exceeds a threshold, return to step (11);
end for
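The inference loop above, applied to a single initial shape, can be sketched as follows. Here `feature_fn` stands in for the forest-based pixel-difference feature extractor and the stage mappings {W_t} are passed in as plain arrays; all names are illustrative, not from the patent.

```python
import numpy as np

def apply_cascade(init_shape, feature_fn, mappings):
    """Run the stored per-stage mappings W_t on one initial shape:
    extract features at the current shape, predict the shape increment,
    and update the shape, once per stage."""
    shape = init_shape.copy()
    for W in mappings:
        phi = feature_fn(shape)                 # (F,) feature vector
        delta = (phi @ W).reshape(shape.shape)  # predicted increment
        shape = shape + delta
    return shape

# Toy check: a constant feature and a mapping that moves the single
# landmark by (1, 2) per stage; two stages move it by (2, 4) in total.
step = np.array([[1.0, 2.0]])
out = apply_cascade(np.zeros((1, 2)), lambda s: np.ones(1), [step, step])
assert np.allclose(out, [[2.0, 4.0]])
```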
(13) Take the median of the new shape group as the final shape; the final shape gives the face keypoints.
The preset iteration count t is 5 to 9.
Experiments verify that the preferred value of the preset iteration count t is 7.
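Step (13), taking the median over the shape group, can be sketched as a per-coordinate median (an assumption about the exact form of the median; names are illustrative):

```python
import numpy as np

def final_shape(shape_group):
    """Per-coordinate median over the shape group: robust against
    individual runs that drifted into a poor local optimum."""
    return np.median(np.stack(shape_group), axis=0)

group = [np.array([[0.0, 0.0]]),
         np.array([[1.0, 1.0]]),
         np.array([[9.0, 9.0]])]   # one run drifted badly
print(final_shape(group))          # [[1. 1.]] - the outlier has no effect
```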
To support the realization of the method, and based on the same inventive concept, the invention also provides a face keypoint localization system for facial beautification, comprising a server and a mobile client that communicate wirelessly. The server comprises a model training module for forming a training model from sample images by training; the mobile client comprises a face keypoint extraction module for extracting the face keypoints of an input image using the model from the model training module.
Referring to Fig. 4, the model training module comprises a sample acquisition unit, an image conversion unit, a face detection and reading unit, an initial shape group acquisition unit, a model training unit and a model aggregation unit. The sample acquisition unit obtains the sample images, which the image conversion unit converts to gray-scale sample images; the face detection and reading unit fetches the gray-scale sample images from the image conversion unit, detects the face information to obtain the ground-truth shapes, and builds the feature point model from the statistics of the facial feature point coordinates; the initial shape group acquisition unit obtains the face information from the face detection and reading unit and fetches the gray-scale sample images from the image conversion unit to extract the initial shape group of the faces in the gray-scale sample images; the model training unit fetches the ground-truth shapes from the face detection and reading unit and the initial shape group from the initial shape group acquisition unit to train the image model; and the model aggregation unit combines the feature point model of the face detection and reading unit with the image model of the model training unit to form the training model.
Referring to Fig. 5, the face keypoint extraction module comprises a model acquisition unit, an image acquisition unit, an input image conversion unit, an input image face detection unit, an input image initial shape group acquisition unit and a keypoint extraction unit. The model acquisition unit connects to the server; the image acquisition unit obtains the input image on the mobile client; the input image conversion unit fetches the input image from the image acquisition unit and converts it to a gray-scale input image; the input image face detection unit fetches the gray-scale input image from the input image conversion unit and detects the face information in it; the input image initial shape group acquisition unit obtains the face information from the input image face detection unit and fetches the gray-scale input image from the input image conversion unit to extract the initial shape group of the face in the gray-scale input image; and the keypoint extraction unit computes the face keypoints from the training model in the model acquisition unit and the initial shape group of the face in the gray-scale input image in the input image initial shape group acquisition unit.
The mobile client is a mobile phone, a tablet computer or a digital camera.
The foregoing shows and describes the basic principles, main features and advantages of the present invention. Those skilled in the art will appreciate that the invention is not limited to the above embodiments; the embodiments and the description merely illustrate the principle of the invention, and various changes and improvements can be made without departing from the spirit and scope of the invention, all of which fall within the claimed scope of protection. The claimed scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A face keypoint localization method for facial beautification, characterized by comprising an image model training method and a face keypoint extraction method;
the image model training method comprising the steps of:
(1) obtaining sample images on a server;
(2) converting the sample images to gray-scale sample images;
(3) detecting the face information in the gray-scale sample images and recording the face rectangle positions, while detecting and reading the feature point information corresponding to each face to form the ground-truth shapes, and building a feature point model from the statistics of the facial feature point coordinates;
(4) extracting the initial shape group of the faces in the gray-scale sample images by using random shapes with added random perturbations, based on the face information obtained in step (3);
(5) iteratively training an image model from the ground-truth shapes and the initial shape group, using random forests and linear regression;
step (5) specifically comprising the following steps:
(5.1) computing the shape residual of each gray-scale sample image from the ground-truth shape and the initial shape group;
(5.2) training a random forest with the objective of minimizing the residual impurity at the leaf nodes;
(5.3) extracting the pixel-difference features of the gray-scale sample images with the random forest;
(5.4) computing, by linear regression, the mapping matrix from the pixel-difference features to the shape residuals;
(5.5) computing the shape increment of each gray-scale sample image from the pixel-difference features and the mapping matrix, and updating the initial shape group with the shape increments;
(5.6) computing the spread of the initial shape group: if it exceeds a preset threshold, returning to step (4); otherwise returning to step (5.1) to continue iterating until the preset iteration count is reached, then outputting the final random forests and mapping matrices as the image model;
(6) storing the feature point model and the image model as the training model;
the face keypoint extraction method comprising the steps of:
(7) loading the training model into a mobile client;
(8) obtaining an input image with the mobile client;
(9) converting the input image to a gray-scale input image;
(10) detecting the face information in the gray-scale input image and recording the face rectangle position;
(11) extracting the initial shape group of the face in the gray-scale input image by using random shapes with added random perturbations, based on the face information obtained in step (10);
(12) iterating the initial shape group of the face in the gray-scale input image through the training model to obtain a new shape group;
(13) taking the median of the new shape group as the final shape, the final shape giving the face keypoints.
2. The face keypoint localization method for facial beautification according to claim 1, characterized in that the AdaBoost algorithm is used to detect the face information in the gray-scale sample images in step (3) and to detect the face information in the gray-scale input image in step (10).
3. The face keypoint localization method for facial beautification according to claim 1, characterized in that the extraction of the initial shape group of the face in steps (4) and (11) comprises the steps of: randomly selecting several initial shapes; applying random offsets to the center and scale of the face rectangle to obtain a new face rectangle; and then scaling and offsetting the initial shapes so that they fall inside the new face rectangle, thereby obtaining the initial shape group.
4. The face keypoint localization method for facial beautification according to claim 1, characterized in that the iterative computation of the new shape group in step (12) comprises the steps of:
(12.1) extracting pixel-difference features from the initial shape group of the face in the input image with the random forests in the training model;
(12.2) computing the shape increments from the pixel-difference features and the mapping matrices in the training model, and updating the initial shape group of the face in the input image with the shape increments to obtain a new shape group;
(12.3) computing the spread of the new shape group: if it exceeds a set threshold, returning to step (11); otherwise returning to step (12.1) to continue iterating until the preset iteration count is reached, then outputting the final new shape group.
5. The face key point localization method for face beautification according to claim 4, wherein the preset number of iterations is 5 to 9.
6. A face key point localization system for face beautification implementing the method of any one of claims 1-5, comprising a server and a mobile client that communicate wirelessly; the server comprises a model training module for forming a training model from sample images through training; the mobile client comprises a face key point extraction module for extracting the face key points of an input image using the training model from the model training module.
7. The face key point localization system for face beautification according to claim 6, wherein the model training module comprises a sample acquisition unit, an image conversion unit, a face detection and reading unit, an initial shape group acquiring unit, a model training unit and a model aggregation unit; the sample acquisition unit acquires sample images, which the image conversion unit converts into gray-scale sample images; the face detection and reading unit retrieves the gray-scale sample images from the image conversion unit, performs face detection on them to obtain the true shapes, and builds a feature point model from statistics of the face feature point coordinates; the initial shape group acquiring unit obtains the face information from the face detection and reading unit and retrieves the gray-scale sample images from the image conversion unit to extract the initial shape group of the face in each gray-scale sample image; the model training unit retrieves the true shapes from the face detection and reading unit and the initial shape groups from the initial shape group acquiring unit to train an image model; and the model aggregation unit combines the feature point model in the face detection and reading unit with the image model in the model training unit to form the training model.
8. The face key point localization system for face beautification according to claim 6, wherein the face key point extraction module comprises a model acquiring unit, an image acquisition unit, an input image converting unit, an input image face detection unit, an input image initial shape group acquiring unit and a key point extraction unit; the model acquiring unit connects to the server; the image acquisition unit acquires the input image via the mobile client; the input image converting unit retrieves the input image from the image acquisition unit and converts it into a gray-scale input image; the input image face detection unit retrieves the gray-scale input image from the input image converting unit and detects the face information in the gray-scale input image; the input image initial shape group acquiring unit obtains the face information from the input image face detection unit and retrieves the gray-scale input image from the input image converting unit to extract the initial shape group of the face in the gray-scale input image; and the key point extraction unit computes the face key points from the training model in the model acquiring unit and the initial shape group of the face in the gray-scale input image in the input image initial shape group acquiring unit.
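The data flow through the client-side units of claim 8 can be sketched as a single function. The detector, shape generator and regression are passed in as callables, since the claim specifies the wiring between units, not their internals; the BT.601 gray-scale weights are an illustrative assumption:

```python
import statistics

def locate_keypoints(rgb_image, training_model,
                     detect_face, make_initial_shapes, iterate_shapes):
    """End-to-end flow of the key point extraction module of claim 8.

    detect_face, make_initial_shapes and iterate_shapes stand in for the
    Adaboost detector, the initial-shape generator and the cascaded
    regression of claims 2-4; this sketch does not reproduce the
    patented implementations.
    """
    # Input image converting unit: RGB -> gray scale (BT.601 luma weights,
    # an assumption -- the patent only requires a gray-scale image).
    gray = [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
    # Input image face detection unit: face information from the gray image.
    face_box = detect_face(gray)
    # Input image initial shape group acquiring unit.
    shape_group = make_initial_shapes(gray, face_box)
    # Key point extraction unit: refine each shape with the training model,
    # then take the per-coordinate median as the final key points.
    refined = [iterate_shapes(gray, s, training_model) for s in shape_group]
    n_points = len(refined[0])
    return [tuple(statistics.median(shape[i][c] for shape in refined)
                  for c in (0, 1)) for i in range(n_points)]
```

Keeping the units as injected callables mirrors the claim's module structure: the server-side training model is the only shared state between training and extraction.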
9. The face key point localization system for face beautification according to claim 6, wherein the mobile client is a mobile phone, a tablet computer or a digital camera.
CN201610024890.9A 2016-01-15 2016-01-15 Face key point localization method and system for face beautification Active CN105469081B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610024890.9A CN105469081B (en) 2016-01-15 2016-01-15 Face key point localization method and system for face beautification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610024890.9A CN105469081B (en) 2016-01-15 2016-01-15 Face key point localization method and system for face beautification

Publications (2)

Publication Number Publication Date
CN105469081A CN105469081A (en) 2016-04-06
CN105469081B true CN105469081B (en) 2019-03-22

Family

ID=55606752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610024890.9A Active CN105469081B (en) 2016-01-15 2016-01-15 Face key point localization method and system for face beautification

Country Status (1)

Country Link
CN (1) CN105469081B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127104A * 2016-06-06 2016-11-16 安徽科力信息产业有限责任公司 Prediction system based on face key points under the Android platform and method thereof
CN106096560A * 2016-06-15 2016-11-09 广州尚云在线科技有限公司 Face alignment method
CN107169463B (en) 2017-05-22 2018-09-14 腾讯科技(深圳)有限公司 Method for detecting human face, device, computer equipment and storage medium
CN107392110A (en) * 2017-06-27 2017-11-24 五邑大学 Beautifying faces system based on internet
CN113205040A (en) * 2017-08-09 2021-08-03 北京市商汤科技开发有限公司 Face image processing method and device and electronic equipment
CN107704847B (en) * 2017-10-26 2021-03-19 成都品果科技有限公司 Method for detecting key points of human face
CN109522871B (en) * 2018-12-04 2022-07-12 北京大生在线科技有限公司 Face contour positioning method and system based on random forest
CN109635752B (en) * 2018-12-12 2021-04-27 腾讯科技(深圳)有限公司 Method for positioning key points of human face, method for processing human face image and related device
CN110634102A (en) * 2019-11-19 2019-12-31 成都品果科技有限公司 Cross-platform certificate photo mobile terminal system based on flutter and use method thereof
CN111179174B (en) * 2019-12-27 2023-11-03 成都品果科技有限公司 Image stretching method and device based on face recognition points
CN114418901B (en) * 2022-03-30 2022-08-09 江西中业智能科技有限公司 Image beautifying processing method, system, storage medium and equipment based on Retinaface algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824050A * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face key point positioning method
CN104899563A * 2015-05-29 2015-09-09 深圳大学 Two-dimensional face key feature point positioning method and system
CN104992148A * 2015-06-18 2015-10-21 江南大学 Random-forest-based detection method for partially occluded face key points at ATM terminals
CN105224935A * 2015-10-28 2016-01-06 南京信息工程大学 Real-time face key point localization method based on the Android platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9558396B2 (en) * 2013-10-22 2017-01-31 Samsung Electronics Co., Ltd. Apparatuses and methods for face tracking based on calculated occlusion probabilities

Also Published As

Publication number Publication date
CN105469081A (en) 2016-04-06

Similar Documents

Publication Publication Date Title
CN105469081B (en) Face key point localization method and system for face beautification
WO2019109526A1 (en) Method and device for age recognition of face image, storage medium
TWI358674B (en)
CN105718873B (en) Crowd flow analysis method based on binocular vision
CN110136198B (en) Image processing method, apparatus, device and storage medium thereof
CN112967341B (en) Indoor visual positioning method, system, equipment and storage medium based on live-action image
WO2020192112A1 (en) Facial recognition method and apparatus
CN108399405A (en) Business license recognition method and device
CN103116763A (en) Live face detection method based on HSV (hue, saturation, value) color space statistical characteristics
CN113361542B (en) Local feature extraction method based on deep learning
CN110458059A (en) Gesture recognition method and recognition device based on computer vision
CN103778436B (en) Pedestrian posture detection method based on image processing
CN104281835B (en) Face recognition method based on locality-sensitive kernel sparse representation
CN111191649A (en) Method and device for recognizing curved multi-line text images
CN109598220A (en) People counting method based on multivariate multi-scale input convolution
CN104346630B (en) Cloud-based flower recognition method with heterogeneous feature fusion
WO2021175040A1 (en) Video processing method and related device
CN105009172B (en) Method and apparatus for motion-blur-aware visual pose tracking
CN104537381B (en) Blurred image recognition method based on blur-invariant features
CN105825228B (en) Image recognition method and device
CN109087270B (en) Pipeline video image defogging enhancement method based on improved convolutional matching pursuit
US11659123B2 (en) Information processing device, information processing system, information processing method and program for extracting information of a captured target image based on a format of the captured image
CN109118591A (en) Augmented-reality-based cultural relic cloud recognition and interaction system and method
CN110555379B (en) Face pleasure degree estimation method that dynamically adjusts features according to gender
CN109359543B (en) Portrait retrieval method and device based on skeletonization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 13th Floor, Building 1, No. 1268, Middle Section of Tianfu Avenue, High-tech Zone, Chengdu, Sichuan, 610041

Patentee after: Chengdu PinGuo Digital Entertainment Ltd.

Address before: 3rd Floor, Building 13, Zone E, Tianfu Software Park, Chengdu High-tech Zone, Sichuan, 610000

Patentee before: Chengdu PinGuo Digital Entertainment Ltd.