CN100520806C - Automatic elimination method of artifact generated in face image synthesis - Google Patents


Info

Publication number
CN100520806C
CN100520806C CNB2007100669307A CN200710066930A
Authority
CN
China
Prior art keywords
pixel
color value
face
organ
people
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007100669307A
Other languages
Chinese (zh)
Other versions
CN101008981A (en)
Inventor
陈纯
卜佳俊
张翼
宋明黎
庞晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CNB2007100669307A priority Critical patent/CN100520806C/en
Publication of CN101008981A publication Critical patent/CN101008981A/en
Application granted granted Critical
Publication of CN100520806C publication Critical patent/CN100520806C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised

Abstract

This invention discloses a method for automatically removing ghosting artifacts in face image synthesis, in which a new face image is synthesized from two or more face images. Traditional methods amplify feature-point errors during fusion without treating local face regions differently, producing ghosting and overlap artifacts that lower image quality; the disclosed method removes large-scale ghosting and overlap artifacts by applying a set of dedicated composition formulas.

Description

Method for automatically removing ghosting artifacts in face image synthesis
Technical field
The present invention relates to the fields of computer image processing and computer vision, and in particular to a method for automatically removing ghosting artifacts in face image synthesis.
Background technology
With the rapid development of computer hardware, and especially of computer vision theory and image processing techniques, both the theory and the practical means for detecting, recognizing, and fusing face images have grown richer by the day. Face recognition and synthesis have broad application prospects, appearing everywhere from film special effects to medical image search, from public safety to computer-aided education, and from virtual conferencing to home entertainment.
The general flow of face image processing is: first preprocess the original image, then detect the face region, then identify the facial features, and finally synthesize a face according to the particular requirements and purpose. Synthesis divides into deformation effects on a single face and the fusion of several face images into a new face image.
As this flow shows, the face fusion method is a key factor in the final result. Studying and inventing better fusion algorithms and techniques to obtain better processing results is without doubt meaningful and valuable.
Among the family of face synthesis techniques, the automatic synthesis of two-dimensional (planar) face images occupies a fundamental position; it has considerable theoretical value and wide practical application. Note that face recognition and face synthesis are usually consecutive, closely linked stages: the facial feature data obtained during recognition are required during synthesis, and the algorithms and data structures the two stages use are consistent with each other.
Among face recognition and synthesis methods, the ASM and AAM models form a class of methods that is influential, mature, and widely applied in practice. However, if the facial feature point data obtained in the stage before synthesis, namely face recognition, contain errors, those errors are further amplified during fusion, and the finally synthesized face shows heavy ghosting along its edges, lowering image quality and visual appeal. Since the feature points produced by present-day face recognition methods objectively and inevitably contain errors of varying degree, the probability of ghosting and double images appearing in the fusion stage is very high.
Summary of the invention
The object of the present invention is to provide a method for automatically removing ghosting artifacts in face image synthesis.
The technical solution adopted by the present invention is as follows:
When face images are synthesized based on the ASM and AAM models, the calibrated facial feature point data (whether calibrated manually or automatically by a face recognition method) deviate from the actual feature locations of the face, and synthesizing with such feature points produces heavy ghosting. The present method combines the ASM/AAM models, the kafka algorithm, and results from visual psychology to largely remove the ghosting that arises during synthesis even when the given feature points are inaccurate, yielding a clear, smooth synthesized face.
1 Basic process
1) Basic process of face image synthesis: face image synthesis takes as input two different face images whose feature points are already marked and, through a series of processing steps, generates a new face image; this new image carries the facial features of both originals and resembles both faces; the color value of each pixel of the new image is computed from the color values of the pixels at the corresponding positions in the two input images according to a set of dedicated composition formulas, and once the color value of every pixel has been computed the whole image is determined;
2) Input two face images with marked feature points; let the first be A and the second B. Preprocess each image and compute the average color of its face region;
3) The new face image to be synthesized is called the target image; the color value of each pixel of the target image is computed from the color values of the corresponding pixels at the corresponding positions in A and B by the dedicated composition formulas above;
4) After every pixel of the target image has been computed, post-process the target image: apply fast Gaussian filtering so that the color of the whole target image becomes more uniform.
2 Computation of the color value of each target-image pixel, and post-processing
1) For each pixel of the target image, compute its distance to the face outline and to each facial organ contour, and determine their relative positions; the facial organs refer specifically to the six organs of the ASM and AAM models, namely the two eyes, the two eyebrows, the nose, and the mouth; the contour line of every organ and of the face outline is given by connecting the calibrated feature points;
There are three possible position relations between a pixel and the face outline:
A1) the pixel is inside the face outline,
A2) the pixel is outside the face outline but close to it,
A3) the pixel is outside the face outline and far from it;
For each facial organ, there are three possible position relations between a pixel and the organ:
B1) the pixel is within the organ's contour,
B2) the pixel is outside the organ's contour but close to it,
B3) the pixel is outside the organ's contour and far from it;
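The A1-A3 and B1-B3 relations amount to a point-in-polygon test plus a distance-to-contour threshold on the calibrated contour polygons. A minimal Python sketch follows; the ray-casting routine, the distance computation, and the `near_thresh` value are illustrative assumptions, not part of the patent disclosure:

```python
import math

def point_in_polygon(pt, poly):
    # Ray casting: a point is inside a closed polygon if a horizontal
    # ray starting at the point crosses the boundary an odd number of times.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def dist_to_contour(pt, poly):
    # Minimum distance from pt to any edge of the closed contour poly.
    x, y = pt
    best = float("inf")
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        dx, dy = x2 - x1, y2 - y1
        denom = dx * dx + dy * dy
        t = 0.0 if denom == 0 else max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / denom))
        best = min(best, math.hypot(x - (x1 + t * dx), y - (y1 + t * dy)))
    return best

def face_relation(pt, face_contour, near_thresh=10.0):
    # Classify a pixel as A1 (inside), A2 (outside but near), A3 (outside, far).
    if point_in_polygon(pt, face_contour):
        return "A1"
    return "A2" if dist_to_contour(pt, face_contour) <= near_thresh else "A3"
```

The B1-B3 relation for each organ is the same test applied to that organ's contour polygon.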
2) As described in 1) above, taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, select a composition formula, substitute into it the color values of the corresponding pixels of input images A and B, and compute the color value of the pixel;
First, find the pixels at the corresponding positions in the two input images A and B. Perform bilinear interpolation on their color values within their respective neighborhoods, shift each interpolation result by the average skin color computed above, and let the adjusted values serve as the color values of the corresponding pixels of A and B in the computation;
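For a single grayscale channel, the interpolation-and-adjustment step might look like the following sketch; the helper names and the exact way the average skin color enters the shift are our assumptions:

```python
def bilinear_sample(img, x, y):
    # img is a list of rows of color values; (x, y) may be fractional.
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(img[0]) - 1), min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def adjusted_color(img, x, y, own_mean, common_mean):
    # Interpolate, then translate so the image's average face color
    # lines up with a common reference mean.
    return bilinear_sample(img, x, y) + (common_mean - own_mean)
```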
Taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, the following four cases are computed with different dedicated composition formulas:
A) For a pixel within an organ contour line, compute its color value with the formula:
p = cA + frac × (cB - cA)    (1)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ;
B) For a pixel inside the face outline and close to some organ contour line, compute its color value with the formula:
p = cA + [dFrac + (1 - dTF/dTFO)^2 × (dTF - dFrac)] × (cB - cA)    (2)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
dFrac is the texture influence coefficient, for this pixel, of the organ nearest to it,
dTF is the perpendicular distance from this pixel to the nearest organ contour line,
dTFO is the perpendicular distance from this pixel to the outer contour of the face;
C) For a pixel inside the face region but far from every organ contour, such as a pixel on the cheek, its color value is determined by the formula:
p = cA + (1 - (cB - cA)^2/(255 × 255)) × (cB - cA) × frac    (3)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ;
D) For a pixel outside the face contour region but close to the face outline, its color value is determined by the formula:
p = cA + ((cB - cA)^2/(255 × 255)) × (cB - cA) × frac/(2 × dTFO)    (4)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient chosen according to the organ nearest this pixel;
dTFO is the perpendicular distance from this pixel to the outer contour of the face;
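Taken literally, the four formulas can be collected into one dispatch function. Note that formulas (2) and (4) appear in the source with flattened typography, so the exact bracketing below is our reading, and all names are ours:

```python
def composite_pixel(case, cA, cB, frac=0.0, dFrac=0.0, dTF=0.0, dTFO=1.0):
    # cA, cB: adjusted color values of the corresponding pixels of A and B.
    # `case` selects one of the patent's four situations A)-D).
    d = cB - cA
    if case == "A":   # within an organ contour, formula (1)
        return cA + frac * d
    if case == "B":   # inside the face, near an organ contour, formula (2)
        return cA + (dFrac + (1 - dTF / dTFO) ** 2 * (dTF - dFrac)) * d
    if case == "C":   # inside the face, far from all organs, formula (3)
        return cA + (1 - d * d / (255 * 255)) * d * frac
    if case == "D":   # outside the face, near the face outline, formula (4)
        return cA + (d * d / (255 * 255)) * d * frac / (2 * dTFO)
    raise ValueError("unknown case: %r" % case)
```

When cA and cB agree, every case returns cA unchanged, which matches the intent that fusion should only move a pixel where the two source images disagree.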
3) After the color values of all pixels of the whole target image have been computed, apply fast Gaussian filtering to the whole image so that its color becomes more uniform.
Compared with the prior art, the present invention has the following beneficial effects:
The new family of composition formulas and the case-selection criteria take into account the possibility that the feature points calibrated on images A and B are inaccurate, so ghosting rarely appears. If the position error of the given feature points is small, no ghosting or double image is observed in the synthesis result; if the position error is large, the result contains only a small amount of very faint ghosting which, after the post-processing step, does not interfere with viewing or use. The improvement is most evident in regions such as the forehead and the eyes, where heavy ghosting previously appeared. The method can also be applied to the synthesis of more than two face images.
Description of drawings
Fig. 1 is the result obtained with a traditional fusion method; heavy ghosting can be observed;
Fig. 2 is the result obtained with the present invention; no ghosting is observed.
Embodiment
The present invention can be used to synthesize two or more face images. The synthesized face image combines the facial features of its source images and is clear and smooth. The method is illustrated with the synthesis of two face images:
When face images are synthesized based on the ASM and AAM models, the input facial feature point data (whether calibrated manually or automatically by a face recognition method) deviate from the actual feature locations of the face, and synthesizing with such feature points produces heavy ghosting. The present method combines the ASM/AAM models, the kafka algorithm, and results from visual psychology to largely remove the ghosting that arises during synthesis even when the given feature points are inaccurate, yielding a clear, smooth synthesized face.
1 Basic process
1) Basic process of face image synthesis: face image synthesis takes as input two different face images whose feature points are already marked and, through a series of processing steps, generates a new face image; this new image carries the facial features of both originals and resembles both faces; the color value of each pixel of the new image is computed from the color values of the pixels at the corresponding positions in the two input images according to a set of dedicated composition formulas. Once the color value of every pixel has been computed, the whole image is determined;
2) Input two face images A and B with marked feature points, preprocess each of them, and compute the average color of its face region;
The preprocessing uses an interval sampling algorithm: the pixels within the face contour region of the image are sampled and a histogram is computed, from which the average color of the face is finally calculated.
The average colors are computed as follows:
colorA = (1/N) × Σ_{i=1..N} cAi    (1)
colorB = (1/M) × Σ_{i=1..M} cBi
where
colorA is the mean of all pixel color values of image A,
colorB is the mean of all pixel color values of image B,
N is the total number of pixels of image A,
M is the total number of pixels of image B,
cAi is the color value of the i-th pixel of image A,
cBi is the color value of the i-th pixel of image B;
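The interval-sampling average can be sketched as follows; the stride value, the callback for the face-region test, and the omission of the histogram step are our simplifications:

```python
def face_average_color(img, in_face, step=4):
    # Sample every `step`-th pixel in both directions, keep those that
    # lie inside the face contour, and average the sampled color values.
    total, count = 0.0, 0
    for y in range(0, len(img), step):
        for x in range(0, len(img[0]), step):
            if in_face(x, y):
                total += img[y][x]
                count += 1
    return total / count if count else 0.0
```

colorA and colorB would be obtained by calling this on image A and image B with their respective face-contour tests.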
3) The new face image to be synthesized is called the target image; the color value of each pixel of the target image is computed from the color values of the corresponding pixels at the corresponding positions in A and B by the dedicated formulas;
4) After every pixel of the target image has been computed, post-process the target image: apply fast Gaussian filtering so that the color of the whole target image becomes more uniform;
2 Computation of the color value of each target-image pixel, and post-processing
1) For each pixel of the target image, compute its distance to the face outline and to each facial organ contour, and determine their relative positions. The facial organs refer specifically to the six organs of the ASM and AAM models, namely the two eyes, the two eyebrows, the nose, and the mouth; the contour line of every organ and of the face outline is given by connecting the calibrated feature points;
There are three possible position relations between a pixel and the face outline:
A1) the pixel is inside the face outline,
A2) the pixel is outside the face outline but close to it,
A3) the pixel is outside the face outline and far from it;
For each facial organ, there are three possible position relations between a pixel and the organ:
B1) the pixel is within the organ's contour,
B2) the pixel is outside the organ's contour but close to it,
B3) the pixel is outside the organ's contour and far from it;
2) As described in 1) above, taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, select a synthesis strategy and composition formula, substitute into it the color values of the corresponding pixels of input images A and B, and compute the color value of the pixel;
First, following the theory of the ASM model and using the feature points calibrated on the two input images A and B, find the pixels at the corresponding positions in each image. Perform bilinear interpolation on the color values of the two pixels found, within their respective neighborhoods, shift each interpolation result by the average skin color computed above, and let the adjusted values serve as the color values of the corresponding pixels of A and B in the computation;
Taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, the following four cases are computed with different composition formulas. The color value of every pixel of the target image goes through this computation. In general, starting from the upper-left corner of the target image, the pixels of each row are computed one by one; when a row is finished, the next row is processed, until the color values of all pixels of the whole image have been determined and the target image is generated.
Appendix: composition formula coefficient table
          frac    dFrac
Eyebrows  0.2     0.8
Eyes      0.08    0.6
Nose      0.08    0.24
Mouth     0.2     0.65
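The coefficient table transcribes directly to a per-organ lookup. A sketch, with the formula for a pixel inside an organ contour attached (the organ key names are our choice):

```python
# Coefficients transcribed from the composition formula coefficient table.
COEFFS = {
    "eyebrow": {"frac": 0.2,  "dFrac": 0.8},
    "eye":     {"frac": 0.08, "dFrac": 0.6},
    "nose":    {"frac": 0.08, "dFrac": 0.24},
    "mouth":   {"frac": 0.2,  "dFrac": 0.65},
}

def organ_interior_color(cA, cB, organ):
    # p = cA + frac * (cB - cA), with frac looked up per organ.
    return cA + COEFFS[organ]["frac"] * (cB - cA)
```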
A) For a pixel within an organ contour line, compute its color value with the formula:
p = cA + frac × (cB - cA)    (2)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ; the value of the coefficient changes with the organ. For example, when this formula is applied to a pixel inside an eye contour, the coefficient chosen differs from the one used in the formula for a pixel inside the nose contour. All coefficient values were obtained by measurement so as to be generally applicable to different face images; see the composition formula coefficient table.
B) For a pixel inside the face outline and close to some organ contour line, compute its color value with the formula:
p = cA + [dFrac + (1 - dTF/dTFO)^2 × (dTF - dFrac)] × (cB - cA)    (3)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
dFrac is the texture influence coefficient, for this pixel, of the organ nearest to it; see the composition formula coefficient table,
dTF is the perpendicular distance from this pixel to the nearest organ contour line,
dTFO is the perpendicular distance from this pixel to the outer contour of the face;
C) For a pixel inside the face region but far from every organ contour, such as a pixel on the cheek, its color value is determined by the formula:
p = cA + (1 - (cB - cA)^2/(255 × 255)) × (cB - cA) × frac    (4)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ; see the composition formula coefficient table;
D) For a pixel outside the face contour region but close to the face outline, its color value is determined by the formula:
p = cA + ((cB - cA)^2/(255 × 255)) × (cB - cA) × frac/(2 × dTFO)    (5)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient chosen according to the organ nearest this pixel; see the composition formula coefficient table;
dTFO is the perpendicular distance from this pixel to the outer contour of the face;
3) After the color values of all pixels of the whole target image have been computed, apply fast Gaussian filtering to the whole image so that its color becomes more uniform:
colorC = (1/M) × Σ_{i=1..M} cCi    (6)
where
colorC is the average color of all pixels of the target image,
cCi is the color value of the i-th pixel of the target image;
For each pixel of the target image:
cCi' = 2π × σ × A × e^(-2π² × σ² × (cCi - colorC)²)    (7)
where
colorC is the average color of all pixels of the target image,
cCi is the color value of the i-th pixel of the target image,
cCi' is the color value of the i-th pixel of the target image after Gaussian filtering,
M is the total number of pixels of the target image,
A and σ are the parameters of the Gaussian filtering formula.
After filtering, the target image is generated.
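Read literally, formulas (6) and (7) compute the mean color of the target image and then pass each pixel's deviation from that mean through a Gaussian-shaped function. The sketch below implements that literal reading; formula (7) appears in the source with flattened typography, so the prefactor, the exponent grouping, and the parameter values here are our assumptions:

```python
import math

def gaussian_postprocess(pixels, A=1.0, sigma=0.05):
    # Formula (6): mean color of the target image.
    mean = sum(pixels) / len(pixels)
    # Formula (7): map each pixel through the Gaussian of its deviation
    # from the mean; pixels far from the mean are pushed toward zero.
    return [2 * math.pi * sigma * A
            * math.exp(-2 * math.pi ** 2 * sigma ** 2 * (c - mean) ** 2)
            for c in pixels]
```

In practice, "fast Gaussian filtering" of an image usually denotes a separable spatial Gaussian blur; the code above is only the per-pixel form the formulas state.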

Claims (1)

1. A method for automatically removing ghosting artifacts in face image synthesis, the steps of the method being as follows: input two face images with marked feature points, the first being A and the second B; preprocess each image, compute the average skin color of image A and of image B according to the theory of skin color models, and compute the average skin color difference; fuse the shape features and texture features of the two input face images to finally generate a new face image; the new synthesized face image is called the target image, and the color value of each pixel of the target image is derived from the color values of the corresponding pixels at the corresponding positions in A and B, with reference to the position of the pixel relative to the face contour; the new face image has the shape and texture features of the faces of both original images and bears a resemblance to both; after every pixel of the target image has been computed, post-process the target image by fast Gaussian filtering so that the color of the whole target image becomes more uniform;
characterized in that: in the above steps a pixel occupies one of four positions relative to the face contour, and a different color value composition formula is selected for each:
1) For each pixel of the target image, compute the distance from the pixel to the face outline and to each facial organ contour, and determine their relative positions; the facial organs refer specifically to the six organs of the ASM and AAM models, namely the two eyes, the two eyebrows, the nose, and the mouth; the contour line of every organ and of the face outline is given by connecting the calibrated feature points;
There are three possible position relations between a pixel and the face outline:
A1) the pixel is inside the face outline,
A2) the pixel is outside the face outline but close to it,
A3) the pixel is outside the face outline and far from it;
For each facial organ, there are three possible position relations between a pixel and the organ:
B1) the pixel is within the organ's contour,
B2) the pixel is outside the organ's contour but close to it,
B3) the pixel is outside the organ's contour and far from it;
2) Taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, select a composition formula, substitute into it the color values of the corresponding pixels of input images A and B, and compute the color value of the pixel;
First, find the pixels at the corresponding positions in the two input images A and B, perform bilinear interpolation on their color values within their respective neighborhoods, shift each interpolation result by the average skin color difference computed in the step described above, and let the adjusted values serve as the color values of the corresponding pixels of A and B in the computation;
Taking into account the different relative positions of a pixel with respect to the face outline and each facial organ, the following four cases are computed with different dedicated composition formulas, and every pixel of the face image must fall into one of these four cases:
A) For a pixel within an organ contour line, compute its color value with the formula:
p = cA + frac × (cB - cA)    (1)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ;
B) For a pixel inside the face outline and close to some organ contour line, compute its color value with the formula:
p = cA + [dFrac + (1 - dTF/dTFO)^2 × (dTF - dFrac)] × (cB - cA)    (2)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
dFrac is the texture influence coefficient, for the target pixel, of the organ nearest to it,
dTF is the perpendicular distance from the target pixel to the nearest organ contour line,
dTFO is the perpendicular distance from the target pixel to the outer contour of the face;
C) For a pixel inside the face region but far from every organ contour, such as a pixel on the cheek, its color value is determined by the formula:
p = cA + (1 - (cB - cA)^2/(255 × 255)) × (cB - cA) × frac    (3)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient selected according to the organ;
D) For a pixel not belonging to any of the three cases above, its color value is determined by the formula:
p = cA + ((cB - cA)^2/(255 × 255)) × (cB - cA) × frac/(2 × dTFO)    (4)
where
p is the color value of the target pixel,
cA is the color value of the corresponding pixel of image A,
cB is the color value of the corresponding pixel of image B,
frac is a coefficient chosen according to the organ nearest the target pixel;
dTFO is the perpendicular distance from the target pixel to the outer contour of the face.
CNB2007100669307A 2007-01-26 2007-01-26 Automatic elimination method of artifact generated in face image synthesis Expired - Fee Related CN100520806C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100669307A CN100520806C (en) 2007-01-26 2007-01-26 Automatic elimination method of artifact generated in face image synthesis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007100669307A CN100520806C (en) 2007-01-26 2007-01-26 Automatic elimination method of artifact generated in face image synthesis

Publications (2)

Publication Number Publication Date
CN101008981A CN101008981A (en) 2007-08-01
CN100520806C true CN100520806C (en) 2009-07-29

Family

ID=38697399

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100669307A Expired - Fee Related CN100520806C (en) 2007-01-26 2007-01-26 Automatic elimination method of artifact generated in face image synthesis

Country Status (1)

Country Link
CN (1) CN100520806C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770649B (en) * 2008-12-30 2012-05-02 中国科学院自动化研究所 Automatic synthesis method for facial image
CN106156730B (en) * 2016-06-30 2019-03-15 腾讯科技(深圳)有限公司 A kind of synthetic method and device of facial image
WO2019075656A1 (en) 2017-10-18 2019-04-25 腾讯科技(深圳)有限公司 Image processing method and device, terminal, and storage medium
CN108764206B (en) * 2018-06-07 2020-07-28 广州杰赛科技股份有限公司 Target image identification method and system and computer equipment

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Smooth model adjustment and realistic texture mapping in face synthesis. Zhao Xiangyang, Du Limin. Journal of Computer-Aided Design & Computer Graphics, Vol. 16, No. 11. 2004 *
Recognition, reconstruction and synthesis of facial expressions. Song Mingli. Full text, China Doctoral and Masters Dissertations Full-text Database. 2005 *
An analysis-by-synthesis method for generating face models from orthogonal images. Su Congyong, Zhuang Yueting, Huang Li, Wu Fei. Journal of Zhejiang University (Engineering Science), Vol. 39, No. 2. 2005 *
Design and implementation of a computer face synthesis system. Li Wujun, Ren Zhongfang, Chen Zhaoqian. Application Research of Computers, No. 7. 2004 *

Also Published As

Publication number Publication date
CN101008981A (en) 2007-08-01

Similar Documents

Publication Publication Date Title
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN103761519B (en) Non-contact sight-line tracking method based on self-adaptive calibration
EP2685419B1 (en) Image processing device, image processing method, and computer-readable medium
CN105049911A (en) Video special effect processing method based on face identification
Mozaffari et al. Gender classification using single frontal image per person: combination of appearance and geometric based features
CN104794693B (en) A kind of portrait optimization method of face key area automatic detection masking-out
CN107423678A (en) A kind of training method and face identification method of the convolutional neural networks for extracting feature
CN108491784A (en) The identification in real time of single feature towards large-scale live scene and automatic screenshot method
CN107123088A (en) A kind of method of automatic replacing photo background color
CN110516575A (en) GAN based on residual error domain richness model generates picture detection method and system
CN101299268A (en) Semantic object dividing method suitable for low depth image
CN103886589A (en) Goal-oriented automatic high-precision edge extraction method
CN109711268B (en) Face image screening method and device
CN110348263A (en) A kind of two-dimensional random code image recognition and extracting method based on image recognition
CN108734710A (en) A kind of intelligence fruits and vegetables selection method
CN106778785A (en) Build the method for image characteristics extraction model and method, the device of image recognition
CN100520806C (en) Automatic elimination method of artifact generated in face image synthesis
CN107798279A (en) Face living body detection method and device
CN108009493A (en) Face anti-fraud recognition methods based on action enhancing
CN108090485A (en) Display foreground extraction method based on various visual angles fusion
CN109215010A (en) A kind of method and robot face identification system of picture quality judgement
CN106650606A (en) Matching and processing method for face image and face image model construction system
CN109712095B (en) Face beautifying method with rapid edge preservation
CN106127735A (en) A kind of facilities vegetable edge clear class blade face scab dividing method and device
CN109359577A (en) A kind of Complex Background number detection system based on machine learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090729

Termination date: 20190126

CF01 Termination of patent right due to non-payment of annual fee