CN1870049A - Human face countenance synthesis method based on dense characteristic corresponding and morphology - Google Patents

Human face countenance synthesis method based on dense characteristic corresponding and morphology

Info

Publication number
CN1870049A
Authority
CN
China
Prior art keywords
expression
vector
prime
shpb
human face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200610042981
Other languages
Chinese (zh)
Inventor
游屈波
刘跃虎
袁泽剑
刘剑毅
郑南宁
杜少毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN 200610042981 priority Critical patent/CN1870049A/en
Publication of CN1870049A publication Critical patent/CN1870049A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for synthesizing human face expressions based on dense feature correspondence and morphology. A decomposition-based representation of the face image separates the positional information of facial features from their gray-level information, so that the shape and texture differences arising in an expression change can be captured without mutual interference. These differences are mapped onto a new face image; morphological operations extract the white-spot and black-spot noise regions caused by alignment error, and linear interpolation filters them out, completing the expression synthesis.

Description

Human face expression synthesis method based on dense feature correspondence and morphology
Technical field
The present invention relates to the technical fields of computer vision and computer graphics, and in particular proposes an expression synthesis method that produces realistic face images based on dense feature correspondence and morphological filtering.
Background technology
A facial expression is a person's subjective presentation of his or her facial behavior and attitude. Psychological studies show that the human face can produce about 55,000 different expressions, of which more than 30 kinds can be distinguished and named in natural language. Expression transformation and synthesis of face images is an important research branch of computer vision and computer graphics, with wide applications in the coding and transmission of face images, virtual reality, human-computer interaction, remote video conferencing, and film special effects. Because expression synthesis involves the understanding, classification, and representation of expressions as well as physiological characteristics, and is closely related to the representation and modeling of face images, it has always been a focus of research.
Current computer techniques for synthesizing facial expressions fall into four main classes:
The first class comprises key-frame (morphing) methods, first proposed by Parke et al. These generate a series of transition images between two expression states by interpolating (morphing) between two face images with different expressions. Their greatest shortcoming is that they require two images of the same person under different expressions; given only a single image of a person, they cannot synthesize that person's face under other expressions.
The second class comprises parameter-based expression synthesis methods, including the classical muscle-model and pseudo-muscle-model approaches. Although these can synthesize expressions of fairly good quality, they share a common difficulty: matching a model to a specific face requires a great deal of tedious manual work, parameter selection is difficult, and the computational cost is high.
The third class comprises expression-mapping methods. The idea is simple: given an image of someone's neutral face and another image of the same person with a certain expression, the feature points of both images (eye corners, mouth corners, etc.) are located manually or automatically; the positional offsets of the feature points across the expression change are then added to the corresponding feature points of a new face image, and morphing is used to synthesize the new face's expression image. The method is simple, effective, and computationally cheap, but its main problem is that it considers only the positional changes of facial feature points and ignores changes in texture detail, such as the wrinkles produced by an expression change.
The fourth class is the Facial Action Coding System (FACS), proposed by Ekman on the basis of the anatomical structure of the human face; it comprises 44 action units. An action unit is the smallest unit of expression; every expression can be formed by combining a subset of them, and expression synthesis becomes the process of controlling the motion of these action units. FACS-based expression synthesis is often combined with pseudo-muscle parametric models or control-point face meshes. Its computational cost is high and it is rather subjective: the action units must be predefined, and then the combination formulas for common expressions must be defined as well. Because many details of expression change are ignored, it can only approximate the characteristics of each expression, and the synthesized expressions are not lively enough.
Summary of the invention
The objective of the invention is to overcome the shortcomings of existing face expression synthesis methods by providing a method based on dense feature correspondence and morphology. The method represents the face image through dense feature correspondence, uses morphological operations to extract and then filter out the noise arising from alignment error, and can rapidly synthesize new, more realistic facial expressions.
The technical scheme of the invention is realized through the following steps:
1) Using the feature lines of a reference image I_average, apply a feature-line-based warp to the specific person's neutral expression image A, that person's expression image A', and the neutral expression image B of the face to be transformed. Each warped face image is called a texture vector, and the displacements of corresponding points before and after the warp constitute a shape vector. This yields vectorized representations of the three images: (shpA, texA), (shpA', texA'), and (shpB, texB), where shp denotes the shape vector and tex the texture vector;
2) Compute the new shape vector after the expression transformation: shpB' = shpB + Δshp, where Δshp = shpA' − shpA;
3) Compute the expression ratio image: R = texA'/texA;
4) To eliminate the white-spot and black-spot noise in the ratio image R, first binarize the ratio image with thresholds chosen to separate noise from non-noise regions: a threshold of 1.3-1.6 extracts the white-spot regions and a threshold of 0.2-0.3 extracts the black-spot regions. Then apply a morphological dilation to these noise regions so that their transition zones are extracted completely. Finally, fill the noise regions by linear interpolation, completing the denoising of the ratio image;
5) Compute the new texture vector after the expression transformation: texB' = texB·R, where R is the ratio image denoised in step 4);
6) In (shpB', texB'), the shape vector shpB' records the displacement direction and distance of every pixel. Displacing every pixel of the texture vector texB' according to shpB' recovers the reconstructed image B'; this is precisely a forward warp.
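Steps 2), 3), and 5) above amount to plain per-pixel vector arithmetic. The NumPy sketch below is illustrative only: the function name and the eps guard against division by zero are not from the patent.

```python
import numpy as np

def synthesize_expression_vectors(shpA, texA, shpA2, texA2, shpB, texB, eps=1e-6):
    """Steps 2), 3) and 5): transfer A's shape change and texture ratio
    onto B.  shp* are dense per-pixel displacement vectors, tex* are
    shape-normalized gray-level vectors; the primed quantities of the
    patent (shpA', texA') appear here as shpA2, texA2."""
    d_shp = shpA2 - shpA          # step 2): expression-induced shape change
    shpB2 = shpB + d_shp          # new shape vector shpB'
    R = texA2 / (texA + eps)      # step 3): expression ratio image
    texB2 = texB * R              # step 5): new texture vector texB'
    return shpB2, texB2, R
```

Step 4), the denoising of R, is omitted here; in the full method R is denoised before the multiplication of step 5).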
The filling of the noise regions by linear interpolation uses the following interpolation formulas:
R'_y(x, y) = R(x, y1)·(1 − (y − y1)/(y2 − y1)) + R(x, y2)·(y − y1)/(y2 − y1)
R'_x(x, y) = R(x1, y)·(1 − (x − x1)/(x2 − x1)) + R(x2, y)·(x − x1)/(x2 − x1)
R'(x, y) = (R'_y(x, y) + R'_x(x, y)) / 2
Here R denotes the ratio image before filtering. After the intersections of the horizontal and vertical lines through the interpolation point (x, y) with the edge of the fill region have been located, the gray values of these points in R are linearly interpolated in each direction, and the average of the two directions is taken as the final interpolation result.
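A minimal NumPy sketch of this two-direction fill, assuming the noise region is given as a boolean mask. Along each row and column, np.interp interpolates linearly between the nearest clean pixels, which matches the formulas above; all names are illustrative.

```python
import numpy as np

def fill_noise_linear(R, mask):
    """Fill masked (noise) pixels of ratio image R: 1-D linear
    interpolation along columns (R'_y) and along rows (R'_x) between the
    nearest clean pixels, then average the two directions."""
    def interp_1d(line, bad):
        good = ~bad
        if good.sum() < 2:           # not enough clean pixels to interpolate
            return line.copy()
        x = np.arange(line.size)
        out = line.copy()
        out[bad] = np.interp(x[bad], x[good], line[good])
        return out

    Ry = np.stack([interp_1d(R[:, j], mask[:, j]) for j in range(R.shape[1])], axis=1)
    Rx = np.stack([interp_1d(R[i, :], mask[i, :]) for i in range(R.shape[0])], axis=0)
    out = R.copy()
    out[mask] = 0.5 * (Ry[mask] + Rx[mask])
    return out
```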
By decomposing the face image into this representation, the invention separates the positional information of facial features from their gray-level information, and captures the shape and texture differences of an expression change without mutual interference. These differences are mapped onto a new face image, and morphological operations extract and filter out the noise arising from alignment error, completing the expression synthesis. Experiments show that the invention reproduces the texture differences of an expression change well, which also matches human perception of expression change; the realism of the synthesized expressions is greatly enhanced, and the computational cost is low.
The invention has the following characteristics:
1) Facial image features are given a dense representation, and this modeling is combined with the expression transformation of the face image. This representation avoids complicated face modeling and the subjectivity of parameter selection, giving the synthesized expressions strong realism and expressiveness.
2) Following the idea of expression mapping, dense feature correspondence is combined with the expression ratio image: the face image is decomposed into a shape vector and a texture vector, and the feature correspondence is established during the decomposition. This avoids the repeated interpolation otherwise needed to achieve pixel-level dense correspondence, greatly improving processing speed.
3) Morphology-based extraction and filtering of the noise arising from alignment error. In expression-mapping methods the ratio image is aligned by manually drawn feature lines, so alignment error is unavoidable and produces white-spot and black-spot noise. The invention locates these noise regions morphologically and fills them by linear interpolation, improving the robustness of the synthesis method.
Description of the drawings:
Fig. 1 is a flow diagram of the expression synthesis algorithm based on dense feature correspondence;
Fig. 2 shows the feature lines marked by hand on a face image;
Fig. 3 shows the standard face shape obtained by averaging the feature lines of the sample images;
Fig. 4 shows the average face obtained from 200 sample images;
Fig. 5 illustrates the vectorized representation of a new face image. In the figure,
(a) is the new face image; (b) is the shape vector; (c) is the texture vector.
Embodiment
With reference to Fig. 1, the expression synthesis algorithm based on dense feature correspondence is divided into an offline part and an online part, and comprises the following steps:
Suppose the reference image is I_average, the neutral (natural) expression image of a known specific face is A, that face's expression image is A', and the neutral (natural) expression image of the face to be transformed is B; the goal is to synthesize a face image B' with an expression similar to A'. The concrete steps are as follows:
1) Using the face shape of the reference image I_average, decompose the images A, A', and B to obtain their respective representations (shpA, texA), (shpA', texA'), and (shpB, texB), where shp denotes the shape vector and tex the texture vector;
2) Compute the new shape vector after the expression transformation: shpB' = shpB + Δshp, where Δshp = shpA' − shpA;
3) Compute the new texture vector after the expression transformation: texB' = texB·R, where R = texA'/texA;
4) From (shpB', texB'), reconstruct the image B' by a forward warp.
Fig. 2 shows the feature lines marked by hand on a face image: 28 line segments mark the contour of the face and the positions of the facial features, and these segments carry the shape information of the face. The same marking must be performed on every face image in the library; it is the basis of the subsequent image vectorization.
Fig. 3 shows the standard face shape obtained by averaging the feature lines of the sample images. The feature lines of each sample in the library are recorded as the coordinates of 28 line segments, forming a 28 × 2 × 2-dimensional vector; the arithmetic mean of the feature vectors of all 200 samples gives the standard face shape.
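As a hypothetical sketch (the patent does not give code), the standard face shape is just the arithmetic mean of the per-sample coordinate arrays, with 28 segments, 2 endpoints per segment, and an (x, y) pair per endpoint:

```python
import numpy as np

def standard_face_shape(samples):
    """samples: array of shape (n_samples, 28, 2, 2) -- the 28 feature
    segments of each sample, 2 endpoints each, (x, y) per endpoint.
    The standard face shape is the arithmetic mean over the library."""
    samples = np.asarray(samples, dtype=float)
    assert samples.shape[1:] == (28, 2, 2)
    return samples.mean(axis=0)
```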
Fig. 4 shows the average face obtained from 200 sample images. The upper image is the average face shape obtained by averaging the feature lines of the 200 samples; it reflects the common shape of all samples. The lower image is the direct average of the corresponding pixel gray values of these samples; it reflects their common texture.
Fig. 5 illustrates the vectorized representation of a new face image. The dense-feature representation of a face image is a vectorized representation: the feature point set is arranged into an ordered vector, and feature alignment is completed through a fixed reference image. Once the reference image defines a standard face shape, all features of a new face image are measured relative to that reference. In the present invention the standard face shape is obtained by averaging the feature lines of a library of face image samples, as shown in Fig. 3.
(a) New face image: any frontal gray-scale face image; after vectorization against the standard face shape it decomposes into a shape vector and a texture vector, described below.
(b) Shape vector: the shape vector describes the change in feature positions of a new face image i_a relative to the reference image i_ref. If the features are sampled densely at pixel level, the shape vector contains 2n values (n is the number of pixels of the face image): every pixel (or feature point) has a corresponding displacement (Δx, Δy). The shape vector is denoted Shp_a.
(c) Texture vector: the texture vector is the image obtained by warping the face image onto the standard face shape. Since the shape differences between face images are completely eliminated, the texture vector is independent of shape.
Given the shape and texture vectors of a face image, reconstructing the corresponding face image is simply a forward warp.
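A forward warp can be sketched as nearest-pixel scattering of the texture according to the per-pixel displacements. This is an illustrative simplification: a production implementation would also fill the holes that forward mapping leaves.

```python
import numpy as np

def forward_warp(tex, shp, height, width):
    """Reconstruct an image from (shp, tex): move every pixel of the
    texture image by its displacement (dx, dy), rounding to the nearest
    target pixel and clipping at the image border."""
    tex = np.asarray(tex, dtype=float).reshape(height, width)
    disp = np.asarray(shp, dtype=float).reshape(height, width, 2)
    dx, dy = disp[..., 0], disp[..., 1]
    out = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    xt = np.clip(np.round(xs + dx).astype(int), 0, width - 1)
    yt = np.clip(np.round(ys + dy).astype(int), 0, height - 1)
    out[yt, xt] = tex                  # scatter each source pixel forward
    return out
```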
Under this representation, synthesizing someone's face image under a new expression reduces to solving for the shape and texture vectors of the corresponding face image under that expression.
Consider the face images of two people A and B. Let S_A and S_A' be the shape vectors of A's neutral face and of A under a certain expression, and S_B and S_B' the shape vectors of B's neutral face and of B under the same expression. By the basic idea of expression mapping, the shape vector after expression synthesis should satisfy S_B' = S_B + (S_A' − S_A). Unlike the sparse feature representation of traditional expression mapping, the invention uses a dense feature representation, so no extra feature-point interpolation is needed, which greatly accelerates processing.
An expression change also changes the texture details of the face image correspondingly. To capture this change, the invention adopts the concept of the expression ratio image (ERI).
Let P be any point on a surface Π, n the normal vector at that point, l_i (1 ≤ i ≤ m) the direction from P to the i-th light source, I_i the intensity of the corresponding light source, and ρ the reflection coefficient at P. Then, by the Lambertian model, the brightness at P is:
I = ρ Σ_{i=1}^{m} I_i (n · l_i) = ρE(n)
After the surface texture changes, the brightness at P becomes: I' = ρ Σ_{i=1}^{m} I_i (n' · l_i') = ρE(n')
A real function can therefore be defined on Π: R ≡ [Σ_{i=1}^{m} I_i (n' · l_i')] / [Σ_{i=1}^{m} I_i (n · l_i)] = E(n')/E(n), called an expression ratio image of the surface Π. From this equation, R is independent of the reflection coefficients of the surface points.
Consider the face images of two people A and B. Let T_A and T_A' be the texture vectors of A's neutral face and of A under a certain expression, and T_B and T_B' the texture vectors of B's neutral face and of B under the same expression. From the expression above, T_A'/T_A = [Σ_{i=1}^{m} I_i (n_a' · l_ia')] / [Σ_{i=1}^{m} I_i (n_a · l_ia)] and T_B'/T_B = [Σ_{i=1}^{m} I_i (n_b' · l_ib')] / [Σ_{i=1}^{m} I_i (n_b · l_ib)]. Because the texture vectors in our image representation correspond to identical feature points, corresponding point positions can be assumed to have identical normal vectors and identical light-source directions: n_a = n_b, n_a' = n_b', l_ia = l_ib, and l_ia' = l_ib'. Therefore T_A'/T_A = T_B'/T_B, i.e.
T_B' = T_B · T_A'/T_A.
This gives the texture vector T_B' of B under the new expression. Because an expression change tends to produce local wrinkles, T_A and T_A' are not fully identical at corresponding point positions, so the expression above captures the changes in texture detail caused by the expression change. Traditional expression-mapping methods ignore these changes and simply set T_B' = T_B.
In the ERI method, the ratio image is obtained through feature alignment by manually drawn lines, so alignment error is unavoidable, and differences in its degree produce black-spot and white-spot noise. A white spot is a distorted factor of abnormally high ratio; a black spot is a near-zero factor. When the ratio image is multiplied with the gray-scale image after mapping onto the target face shape, both distort the intent of the ERI and degrade the final appearance of the synthesized expression.
The white-spot and black-spot regions are extracted by thresholding and then filled to filter out this noise. The invention locates the noise regions in the expression ratio image using mathematical morphology. The thresholds are chosen so as not to touch the texture regions; since this alone cannot extract the transition zones of the noise regions, a dilation operation is then applied to the located white-spot and black-spot regions, so that the noise regions and their transition zones are extracted completely while the texture regions are protected. Finally the noise regions of the ratio image are filled, using a linear-interpolation-based approach, to filter out the noise.
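The thresholding-plus-dilation localization described above can be sketched in pure NumPy. The threshold values are taken from the patent's stated ranges (white spots 1.3-1.6, black spots 0.2-0.3), but the exact values, the 3×3 structuring element, and the single dilation pass are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def noise_mask(R, lo=0.25, hi=1.45, dilate_iters=1):
    """Locate white-spot (R > hi) and black-spot (R < lo) regions of the
    ratio image, then dilate the mask with a 3x3 structuring element so
    that the transition zones around the noise are captured as well."""
    mask = (R > hi) | (R < lo)
    H, W = R.shape
    for _ in range(dilate_iters):
        padded = np.pad(mask, 1, mode="constant")
        mask = np.zeros_like(mask)
        # 3x3 dilation: a pixel is set if any neighbour (incl. itself) is set
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                mask |= padded[1 + dy : 1 + dy + H, 1 + dx : 1 + dx + W]
    return mask
```

The returned mask is exactly what the linear-interpolation fill of the previous step consumes.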

Claims (2)

1. A human face expression synthesis method based on dense feature correspondence and morphology, characterized by comprising the following steps:
1) Using the feature lines of a reference image I_average, apply a feature-line-based warp to the specific person's neutral expression image A, that person's expression image A', and the neutral expression image B of the face to be transformed. Each warped face image is called a texture vector, and the displacements of corresponding points before and after the warp constitute a shape vector. This yields vectorized representations of the three images: (shpA, texA), (shpA', texA'), and (shpB, texB), where shp denotes the shape vector and tex the texture vector;
2) Compute the new shape vector after the expression transformation: shpB' = shpB + Δshp, where Δshp = shpA' − shpA;
3) Compute the expression ratio image: R = texA'/texA;
4) To eliminate the white-spot and black-spot noise in the ratio image R, first binarize the ratio image with thresholds chosen to separate noise from non-noise regions: a threshold of 1.3-1.6 extracts the white-spot regions and a threshold of 0.2-0.3 extracts the black-spot regions; then apply a morphological dilation to these noise regions so that their transition zones are extracted completely; finally, fill the noise regions by linear interpolation, completing the denoising of the ratio image;
5) Compute the new texture vector after the expression transformation: texB' = texB·R, where R is the ratio image denoised in step 4);
6) In (shpB', texB'), the shape vector shpB' records the displacement direction and distance of every pixel; displacing every pixel of the texture vector texB' according to shpB' recovers the reconstructed image B', which is precisely a forward warp.
2. The human face expression synthesis method based on dense feature correspondence and morphology according to claim 1, characterized in that the filling of the noise regions by linear interpolation uses the following interpolation formulas:
R'_y(x, y) = R(x, y1)·(1 − (y − y1)/(y2 − y1)) + R(x, y2)·(y − y1)/(y2 − y1)
R'_x(x, y) = R(x1, y)·(1 − (x − x1)/(x2 − x1)) + R(x2, y)·(x − x1)/(x2 − x1)
R'(x, y) = (R'_y(x, y) + R'_x(x, y)) / 2
Here R denotes the ratio image before filtering. After the intersections of the horizontal and vertical lines through the interpolation point (x, y) with the edge of the fill region have been located, the gray values of these points in R are linearly interpolated in each direction, and the average of the two directions is taken as the final interpolation result.
CN 200610042981 2006-06-15 2006-06-15 Human face countenance synthesis method based on dense characteristic corresponding and morphology Pending CN1870049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200610042981 CN1870049A (en) 2006-06-15 2006-06-15 Human face countenance synthesis method based on dense characteristic corresponding and morphology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200610042981 CN1870049A (en) 2006-06-15 2006-06-15 Human face countenance synthesis method based on dense characteristic corresponding and morphology

Publications (1)

Publication Number Publication Date
CN1870049A true CN1870049A (en) 2006-11-29

Family

ID=37443706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200610042981 Pending CN1870049A (en) 2006-06-15 2006-06-15 Human face countenance synthesis method based on dense characteristic corresponding and morphology

Country Status (1)

Country Link
CN (1) CN1870049A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163322A (en) * 2011-03-16 2011-08-24 哈尔滨工程大学 Suppression method of co-channel interference in radar image based on Laplace operator
CN103198460A (en) * 2011-11-04 2013-07-10 索尼公司 Image processing apparatus, image processing method, and program
CN107292939A (en) * 2016-04-07 2017-10-24 掌赢信息科技(上海)有限公司 A kind of wrinkle generation method and electronic equipment
CN106570911A (en) * 2016-08-29 2017-04-19 上海交通大学 DAISY descriptor-based facial caricature synthesis method
CN106570911B (en) * 2016-08-29 2020-04-10 上海交通大学 Method for synthesizing facial cartoon based on daisy descriptor
CN110929681A (en) * 2019-12-05 2020-03-27 南京所由所以信息科技有限公司 Wrinkle detection method
CN110929681B (en) * 2019-12-05 2023-04-18 南京所由所以信息科技有限公司 Wrinkle detection method
CN112562026A (en) * 2020-10-22 2021-03-26 百果园技术(新加坡)有限公司 Wrinkle special effect rendering method and device, electronic equipment and storage medium
CN112396693A (en) * 2020-11-25 2021-02-23 上海商汤智能科技有限公司 Face information processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN1870049A (en) Human face countenance synthesis method based on dense characteristic corresponding and morphology
US11967083B1 (en) Method and apparatus for performing segmentation of an image
US8983178B2 (en) Apparatus and method for performing segment-based disparity decomposition
CN105574827B (en) A kind of method, apparatus of image defogging
CN104598915B (en) A kind of gesture identification method and device
CN1120629C (en) Image segmentation and object tracking method and corresponding system
CN1870047A (en) Human face image age changing method based on average face and senile proportional image
US9723296B2 (en) Apparatus and method for determining disparity of textured regions
CN103826032B (en) Depth map post-processing method
CN101441766B (en) SAR image fusion method based on multiple-dimension geometric analysis
CN105303598A (en) Multi-style video artistic processing method based on texture transfer
CN101923721B (en) Non-illumination face image reconstruction method and system
CN108564120A (en) Feature Points Extraction based on deep neural network
CN102457724B (en) Image motion detecting system and method
CN100337473C (en) Panorama composing method for motion video
CN1150769C (en) Static image generation method and device
CN106204461A (en) Compound regularized image denoising method in conjunction with non local priori
CN101063605A (en) Real time three-dimensional vision system based on two-dimension colorama encoding
CN104850232B (en) A kind of method obtaining long-range gesture path under the conditions of photographic head
CN106127763A (en) One has extensive adaptive image binaryzation method
CN1920880A (en) Video flow based people face expression fantasy method
CN106127765A (en) Image binaryzation system based on self-adapting window and smooth threshold method
CN100337472C (en) Video composing method with motion prospect
CN1286962A (en) Real-time body gait image detecting method
CN103955178B (en) A kind of elevating mechanism antenna array control method of three-dimensional dynamic scene display systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication