CN104123749A - Picture processing method and system - Google Patents

Picture processing method and system

Info

Publication number
CN104123749A
CN104123749A (application number CN201410352939.4A)
Authority
CN
China
Prior art keywords
person
face
region
model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410352939.4A
Other languages
Chinese (zh)
Inventor
Xing Xiaoyue (邢小月)
Meng Zhaolong (孟昭龙)
The other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201410352939.4A priority Critical patent/CN104123749A/en
Publication of CN104123749A publication Critical patent/CN104123749A/en
Priority to PCT/CN2015/077353 priority patent/WO2016011834A1/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a picture processing method and system. The method includes: simulating a model of a first person from at least one picture containing the first person; determining a target picture containing a second person; determining feature information describing how the second person is displayed in the target picture; adjusting the display of the first person in the model according to that feature information; and, in the target picture, replacing the second person with the adjusted first person. This scheme solves the problem that, because no relationship is established between the persons before and after replacement, the replaced person appears inconsistent with, and jarring against, the background.

Description

Image processing method and system
Technical field
The present invention relates to image processing technology, and in particular to an image processing method and system.
Background technology
In the prior art, when one person in an image is substituted into another image, the person's head or face is simply cut out along its contour and superimposed at the corresponding position in the other image, much like a photo sticker. On the one hand, because of differences in illumination, viewing angle and so on, the substituted person often clashes with the background in colour and tone. On the other hand, when one person's face is substituted onto another person's face, only the original person's expression can be retained, and that expression is usually inconsistent with the new background. The obvious disharmony between the substituted person and the background after such prior-art replacement cannot meet users' needs.
The deficiency of the prior art is therefore: no relationship is established between the persons before and after replacement, so the replaced person appears inconsistent with the background, clashes with it, and so on.
Summary of the invention
In view of the above problems, the present invention proposes an image processing method and system to solve the problem that, during image simulation and replacement, the substituted person's image does not match the background of the target image.
An embodiment of the present invention provides an image processing method, which may comprise the following steps:
simulating a model of a first person according to at least one image containing the first person;
determining a target image containing a second person;
determining feature information describing how the second person is displayed in the target image;
adjusting the display of the first person in the model of the first person according to the feature information;
in the target image, replacing the second person with the first person whose display has been adjusted.
An embodiment of the present invention further provides an image processing system, which may comprise:
a modelling module, configured to simulate a model of a first person according to at least one image containing the first person;
a target image determination module, configured to determine a target image containing a second person;
a feature information determination module, configured to determine feature information describing how the second person is displayed in the target image;
a display adjustment module, configured to adjust the display of the first person in the model of the first person according to the feature information;
a person replacement module, configured to replace, in the target image, the second person with the first person whose display has been adjusted.
The beneficial effects of the present invention are as follows:
In the technical scheme provided by the embodiments, a model of the first person is first simulated, and its display is then adjusted according to the feature information of how the second person, the object being replaced, is displayed in the target image. In this way the first person acquires the same display characteristics in the target image as the second person had, thereby overcoming the disharmony between the replacing person and the target image background.
Brief description of the drawings
Specific embodiments of the invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flow chart of the image processing method in an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the face detection algorithm in an embodiment;
Fig. 3 illustrates the extraction of Haar-like features in an embodiment;
Fig. 4 is a schematic flow chart of the integral image method in an embodiment;
Fig. 5 is a schematic diagram of the waterfall cascade detector in an embodiment;
Fig. 6 is a schematic diagram of a calibrated face in an embodiment;
Fig. 7 is a schematic diagram of the construction of local features in an embodiment;
Fig. 8 is a schematic flow chart of the method for calculating the new position of each feature point in an embodiment;
Fig. 9 is a schematic diagram of face detection results in an embodiment;
Fig. 10 is a schematic flow chart of three-dimensional face reconstruction in an embodiment;
Fig. 11 shows an original image and the corresponding three-dimensional model in an embodiment;
Fig. 12 shows examples of model expressions in an embodiment;
Fig. 13 is a schematic diagram of the feature points of a person's expression in an embodiment;
Fig. 14 is a schematic diagram of the image processing system in an embodiment.
Detailed description of the embodiments
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the accompanying drawings. The schematic embodiments and their descriptions are used to explain the present invention and do not limit it.
Fig. 1 is a schematic flow chart of the image processing method. As shown in Fig. 1, the method may comprise the following steps:
Step 101: simulate a model of a first person according to at least one image containing the first person;
Step 102: determine a target image containing a second person;
Step 103: determine feature information describing how the second person is displayed in the target image;
Step 104: adjust the display of the first person in the model of the first person according to the feature information;
Step 105: in the target image, replace the second person with the first person whose display has been adjusted.
Concretely, during image replacement the editing of a person can be completed automatically from the pictures or image sequences supplied by the user, for example as follows:
a. the user provides one or more pictures or image sequences as material, all of which contain the same person, namely the first person;
b. from the material provided by the user, the system simulates a model of this first person; the model can be adjusted for different viewing angles, illumination and so on, and can undergo different deformations;
c. the user designates another person, namely the second person, in a picture or image sequence;
d. the system detects the relevant feature information of the designated second person in every frame; this feature information includes features such as position, contour, relative viewing angle, illumination and deformation;
e. on every frame, the model of the first person is adjusted to the characteristics of the second person in that frame, and then replaces the second person.
In the embodiments, the extraction, processing, adjustment and transformation of a single image are described. Since image sequences and every frame of a video consist of single images, the processing of multiple pictures, batches of pictures, image sequences or video can easily be derived from the technical scheme provided by the embodiments. The simplest approach is to apply the replacement processing to every image of the sequence or video and then recompose the processed images into the replaced sequence or video. How to extend the single-image processing to a whole image sequence or video is readily understood, and adapted, by those skilled in the art.
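The per-frame extension described above can be sketched as follows. This is a minimal illustration only; replace_person is a hypothetical stand-in for the full detect-adjust-replace pipeline of steps a to e:

```python
def process_sequence(frames, replace_fn):
    """Apply a single-image replacement pipeline to every frame; the
    processed frames can then be recomposed into the output sequence."""
    return [replace_fn(frame) for frame in frames]

def replace_person(frame):
    # Hypothetical stand-in: a real implementation would detect the
    # second person, adjust the first person's model and composite it.
    return frame

frames = ["frame0", "frame1", "frame2"]   # stand-ins for decoded frames
processed = process_sequence(frames, replace_person)
```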
In the embodiments, a "person" may be a personified figure such as a cartoon character or a 3D character; it is not limited to human beings, nor need it exist in nature. In the following embodiments, the processing of human images is mostly used as the example because it is the most representative and the most complex case. The technical scheme provided by the embodiments can, however, also process other images: since the disclosure is a replacement scheme for image processing, any task in the image processing field that aims at replacement can adopt the scheme of the embodiments. It is not limited in principle to persons; the replacement of any pattern is possible. The human portrait is used only to teach those skilled in the art how to implement the invention concretely, not to restrict it to portraits; in practice the scheme can be applied in the appropriate context as needed.
In an embodiment, simulating a model of the first person's face according to at least one image containing the first person may comprise:
detecting the region of the first person's face;
determining, within the detected face region, the regions of the facial features and the contour of the cheek;
fitting the detected facial-feature regions and cheek contour onto an existing three-dimensional (3D) face model to obtain the simulated model of the first person's face.
Concretely, the simulated model of the first person may be a whole-body model or a facial model. The embodiments take the facial model as the example, but those skilled in the art will appreciate that, with corresponding image processing tools, the processing is not limited to the face; a model of the person's whole body, for instance, can also be obtained.
Taking the facial implementation as an example:
a. detect the position and region of the first person's face in the pictures or image sequences provided by the user;
b. within the detected face region, determine the regions of the facial features and the contour of the cheek, e.g. eyes, nose, eyebrows, mouth and ears;
c. fit the detected facial-feature regions and cheek contour onto an existing 3D face model, which can then automatically present variations of viewing angle, illumination and expression according to parameter settings.
In an embodiment, determining the feature information describing how the second person's face is displayed in the target image may comprise:
detecting the region of the second person's face;
determining, within the detected face region, the regions of the facial features and the contour of the cheek;
determining, from the detected facial-feature regions and cheek contour, the feature information describing how the second person's face is displayed in the target image.
Concretely, detecting the relevant feature information of the designated second person in every frame may proceed as follows:
a. detect the position and region of the second person's face in the picture or image sequence;
b. within the detected face region, determine the regions of the facial features and the contour of the cheek, e.g. eyes, nose, eyebrows, mouth and ears;
c. infer the second person's relevant feature information from the detected facial-feature regions and cheek contour; this feature information includes variations of viewing angle, illumination, expression and so on.
Concretely, within the detected face region a face recognition algorithm can be used to determine the facial-feature regions and cheek contour; for example, the ASM (Active Shape Model) algorithm may be used.
The ASM algorithm is described here because it is typical among face alignment algorithms, is commonly used, and is easily understood and implemented by those skilled in the art. In principle, other algorithms can also be used, as long as they determine the facial-feature regions and cheek contour; for example, AAM (Active Appearance Model) or SDM (Supervised Descent Method) algorithms may be adopted. The ASM algorithm therefore only teaches those skilled in the art how to implement the invention concretely; it does not mean that only ASM can be used, and in practice the appropriate algorithm can be chosen as needed.
In an embodiment, replacing the second person with the first person whose display has been adjusted means replacing the second person's face with the adjusted face of the first person according to the regions of the first person's face and the second person's face.
Concretely, replacing the second person with the model of the first person may proceed as follows:
a. adjust the model of the first person according to the second person's relevant feature information, so that its characteristics are similar to the second person's;
b. erase the second person's face area in every frame according to the detected facial-feature regions and cheek contour;
c. in every frame, place the adjusted model of the first person over the second person's face area.
In an embodiment, a face recognition algorithm can be used within the detected face region to determine the facial-feature regions and cheek contour.
In an embodiment, after the second person in the target image has been replaced with the adjusted first person, the method may further comprise:
adding images to the first person in the target image.
This makes it convenient to add prop images to the person after replacement, such as glasses, hats, clothes, backpacks and shoes.
In an embodiment, there are many methods for detecting the position and region of the first or second person's face in the pictures or image sequences provided by the user, as shown in Fig. 2.
Among the listed methods, those based on statistical models are currently popular and have considerable advantages; see Liang Luhong et al., "A Survey of Face Detection" (Chinese Journal of Computers, Vol. 25, No. 5, May 2002). The advantages include:
1. It does not rely on prior knowledge or a parametric model of the face, avoiding the errors caused by inaccurate or incomplete knowledge;
2. The model parameters are obtained by example-based learning, which is more reliable in the statistical sense;
3. The range of detectable patterns can be expanded by adding training examples, improving robustness.
One, the method for statistical model
The people's face detection algorithm based on ensemble machine learning being proposed by Viola and Jones about calendar year 2001 has clear superiority with respect to additive method, specifically can be referring to: the institute work < < people faces such as Ai Haizhou detect and retrieve > > (being loaded in Nsfc Projects 60273005); The multi-view face detection > > (be loaded in Journal of Computer Research and Development, 2005) of the work < < of institute such as Wu Bo based on continuous adaboost algorithm.In the recent period document also shows not yet to find to be at present better than other people face detecting method of Viola and Jones method, specifically can be referring to: the < < Comparative Testing of Face Detection Algorithms > > (Image and Signal Processing, 2010) that N Degtyarev et al. shows.The method not only accuracy of detection is high, most critical be that its arithmetic speed is greatly faster than additive method.
The key steps of the Viola-Jones face detection method (see Paul Viola and Michael Jones, "Rapid object detection using a boosted cascade of simple features", Conference on Computer Vision and Pattern Recognition, 2001) are as follows:
1. Extracting Haar-like features
Haar-like features are simple rectangular features proposed by Viola et al., named for their similarity to Haar wavelets. A Haar-like feature is defined as the weighted difference between the grey-level sums of the black and white rectangular regions of an image sub-window. Fig. 3 shows the two simplest feature operators; at characteristic structures of the face, these operators compute large values.
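The simplest two-rectangle operator can be sketched as follows. This is illustrative only; real detectors evaluate thousands of such operators at many positions and scales:

```python
import numpy as np

def two_rect_feature(window):
    """Vertical two-rectangle Haar-like feature: difference between the
    grey-level sums of the left (white) and right (black) halves."""
    half = window.shape[1] // 2
    return int(window[:, :half].sum()) - int(window[:, half:].sum())

# A window spanning a bright-to-dark boundary responds strongly;
# a flat region responds with zero.
edge_window = np.hstack([np.full((4, 2), 200), np.full((4, 2), 50)])
flat_window = np.full((4, 4), 100)
```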
2. Computing the integral image
When the number of operators is huge, the computation above becomes too expensive. Viola et al. introduced the integral image method, which greatly accelerates the computation. As shown in Fig. 4, the value at point 1 is the pixel integral of region A, and the value at point 2 is the pixel integral of regions A and B. After a single integration pass over the whole picture, the pixel integral of any region D can easily be computed as 4 + 1 - 2 - 3.
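The integral image and the four-corner identity can be sketched as follows (a minimal illustration with NumPy):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels at or above-left of (y, x)
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, top, left, bottom, right):
    """Sum over rows top..bottom and columns left..right (inclusive)
    from four corner lookups: the 4 + 1 - 2 - 3 identity."""
    total = ii[bottom, right]                  # bottom-right corner (4)
    if top > 0:
        total -= ii[top - 1, right]            # strip above (2)
    if left > 0:
        total -= ii[bottom, left - 1]          # strip to the left (3)
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]         # doubly subtracted part (1)
    return int(total)

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
```

One pass builds the table; every subsequent rectangle sum costs four lookups regardless of the rectangle's size, which is what makes evaluating huge numbers of Haar operators affordable.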
3. Training the AdaBoost model
In the discrete AdaBoost algorithm, the result of a Haar-like feature operator minus a threshold can already be regarded as a face detector; because its accuracy is low, it is called a weak classifier. In each round of the AdaBoost loop, the various weak classifiers first classify the training image library; the weak classifier with the highest accuracy is retained, the weights of the misclassified pictures are increased, and the next round begins. The weak classifiers retained in all rounds are finally combined into an accurate face detector, called a strong classifier. For the concrete computation flow, see Wu Bo et al., "Multi-view face detection based on the continuous AdaBoost algorithm" (Journal of Computer Research and Development, 2005), and Paul Viola and Michael Jones, "Rapid object detection using a boosted cascade of simple features" (Conference on Computer Vision and Pattern Recognition, 2001).
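The loop can be illustrated on toy one-dimensional data. Threshold weak classifiers stand in for Haar operator responses; this is a sketch of discrete AdaBoost, not OpenCV's training code:

```python
import math

def train_adaboost(samples, labels, thresholds, rounds):
    """Tiny discrete AdaBoost on 1-D feature values (stand-ins for Haar
    operator responses). Each weak classifier is 'value > threshold'."""
    n = len(samples)
    w = [1.0 / n] * n
    strong = []                       # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in thresholds:          # keep the most accurate weak classifier
            for polarity in (1, -1):
                preds = [polarity if x > t else -polarity for x in samples]
                err = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, polarity, preds)
        err, t, polarity, preds = best
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((alpha, t, polarity))
        # raise the weight of misclassified samples, then renormalize
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, labels, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return strong

def predict(strong, x):
    """Weighted vote of the retained weak classifiers."""
    score = sum(a * (pol if x > t else -pol) for a, t, pol in strong)
    return 1 if score > 0 else -1

# toy training set: feature values above 5 belong to the "face" class (+1)
samples = [1, 2, 3, 6, 7, 8]
labels = [-1, -1, -1, 1, 1, 1]
clf = train_adaboost(samples, labels, thresholds=[2.5, 5.0, 7.5], rounds=3)
```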
4. Building the waterfall cascade detector
The waterfall cascade detector is a detection architecture proposed to address the speed of face detection. As shown in Fig. 5, each layer of the waterfall is a strong classifier trained by the AdaBoost algorithm. The threshold of each layer is set so that most face images pass while, on that basis, as many negative examples as possible are discarded. Layers further back are more complex and have stronger classification ability.
Such a detector structure can be thought of as a series of sieves of decreasing mesh size: each step screens out some of the negative examples that leaked through the previous sieves, and the samples that pass all the sieves are finally accepted as faces. For the waterfall detector training algorithm, see Wu Bo et al., "Multi-view face detection based on the continuous AdaBoost algorithm" (Journal of Computer Research and Development, 2005).
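The early-rejection structure can be sketched as follows; the stage classifiers are reduced to hypothetical scalar scoring functions for illustration:

```python
def cascade_detect(stages, thresholds, window):
    """Waterfall cascade: the window must pass the threshold of every
    stage; most negative windows are rejected by the cheap early stages."""
    for stage, thr in zip(stages, thresholds):
        if stage(window) < thr:
            return False          # rejected: stop evaluating immediately
    return True                   # passed all sieves: accepted as a face

# three stand-in stages of increasing strictness on a scalar "score"
stages = [lambda w: w, lambda w: w, lambda w: w]
thresholds = [1, 3, 5]
```

The design point is that the average cost per window stays low: a window rejected at stage one never pays for the later, more expensive stages.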
For the implementation of the above algorithms, the face detection program flow of OpenCV (Open Source Computer Vision Library) is adopted; the program source code is available at the following address:
http://www.opencv.org.cn/index.php/%E4%BA%BA%E8%84%B8%E6%A3%80%E6%B5%8B
OpenCV is a cross-platform, open-source computer vision library that runs on Linux, Windows and Mac OS. It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general-purpose algorithms in image processing and computer vision.
OpenCV's face detection program adopts the Viola-Jones face detection method; it mainly calls the trained waterfall cascade classifier, cascade, to perform pattern matching.
cvHaarDetectObjects first converts the image to greyscale, decides from its input parameters whether to apply Canny edge processing (not used by default), and then matches. After matching, the matched blocks found are collected and noise is filtered: if the number of adjacent matches exceeds the set value (the min_neighbors parameter passed in), the block is output as a result; otherwise it is discarded.
Matching loop: the matching classifier is enlarged by a factor of scale (a value passed in) while the original image is shrunk by the same factor, and matching is repeated until the size of the matching classifier exceeds that of the original image, at which point the matching results are returned. During matching, cvRunHaarClassifierCascade is called to match, all results are stored in a CvSeq* seq (a dynamically growable element sequence), and the results are passed back to cvHaarDetectObjects.
cvRunHaarClassifierCascade as a whole matches according to the image and cascade passed in, and performs different matching procedures depending on the cascade type passed in (tree, stump (incomplete tree) or other).
The function cvRunHaarClassifierCascade performs detection on a single picture. Before calling it, cvSetImagesForHaarClassifierCascade is first used to set the integral image and an appropriate scale factor (i.e. window size). When the analysed rectangle passes every layer of the cascade classifier, a positive value is returned (the rectangle is a candidate target); otherwise 0 or a negative value is returned.
The classifier used is a Haar classifier, whose training is independent of the face detection process. Training is divided into two stages:
a. creating samples, done with the createsamples.exe utility shipped with OpenCV;
b. training the classifier and generating the xml file, done with the haartraining.exe utility shipped with OpenCV.
For the training process, see the following addresses 1 and 2:
1. http://034080116.blog.163.com/blog/static/334061912009641073715/
2. \OpenCV\apps\HaarTraining\doc\haartraining.doc
Address 1 is a blog post; the Haar training source file of address 2 can be found in the OpenCV installation directory after downloading and installing.
Meanwhile, the AdaBoost training algorithm adopted in OpenCV is gentle AdaBoost, the variant best suited to face detection. See:
1. http://www.opencv.org.cn/forum/viewtopic.php?f=1&t=4264#p15258
2. http://www.opencv.org.cn/forum/viewtopic.php?t=3880
For instance, many algorithms can determine, within a detected face region, the facial-feature regions of the face, their positional relationships and the cheek contour (e.g. eyes, nose, eyebrows, mouth and ears). The present patent preferentially uses the ASM algorithm, which is introduced below.
ASM is an algorithm based on the Point Distribution Model (PDM). In a PDM, objects with similar contours, such as the geometric shapes of faces, hands, hearts and lungs, can be represented by a shape vector formed by concatenating the coordinates of a number of key feature points (landmarks) in sequence. This patent takes the face as the example to introduce the basic principle and procedure of the algorithm. First, a calibrated face picture with 68 key feature points is given, as shown in Fig. 6. In practical application, ASM comprises two parts: training and search.
I. ASM training
ASM training comprises two parts.
1. Building the shape model: this part consists of the following steps.
1.1 Collect n training samples
To train an ASM on the key facial area, n sample pictures containing the facial area are needed. Note that the collected pictures only need to contain the facial area; the normalization of picture size need not be considered at this point.
1.2 Manually record the k key feature points in each training sample
As shown in Fig. 6, for every picture in the training set the coordinates of a number of key feature points (68 in Fig. 6) must be recorded and saved in a text file. Any programmer can write a small program to complete this step: the program loads one training sample at a time, the user clicks the key feature points in the picture in order, and at each click the program automatically records and saves the current mouse position for later use.
1.3 Build the shape vectors of the training set
The k key feature points calibrated in one picture form a shape vector:
α_i = (x_1^i, y_1^i, x_2^i, y_2^i, …, x_k^i, y_k^i), i = 1, 2, …, n    Formula (1)
where (x_j^i, y_j^i) are the coordinates of the j-th feature point on the i-th training sample and n is the number of training samples. The n training samples thus form n shape vectors.
1.4 Shape normalization
The purpose of this step is to normalize, or align, the manually calibrated face shapes above, eliminating the non-shape interference caused by extraneous factors such as differing angles, distances and postures, and thereby making the point distribution model more effective. In general, the Procrustes method is used. In simple terms, the method applies a series of suitable translations, rotations and scalings to each point distribution model so as to align it to a common point distribution model without changing its shape, turning the disordered raw data into aligned data and reducing the interference of non-shape factors.
To align the training set π = {α_1, α_2, …, α_n} with the Procrustes method, four parameters must be computed for each α_i: the rotation angle θ_i, the scale s_i, the horizontal translation t_x^i and the vertical translation t_y^i. Let M(s_i, θ_i)[α_i] denote α_i rotated by θ_i and scaled by s_i. Aligning α_i to α_k is then the process of finding θ_i and s_i that minimize E = Z_i^T W Z_i, where Z_i = α_k - M(s_i, θ_i)[α_i] - (t_x^i, t_y^i, …, t_x^i, t_y^i)^T. Here W is a diagonal weight matrix obtained as follows: let R_kl denote the distance between the k-th and l-th points in one picture, and let V_{R_kl} denote the variance of R_kl across the different images of the training set; the weight of the k-th point is then w_k = (Σ_l V_{R_kl})^{-1}. It is not hard to see that the Procrustes method is a way of solving for a transformation matrix. In ASM it is used precisely to align the point distribution models; the concrete steps are as follows:
(1) align all face models in the training set to the first face model;
(2) calculate the average face model;
(3) align all face models to the average face model;
(4) repeat (2) and (3) until convergence.
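Steps (1) to (4) can be sketched as follows, using an unweighted similarity alignment (W = I) in a complex-number formulation; this is an illustrative simplification of the weighted Procrustes fit described above:

```python
import numpy as np

def align(shape, ref):
    """Translate, scale and rotate `shape` (k x 2) so that the summed
    squared distance to `ref` is minimised (unweighted Procrustes)."""
    mu_s, mu_r = shape.mean(axis=0), ref.mean(axis=0)
    s, r = shape - mu_s, ref - mu_r
    zs = s[:, 0] + 1j * s[:, 1]               # points as complex numbers
    zr = r[:, 0] + 1j * r[:, 1]
    a = (zs.conj() @ zr) / (zs.conj() @ zs)   # a = s * e^{i*theta}
    zt = a * zs
    return np.stack([zt.real, zt.imag], axis=1) + mu_r

def align_training_set(shapes, iters=5):
    """Align all shapes to the first, then repeatedly recompute the mean
    shape and realign everything to it, as in steps (1)-(4)."""
    aligned = [align(s, shapes[0]) for s in shapes]
    for _ in range(iters):
        mean = np.mean(aligned, axis=0)
        aligned = [align(s, mean) for s in aligned]
    return np.mean(aligned, axis=0), aligned
```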
1.5 Apply PCA to the aligned shape vectors
(1) Calculate the average shape vector:
ᾱ = (1/n) Σ_{i=1}^{n} α_i    Formula (2)
(2) Calculate the covariance matrix:
S = (1/n) Σ_{i=1}^{n} (α_i - ᾱ)(α_i - ᾱ)^T    Formula (3)
(3) Calculate the eigenvalues of the covariance matrix S and sort them in descending order, obtaining λ_1 ≥ λ_2 ≥ … ≥ λ_q > 0. Select the first t eigenvectors P = (p_1, p_2, …, p_t) whose corresponding eigenvalues satisfy:
Σ_{i=1}^{t} λ_i ≥ f_v · V_T    Formula (4)
Here f_v is a proportion that determines the number of eigenvectors retained, usually taken as 95%, and V_T is the sum of all the eigenvalues, that is:
V_T = Σ_i λ_i
Any shape vector in the training set can then be approximately represented as:
α_i ≈ ᾱ + P b_s    Formula (5)
In the formula above, b_s is a vector of t parameters, where
b_s = P^T (α_i - ᾱ)
In addition, to ensure that the shapes generated by varying b_s remain similar to the shapes in the training set, b_s must be constrained:

D_m² = Σ_{i=1}^{t} (b_s(i)² / λ_i) ≤ D_max²

where D_max is usually taken as 3. If D_m > D_max during the update, b_s is constrained by rescaling each component:

b_s(k) = b_s(k) · (D_max / D_m)
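Formulas (2)–(5) and the constraint on b_s can be sketched as follows — a minimal NumPy version, assuming the shapes are already aligned and flattened into 2k-dimensional vectors; the function names and the f_v handling are illustrative:

```python
import numpy as np

def build_shape_model(aligned, f_v=0.95):
    """aligned: (n_samples, 2k) array of flattened, aligned shapes.
    Returns the mean shape (formula 2), the retained eigenvectors P
    (formula 4) and their eigenvalues."""
    mean = aligned.mean(axis=0)
    X = aligned - mean
    S = X.T @ X / len(aligned)            # covariance matrix (formula 3)
    lam, P = np.linalg.eigh(S)            # eigh returns ascending order
    lam, P = lam[::-1], P[:, ::-1]        # sort descending
    t = int(np.searchsorted(np.cumsum(lam) / lam.sum(), f_v)) + 1
    return mean, P[:, :t], lam[:t]

def clamp(b, lam, d_max=3.0):
    """Rescale b so that D_m <= D_max (the constraint above)."""
    d_m = np.sqrt(np.sum(b ** 2 / lam))
    return b * (d_max / d_m) if d_m > d_max else b
```

Any training shape is then approximately mean + P @ (P.T @ (shape − mean)), which is formula (5) with b_s from the projection.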
2. Building a local feature for each feature point
To find a new position for each feature point in every iteration of the search, a local feature is built separately for each of them. For the i-th feature point, the construction proceeds as shown in Figure 7: on a training image, m pixels are selected on each side of the i-th feature point, along the direction perpendicular to the line through its two neighbouring feature points, forming a vector of length 2m+1. Differentiating the grey values of the pixels in this vector yields a local texture g_ij. Performing the same operation on the i-th feature point of the other training images in the set yields the n local textures g_i1, g_i2, …, g_in of the i-th feature point. Their mean is then computed:
ḡ_i = (1/n) · Σ_{j=1}^{n} g_ij    (formula 6)

and their covariance:

S_i = (1/n) · Σ_{j=1}^{n} (g_ij − ḡ_i)^T (g_ij − ḡ_i)    (formula 7)
This yields the local feature of the i-th feature point; performing the same operation for all the other feature points gives the local feature of every point. The similarity between a new feature g of a feature point and its trained local feature can then be measured with the Mahalanobis distance:

f_sim = (g − ḡ_i) S_i^{-1} (g − ḡ_i)^T    (formula 8)
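Formulas (6)–(8) can be sketched as follows — sampling a 1-D grey profile, differencing it, and scoring a candidate with the Mahalanobis distance. For simplicity the profile is sampled along an image row rather than along the true normal direction, and the names are illustrative:

```python
import numpy as np

def sample_profile(image_row, center, m):
    """Take 2m+1 grey values around `center`, then differentiate and
    normalise (the derivative profile has length 2m)."""
    g = image_row[center - m : center + m + 1].astype(float)
    d = np.diff(g)                      # grey-level derivative
    s = np.abs(d).sum()
    return d / s if s else d

def local_feature(profiles):
    """Mean (formula 6) and covariance (formula 7) of training profiles."""
    G = np.asarray(profiles, float)
    g_mean = G.mean(axis=0)
    D = G - g_mean
    return g_mean, D.T @ D / len(G)

def mahalanobis(g, g_mean, S_inv):
    """Similarity measure of formula (8)."""
    d = g - g_mean
    return float(d @ S_inv @ d)
```

In practice the covariance can be singular for small training sets, so a pseudo-inverse (or a regularised inverse) is a reasonable stand-in for S_i^{-1}.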
Two. The ASM search
After the ASM model has been built from the training sample set, the search begins by applying an affine transform to the average shape to obtain an initial model:

X = M(s, θ)[ᾱ] + X_c    (formula 9)

This formula rotates the average shape counterclockwise by θ about its centre, scales it by s, and then translates it by X_c to obtain the initial model X.
This initial model is used to search the new image for the target shape, so that the feature points of the final shape found are as close as possible to the corresponding real feature points. The search is realized mainly through the affine transform and changes of the parameter b. The concrete algorithm repeats the following two steps:
2.1 Computing the new position of each feature point
First the initial ASM model is overlaid on the image, as shown in Figure 8.

For the i-th feature point of the model, l pixels (l > m) are selected on each side of it, along the direction perpendicular to the line through its two neighbouring feature points. Computing and normalizing the grey-value derivative of these 2l+1 pixels yields a local profile, which contains 2(l − m) + 1 sub-profiles. The Mahalanobis distance between each sub-profile and the trained local feature of the current point is then computed with the formula above, and the centre of the sub-profile with the smallest Mahalanobis distance becomes the new position of the current feature point, producing a displacement. Finding the new position of every feature point in this way, the displacements form a vector:

dX = (dX_1, dX_2, …, dX_k)
2.2 Updating the affine-transform parameters and b

The affine transform and its parameters are adjusted so that the current feature-point positions X come as close as possible to the corresponding new positions X + dX.
The update seeks the changes ds, dθ, dX_c of the affine-transform parameters and dα_i of the shape such that, following formula (9),

M(s(1 + ds), (θ + dθ))[α_i + dα_i] + (X_c + dX_c) = X + dX    (formula 10)

Since X itself is given by formula (9), this can be rewritten as:

M(s(1 + ds), (θ + dθ))[α_i + dα_i] = M(s, θ)[α_i] + dX + X_c − (X_c + dX_c)    (formula 11)

From formula (9) it also follows that

M^{-1}(s, θ) = M(s^{-1}, −θ)    (formula 12)

Combining formulas (11) and (12):

dα_i = M((s(1 + ds))^{-1}, −(θ + dθ))[M(s, θ)[α_i] + dX − dX_c] − α_i    (formula 13)
Meanwhile, from formula (5):

α_i + dα_i ≈ ᾱ + P(b + db)    (formula 14)

Subtracting formula (5) from formula (14) gives:

dα_i ≈ P · db    (formula 15)

That is:

db = P^{-1} dα_i    (formula 16)

and, since the columns of P are orthonormal,

db = P^T dα_i    (formula 17)

Combining formulas (17) and (13), db can be obtained. The parameter update is therefore as follows:
The affine-transform parameters and b are updated as:

X_c = X_c + w_t·dX_c,  Y_c = Y_c + w_t·dY_c,  θ = θ + w_θ·dθ,  s = s(1 + w_s·ds),  b = b + w_b·db    (formula 18)

where w_t, w_θ, w_s and w_b are weights that control the parameter changes. A new shape is then obtained from formulas (5) and (9). The search ends when the changes of the affine-transform parameters and b are no longer large, or when the specified iteration threshold is reached. Detection results are shown in Figure 9.
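The two-step search loop can be sketched as follows — a simplified version that omits the similarity-transform (pose) update of formula (18) and keeps only the shape-parameter update db = P^T·dα with the D_max clamp. Here `propose` stands in for the profile search of step 2.1, and the names are illustrative:

```python
import numpy as np

def asm_search(x_init, propose, mean, P, lam, n_iter=20, d_max=3.0, tol=1e-8):
    """Alternate between proposing new point positions (step 2.1) and
    projecting the move back onto the shape model (step 2.2)."""
    x = x_init.astype(float).copy()
    for _ in range(n_iter):
        target = propose(x)               # new position of each point
        b = P.T @ (target - mean)         # db = P^T * d(alpha) (formula 17)
        d_m = np.sqrt(np.sum(b ** 2 / lam))
        if d_m > d_max:                   # constrain to plausible shapes
            b *= d_max / d_m
        x_new = mean + P @ b              # regenerate shape (formula 5)
        if np.abs(x_new - x).max() < tol:
            break
        x = x_new
    return x
```

The projection through P^T is what keeps the result face-like: components of the proposed move outside the learned shape space are discarded.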
By fitting the detected facial-feature regions and cheek contour onto an existing 3D face model, the model can automatically present variations in viewpoint, illumination and expression according to parameter settings. The concrete implementation is as follows:

The "BJUT-3D Face Database" 3D face library is selected. After preprocessing such as resampling, smoothing and coordinate correction, the data of 100 males and 100 females — about 60,000 points and 120,000 triangles per person — are taken as the dense face sample set. Then 60 3D feature points per person are chosen by manual interaction as the sparse correspondence sample set, and the average model of these 200 people is used as the generic model.
The reconstruction is divided into the following four steps, as shown in Figure 10:
A. Detect the facial feature points with the ASM template, using the improved ASM algorithm, and automatically extract its 60 feature points;
B. Obtain the depth information of the feature points with the sparse deformation model. Using prior statistical knowledge of 3D faces, the 3D feature-point sample set is projected onto the plane and linearly combined to best approximate the 2D feature points of the photo, thereby obtaining the 3D coordinates corresponding to the photo's feature points.
C. Deform the generic face model into the specific 3D face according to the displacements of the 3D feature points. The thin-plate spline (TPS) interpolation algorithm is selected; see BOOKSTEIN F L, "Principal warps: thin-plate splines and the decomposition of deformations", IEEE Trans. on PAMI, 1989, 11(6): 567-585. The elastic deformation of the standard model yields the specific face model.
D. Add colour information to the model by texture mapping: after an affine transform of the photo texture, it is orthographically projected onto the 3D model surface.
Further, after the adjusted model of the original person has been substituted into the image containing the target person, the method may also comprise:
Adding props to the substituted original person, the props including glasses, hats, clothing, backpacks and shoes.
Concretely, after the automatic editing system described above has substituted the user-specified first person into the picture or image sequence of the second person, props can further be added to the substituted first person; the props may be glasses, hats, clothing, backpacks and so on.
Further, substituting the adjusted model of the original person into the image containing the target person may also comprise:
Adjusting the 2D feature points detected by ASM so that all the feature points used for texture mapping fall within the facial region.
Further, regarding all the feature points of the texture, the method may also comprise:
Correcting the feature points used with a skin-colour model.
For instance, the present invention uses the 2D feature points detected by ASM during model reconstruction, and all the feature points used for texture mapping must fall within the face region after a correction based on the skin-colour model, thereby avoiding the loss of side texture during texture mapping.
1) Skin-colour point decision
The detection of skin-colour information is based on the YUV and YIQ spaces, with Gamma correction added to reduce the influence of illumination on image quality; see CHEN Lu and YANG Jie, "Automatic 3D face model reconstruction".
In YUV space, U and V are two mutually orthogonal vectors in the plane. The chrominance signal (the combination of U and V) is a two-dimensional vector called the chrominance vector; each colour corresponds to a chrominance vector, whose saturation is represented by the magnitude Ch and whose hue is represented by the phase angle θ:
Ch = √(U² + V²)    (formula 19)

θ = arctan(V / U)    (formula 20)

with θ taken as the phase angle of the chrominance vector (U, V).
A pixel P of the colour image is transformed from RGB space to YUV space; if the condition θ_P ∈ [105, 150] is satisfied, P is a skin point. In YIQ space, the I component represents hue from orange to cyan: the smaller the I value, the more yellow and the less cyan it contains. Experiments with statistical analysis show that skin I values in YIQ space lie within [20, 90]. Gamma correction is applied to each of the R, G and B components; the corrected values are denoted R_gamma, G_gamma and B_gamma:
U = −0.147 × R_gamma − 0.289 × G_gamma + 0.436 × B_gamma    (formula 21)

V = 0.615 × R_gamma − 0.515 × G_gamma − 0.100 × B_gamma    (formula 22)

θ = arctan(V / U)    (formula 23)

I = 0.596 × R_gamma − 0.274 × G_gamma − 0.322 × B_gamma    (formula 24)
The pixel is judged to be a skin point according to the θ and I values so obtained: it is a skin point if

θ ∈ [105, 150] and I ∈ [20, 90]    (formula 25)
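The decision of formulas (21)–(25) can be sketched as follows. Reading θ as the phase angle of the (U, V) chrominance vector (consistent with the [105°, 150°] range) and applying Gamma correction with exponent 1/2.2 are assumptions of this sketch:

```python
import numpy as np

def is_skin(r, g, b, gamma=2.2):
    """Skin test of formula (25); r, g, b are in [0, 255]."""
    # Gamma correction of each channel (exponent 1/gamma is an assumption)
    rg, gg, bg = 255.0 * (np.array([r, g, b]) / 255.0) ** (1.0 / gamma)
    u = -0.147 * rg - 0.289 * gg + 0.436 * bg      # formula (21)
    v = 0.615 * rg - 0.515 * gg - 0.100 * bg       # formula (22)
    i = 0.596 * rg - 0.274 * gg - 0.322 * bg       # formula (24)
    theta = np.degrees(np.arctan2(v, u))           # hue angle, formula (23)
    return (105.0 <= theta <= 150.0) and (20.0 <= i <= 90.0)
```

A typical skin tone falls in both intervals, while strongly blue pixels fail both the hue-angle and the I test.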
2) Correcting the feature points

Because the ASM template is symmetric, feature points on one flank of the face may fall out of bounds when the features are extracted from a not fully frontal image, which would cause the loss of side information in the subsequent texture reconstruction. Each side feature point is therefore tested with the skin-colour decision; if it is not a skin point, it has fallen outside the face, and it is pulled in toward the face centre until all side feature points become skin points.
3) Texture mapping with the corrected feature points

Because model reconstruction must use symmetric feature points, the 3D feature points are still computed from the 2D feature points before correction, finally yielding the model. During texture mapping, however, the 3D feature points are mapped through the corrected feature points, which effectively avoids the loss of side texture.
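The pull-in of step 2) can be sketched as follows. The 10% step toward the centre and the boolean skin mask are illustrative choices of this sketch, not specified by the text:

```python
import numpy as np

def pull_to_skin(point, face_center, skin_mask, step=0.1, max_steps=50):
    """Move an out-of-face side feature point toward the face centre
    until it lands on a skin pixel. `skin_mask[y, x]` is True for skin;
    `point` and `face_center` are (x, y) coordinates."""
    p = np.asarray(point, dtype=float)
    c = np.asarray(face_center, dtype=float)
    for _ in range(max_steps):
        x, y = int(round(p[0])), int(round(p[1]))
        if skin_mask[y, x]:
            return p                    # landed on skin: stop
        p = p + step * (c - p)          # shrink toward the centre
    return c                            # fall back to the centre itself
```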
The 3D face model generated in this way shows a good sense of realism under different poses, illumination and expressions. Figure 11 shows the original input face image and the generated 3D face model. To synthesize rich facial expressions, 44 Action Units (AU) are established based on the Facial Action Coding System (FACS); each AU controls the 3D displacement of one or several facial feature points. Through different combinations of AUs, various expressions such as happiness, anger, grief and joy can be produced. TPS is used to interpolate and deform the 3D feature points to realize the expression changes; Figure 12 shows examples of simulated expressions.
From the detected facial-feature regions and cheek contour, the second person's relevant characteristic information is inferred. This characteristic information includes viewpoint, illumination, expression and so on. As shown in Figure 13, the specific embodiment is as follows.
Once the feature points of the person in the picture or image sequence have been confirmed, the predefined AU units (described above) can easily be located. These AUs describe the expression of a face in fine detail: as soon as the algorithm has determined the exact locations of the feature points and the concrete state of the AUs, the facial expression is determined.
To estimate the pose of the person's head from the facial feature points of the 2D image, the algorithm uses the POSIT method.
1. Basic idea: the algorithm has two parts:
(1) using the scaled orthographic projection (SOP), obtain the rotation matrix and translation vector from a system of linear equations;
(2) from the resulting rotation matrix and translation vector, update the scale factor, then update the original image points with the scale factor, and iterate.
2. Algorithm procedure:
(1) Let the rotation matrix be R, with rows R1^T, R2^T and R3^T, and let the translation vector be T = (T_x, T_y, T_z)^T; f is the focal length. Under perspective projection and the SOP, the scale factor is s = f / T_z.
(2) Basic perspective projection maps a 3D point a = (a_x, a_y, a_z)^T to homogeneous image coordinates m = (wx, wy, w)^T:

(wx, wy, w)^T = [ f·R1^T  f·T_x ; f·R2^T  f·T_y ; R3^T  T_z ] · (a_x, a_y, a_z, 1)^T

Because m is homogeneous, the right-hand side can be divided by T_z without being affected, giving

wx = s(R1·a + T_x),  wy = s(R2·a + T_y),  where s = f / T_z and w = R3·a / T_z + 1.

(3) The transformation is now the system of equations

X·s·R11 + Y·s·R12 + Z·s·R13 + s·T_x = wx
X·s·R21 + Y·s·R22 + Z·s·R23 + s·T_y = wy

with the initial value of w set to 1.
(4) Let K1 = (s·R11, s·R12, s·R13, s·T_x)^T and K2 = (s·R21, s·R22, s·R23, s·T_y)^T; let A be the (n+1) × 4 matrix whose rows are (X_i, Y_i, Z_i, 1), with b1 = (x_0, x_1, …, x_n)^T and b2 = (y_0, y_1, …, y_n)^T holding the current image coordinates. The original system becomes A·K1 = b1, A·K2 = b2; applying least squares, the solution is

K1 = (A^T A)^{-1} A^T b1,  K2 = (A^T A)^{-1} A^T b2

(5) With at least 4 non-coplanar 2D–3D point correspondences, K1 and K2 are obtained; dividing them by the known fixed value s gives R1, R2, T_x and T_y, after which R3 = R1 × R2, and R1, R2, R3 are normalized to unit vectors.
(6) The update then starts again from the 2D–3D point pairs: s = f / T_z is fixed, f being the known focal length and T_z also a known fixed value, which can be taken as the mean of the Z coordinates of all the 3D points. Since a differs for each 3D point, w differs too, so each original 2D point is replaced by (wx, wy)^T.
(7) Starting again from step (2), the system is solved by least squares with the original 3D points and the updated 2D points, yielding new K1 and K2; w is then updated again, and the 2D point coordinates with it.
3. Solution procedure:
(1) Provide the initial camera data: the focal length f, the image centre (c_x, c_y), and the image range, i.e. a reasonable range of 2D coordinate values.
(2) There are 8 unknowns, so at least 4 2D–3D point correspondences are needed.
(3) The first 2D–3D point pair is (0, 0)–(0, 0, 0).
(4) The stopping conditions of the algorithm are: a limit on the number of iterations, and a threshold on the change (accuracy) of each 2D point.
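One pass of steps (2)–(6) can be sketched as follows — a minimal NumPy version, assuming T_z (and hence s = f / T_z) is known, e.g. taken from the mean depth as noted above; the names are illustrative:

```python
import numpy as np

def posit_step(pts3d, img_pts, w, f, t_z):
    """Solve A*K1 = b1, A*K2 = b2 by least squares (step 4), recover the
    rotation rows and translation (step 5), and update w (step 6)."""
    n = len(pts3d)
    A = np.hstack([pts3d, np.ones((n, 1))])          # rows (X, Y, Z, 1)
    K1, *_ = np.linalg.lstsq(A, w * img_pts[:, 0], rcond=None)
    K2, *_ = np.linalg.lstsq(A, w * img_pts[:, 1], rcond=None)
    s = f / t_z                                      # known scale factor
    R1, t_x = K1[:3] / s, K1[3] / s
    R2, t_y = K2[:3] / s, K2[3] / s
    R1, R2 = R1 / np.linalg.norm(R1), R2 / np.linalg.norm(R2)
    R3 = np.cross(R1, R2)                            # third rotation row
    w_new = pts3d @ R3 / t_z + 1.0                   # w = R3.a / T_z + 1
    return np.vstack([R1, R2, R3]), (t_x, t_y), w_new
```

Iterating this step, feeding w_new back in (starting from w = 1), converges quickly when the depth variation of the object is small relative to T_z.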
In implementation, when the display of the first person's face is adjusted in the first person's model according to said characteristic information, the characteristic information can be one of the following parameters or a combination of them: the 3D pose of the second person's face, the state of the Action Units (AU) of the second person's face, the aspect ratio of the contour of the second person's face, and the brightness of the skin around the feature points of the second person's face.
Concretely, replacing the second person with the first person's model can proceed as follows:
A. Adjust the first person's model according to the second person's characteristic information, so that its relevant properties resemble the second person's. This can be divided into:
A.1 adjusting the pose of the first person's 3D model according to the estimated 3D face pose of the second person;
A.2 adjusting the expression of the first person's 3D model according to the estimated AU states of the second person;
A.3 adjusting the face shape of the first person's 3D model according to the contour of the second person's face, mainly its aspect ratio;
A.4 adjusting the brightness around the first person's corresponding feature points according to the brightness of the skin around all the feature points of the second person's face.
B. Erase the second person's face region in each frame, using the detected facial-feature regions and cheek contour;
C. Place the adjusted model of the first person onto the second person's face region in each frame.
Based on the same inventive concept, an embodiment of the present invention also provides an image processing system. Since the principle by which this system solves the problem is similar to that of the image processing method, the implementation of the system may refer to the implementation of the method, and repeated parts are not described again.
Figure 14 is a schematic diagram of the image processing system. As shown in Figure 14, it may comprise:
a modeling module 1401, for building a model of the first person from at least one image containing the first person;
a target image determination module 1402, for determining the target image containing the second person;
a characteristic information determination module 1403, for determining the characteristic information with which the second person is shown in the target image;
an adjustment display module 1404, for adjusting, in the first person's model, the display of the first person according to said characteristic information;
a person replacement module 1405, for replacing, in the target image, the second person with the first person as displayed after adjustment.
In implementation, the modeling module 1401 may comprise:
a first detecting unit, for detecting the region of the first person's face;
a first determining unit, for determining, within the detected face region, the regions of the facial features and the contour of the cheeks;
a fitting unit, for fitting the detected facial-feature regions and cheek contour onto an existing 3D face model to obtain the model of the first person's face.
In implementation, the target image determination module 1402 may comprise:
a second detecting unit, for detecting the region of the second person's face;
a second determining unit, for determining, within the detected face region, the regions of the facial features and the contour of the cheeks;
a feature unit, for determining, from the detected facial-feature regions and cheek contour, the characteristic information with which the second person's face is shown in the target image.
In implementation, the characteristic information determination module 1403 is further used for replacing the second person's face with the adjusted display of the first person's face, according to the face regions of the first and second persons.
In implementation, the adjustment display module 1404 is further used for adjusting the display of the first person's face in the first person's model according to the characteristic information given by one of the following parameters or a combination of them: the 3D pose of the second person's face, the AU states of the second person's face, the aspect ratio of the contour of the second person's face, and the brightness of the skin around the feature points of the second person's face.
In implementation, the adjustment display module 1404 is further used for determining, within the detected face region, the regions of the facial features and the contour of the cheeks using a face recognition algorithm.
In implementation, the system may further comprise:
a prop adding module, for adding props to the first person in the target image after the second person has been replaced in the target image with the first person as displayed after adjustment.
In the technical scheme provided in the embodiments of the present invention, a model of the original person is built, the characteristic information of the target person is taken into account, and the adjusted model of the original person is substituted into the image containing the target person. This solves the problems that, in image simulation and replacement, the inserted person's image does not match the shooting angle of the replaced image, and that the person's expression cannot be changed. It can be applied in many scenes: in friendship, love, parent-child and karaoke face-swapping scenarios, a person can be brought, as if in person, into the environment where another person is; a virtual person can substitute for another person to do certain things; and a disliked person in a photo can be replaced with a person one likes.
With the technical scheme provided in the embodiments of the present invention, with only a single picture the user can replace or edit the face of any person in any photo or video; after replacement, the first person's facial shooting angle changes with the shooting angle of the target person; after replacement, the first person's facial expression changes with the expression of the target person; and the first person can be brought, as if in person, into the world where the target person is.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in this computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they have learned the basic inventive concept. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Claims (14)

1. An image processing method, characterized by comprising the steps of:
building a model of a first person from at least one image containing the first person;
determining a target image containing a second person;
determining characteristic information with which the second person is shown in the target image;
adjusting, in the first person's model, the display of the first person according to said characteristic information;
replacing, in the target image, the second person with the first person as displayed after adjustment.
2. the method for claim 1, is characterized in that, at the image that comprises the first personage according at least one, while simulating the first personage's the model of face, comprising:
Detect the region of the first personage's face;
In the region of detected face, determine the region of face and the profile of cheek;
By the profile of the region of the face that detect and cheek, fit to the rear model that obtains the first personage's who simulates face on the three-dimensional 3D model of existing people's face.
3. The method of claim 1 or 2, characterized in that determining the characteristic information with which the second person's face is shown in the target image comprises:
detecting the region of the second person's face;
determining, within the detected face region, the regions of the facial features and the contour of the cheeks;
determining, from the detected facial-feature regions and cheek contour, the characteristic information with which the second person's face is shown in the target image.
4. The method of claim 3, characterized in that replacing the second person with the first person as displayed after adjustment consists of replacing the second person's face with the adjusted display of the first person's face, according to the face regions of the first and second persons.
5. The method of claim 4, characterized in that when the display of the first person's face is adjusted in the first person's model according to said characteristic information, said characteristic information is one of the following parameters or a combination of them: the 3D pose of the second person's face, the state of the Action Units (AU) of the second person's face, the aspect ratio of the contour of the second person's face, and the brightness of the skin around the feature points of the second person's face.
6. The method of claim 3, characterized in that the regions of the facial features and the contour of the cheeks are determined within the detected face region using a face recognition algorithm.
7. The method of any one of claims 1 to 6, characterized by further comprising, after the second person has been replaced in the target image with the first person as displayed after adjustment:
adding props to the first person in the target image.
8. An image processing system, characterized by comprising:
a modeling module, for building a model of a first person from at least one image containing the first person;
a target image determination module, for determining a target image containing a second person;
a characteristic information determination module, for determining characteristic information with which the second person is shown in the target image;
an adjustment display module, for adjusting, in the first person's model, the display of the first person according to said characteristic information;
a person replacement module, for replacing, in the target image, the second person with the first person as displayed after adjustment.
9. The system of claim 8, characterized in that said modeling module comprises:
a first detecting unit, for detecting the region of the first person's face;
a first determining unit, for determining, within the detected face region, the regions of the facial features and the contour of the cheeks;
a fitting unit, for fitting the detected facial-feature regions and cheek contour onto an existing 3D face model to obtain the model of the first person's face.
10. The system of claim 8 or 9, characterized in that said target image determination module comprises:
a second detecting unit, for detecting the region of the second person's face;
a second determining unit, for determining, within the detected face region, the regions of the facial features and the contour of the cheeks;
a feature unit, for determining, from the detected facial-feature regions and cheek contour, the characteristic information with which the second person's face is shown in the target image.
11. The system of claim 10, characterized in that said characteristic information determination module is further used for replacing the second person's face with the adjusted display of the first person's face, according to the face regions of the first and second persons.
12. The system of claim 11, characterized in that said adjustment display module is further used for adjusting the display of the first person's face in the first person's model according to the characteristic information given by one of the following parameters or a combination of them: the 3D pose of the second person's face, the AU states of the second person's face, the aspect ratio of the contour of the second person's face, and the brightness of the skin around the feature points of the second person's face.
13. The system of claim 10, characterized in that said adjustment display module is further used for determining, within the detected face region, the regions of the facial features and the contour of the cheeks using a face recognition algorithm.
14. The system of any one of claims 8 to 13, characterized by further comprising:
a prop adding module, for adding props to the first person in the target image after the second person has been replaced in the target image with the first person as displayed after adjustment.
CN201410352939.4A 2014-07-23 2014-07-23 Picture processing method and system Pending CN104123749A (en)

CN108256497A (en) * 2018-02-01 2018-07-06 北京中税网控股股份有限公司 A kind of method of video image processing and device
CN108334821A (en) * 2018-01-18 2018-07-27 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN108510581A (en) * 2018-03-30 2018-09-07 盎锐(上海)信息科技有限公司 Data capture method and model generating means
CN108537880A (en) * 2018-03-30 2018-09-14 盎锐(上海)信息科技有限公司 Data capture method with image comparing function and model generating means
CN108682030A (en) * 2018-05-21 2018-10-19 北京微播视界科技有限公司 Face replacement method, device and computer equipment
CN108711180A (en) * 2018-05-02 2018-10-26 北京市商汤科技开发有限公司 Makeups/generation and makeups of special efficacy of changing face program file packet/special efficacy of changing face generation method and device
CN108897856A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN109271931A (en) * 2018-09-14 2019-01-25 辽宁奇辉电子系统工程有限公司 It is a kind of that gesture real-time identifying system is pointed sword at based on edge analysis
CN109977739A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110096925A (en) * 2018-01-30 2019-08-06 普天信息技术有限公司 Enhancement Method, acquisition methods and the device of Facial Expression Image
CN111274602A (en) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 Image characteristic information replacement method, device, equipment and medium
CN111886609A (en) * 2018-03-13 2020-11-03 丰田研究所股份有限公司 System and method for reducing data storage in machine learning
CN112990123A (en) * 2021-04-26 2021-06-18 北京世纪好未来教育科技有限公司 Image processing method, apparatus, computer device and medium
CN113766339A (en) * 2021-09-07 2021-12-07 网易(杭州)网络有限公司 Bullet screen display method and device
CN113962845A (en) * 2021-08-25 2022-01-21 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644455B (en) * 2017-10-12 2022-02-22 北京旷视科技有限公司 Face image synthesis method and device
CN109448069B (en) * 2018-10-30 2023-07-18 维沃移动通信有限公司 Template generation method and mobile terminal
CN109461117B (en) * 2018-10-30 2023-11-24 维沃移动通信有限公司 Image processing method and mobile terminal
CN109410298B (en) * 2018-11-02 2023-11-17 北京恒信彩虹科技有限公司 Virtual model manufacturing method and expression changing method
CN109788312B (en) * 2019-01-28 2022-10-21 北京易捷胜科技有限公司 Method for replacing people in video
CN109685044B (en) * 2019-02-18 2023-06-06 上海德拓信息技术股份有限公司 Face recognition retrieval method based on k-means clustering algorithm
CN112101073B (en) * 2019-06-18 2023-12-19 北京陌陌信息技术有限公司 Face image processing method, device, equipment and computer storage medium
CN113569790B (en) * 2019-07-30 2022-07-29 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic device and storage medium
CN110415341A (en) * 2019-08-01 2019-11-05 腾讯科技(深圳)有限公司 A kind of generation method of three-dimensional face model, device, electronic equipment and medium
CN110503599B (en) * 2019-08-16 2022-12-13 郑州阿帕斯科技有限公司 Image processing method and device
CN112949360A (en) * 2019-12-11 2021-06-11 广州市久邦数码科技有限公司 Video face changing method and device
CN111508050B (en) * 2020-04-16 2022-05-13 北京世纪好未来教育科技有限公司 Image processing method and device, electronic equipment and computer storage medium
CN112330529A (en) * 2020-11-03 2021-02-05 上海镱可思多媒体科技有限公司 Dlid-based face aging method, system and terminal
CN113963425B (en) * 2021-12-22 2022-03-25 北京的卢深视科技有限公司 Testing method and device of human face living body detection system and storage medium
CN113963127B (en) * 2021-12-22 2022-03-15 深圳爱莫科技有限公司 Simulation engine-based model automatic generation method and processing equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030007700A1 (en) * 2001-07-03 2003-01-09 Koninklijke Philips Electronics N.V. Method and apparatus for interleaving a user image in an original image sequence
US8831379B2 (en) * 2008-04-04 2014-09-09 Microsoft Corporation Cartoon personalization
CN101930618B (en) * 2010-08-20 2012-05-30 无锡幻影科技有限公司 Method for producing individual two-dimensional anime
CN104123749A (en) * 2014-07-23 2014-10-29 邢小月 Picture processing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIN Yuan et al., "Face Replacement Based on Realistic 3D Head Reconstruction", Journal of Tsinghua University *
GAO Yan et al., "Facial Features Replacement Algorithm under Constraint Conditions", Journal of Image and Graphics *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016011834A1 (en) * 2014-07-23 2016-01-28 邢小月 Image processing method and system
CN104378620A (en) * 2014-11-24 2015-02-25 联想(北京)有限公司 Image processing method and electronic device
CN104376589A (en) * 2014-12-04 2015-02-25 青岛华通国有资本运营(集团)有限责任公司 Method for replacing movie and TV play figures
CN105118082A (en) * 2015-07-30 2015-12-02 科大讯飞股份有限公司 Personalized video generation method and system
CN105118082B (en) * 2015-07-30 2019-05-28 科大讯飞股份有限公司 Individualized video generation method and system
CN105069745A (en) * 2015-08-14 2015-11-18 济南中景电子科技有限公司 face-changing system based on common image sensor and enhanced augmented reality technology and method
CN105488489A (en) * 2015-12-17 2016-04-13 掌赢信息科技(上海)有限公司 Short video message transmitting method, electronic device and system
CN105577517A (en) * 2015-12-17 2016-05-11 掌赢信息科技(上海)有限公司 Sending method of short video message and electronic device
US10810742B2 (en) 2016-05-09 2020-10-20 Tencent Technology (Shenzhen) Company Limited Dynamic and static image processing method and system
CN106022221A (en) * 2016-05-09 2016-10-12 腾讯科技(深圳)有限公司 Image processing method and processing system
CN106127139A (en) * 2016-06-21 2016-11-16 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN106127139B (en) * 2016-06-21 2019-06-25 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
CN106054641A (en) * 2016-06-29 2016-10-26 Tcl集团股份有限公司 Method, apparatus, and system for turning on intelligent household appliance control interface
CN107945102A (en) * 2017-10-23 2018-04-20 深圳市朗形网络科技有限公司 A kind of picture synthetic method and device
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109977739A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108334821B (en) * 2018-01-18 2020-12-18 联想(北京)有限公司 Image processing method and electronic equipment
CN108334821A (en) * 2018-01-18 2018-07-27 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN110096925A (en) * 2018-01-30 2019-08-06 普天信息技术有限公司 Enhancement Method, acquisition methods and the device of Facial Expression Image
CN110096925B (en) * 2018-01-30 2021-05-14 普天信息技术有限公司 Enhancement method, acquisition method and device of facial expression image
CN108256497A (en) * 2018-02-01 2018-07-06 北京中税网控股股份有限公司 A kind of method of video image processing and device
CN111886609B (en) * 2018-03-13 2021-06-04 丰田研究所股份有限公司 System and method for reducing data storage in machine learning
CN111886609A (en) * 2018-03-13 2020-11-03 丰田研究所股份有限公司 System and method for reducing data storage in machine learning
CN108537880A (en) * 2018-03-30 2018-09-14 盎锐(上海)信息科技有限公司 Data capture method with image comparing function and model generating means
CN108510581A (en) * 2018-03-30 2018-09-07 盎锐(上海)信息科技有限公司 Data capture method and model generating means
CN108711180A (en) * 2018-05-02 2018-10-26 北京市商汤科技开发有限公司 Makeups/generation and makeups of special efficacy of changing face program file packet/special efficacy of changing face generation method and device
CN108711180B (en) * 2018-05-02 2021-08-06 北京市商汤科技开发有限公司 Method and device for generating makeup and/or face-changing special effect program file package and method and device for generating makeup and/or face-changing special effect
CN108682030A (en) * 2018-05-21 2018-10-19 北京微播视界科技有限公司 Face replacement method, device and computer equipment
CN108897856A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN109271931A (en) * 2018-09-14 2019-01-25 辽宁奇辉电子系统工程有限公司 It is a kind of that gesture real-time identifying system is pointed sword at based on edge analysis
CN111274602A (en) * 2020-01-15 2020-06-12 腾讯科技(深圳)有限公司 Image characteristic information replacement method, device, equipment and medium
CN112990123A (en) * 2021-04-26 2021-06-18 北京世纪好未来教育科技有限公司 Image processing method, apparatus, computer device and medium
CN113962845A (en) * 2021-08-25 2022-01-21 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113962845B (en) * 2021-08-25 2023-08-29 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium
CN113766339A (en) * 2021-09-07 2021-12-07 网易(杭州)网络有限公司 Bullet screen display method and device
CN113766339B (en) * 2021-09-07 2023-03-14 网易(杭州)网络有限公司 Bullet screen display method and device

Also Published As

Publication number Publication date
WO2016011834A1 (en) 2016-01-28

Similar Documents

Publication Publication Date Title
CN104123749A (en) Picture processing method and system
JP7200139B2 (en) Virtual face makeup removal, fast face detection and landmark tracking
US10169905B2 (en) Systems and methods for animating models from audio data
US10559111B2 (en) Systems and methods for generating computer ready animation models of a human head from captured data images
CN110111418A (en) Create the method, apparatus and electronic equipment of facial model
US20190012578A1 (en) 3D Spatial Transformer Network
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
KR20160095735A (en) Method and system for complex and multiplex emotion recognition of user face
Angelopoulou et al. Evaluation of different chrominance models in the detection and reconstruction of faces and hands using the growing neural gas network
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques
CN105405143B (en) Gesture segmentation method and system based on global expectation-maximization algorithm
Nguyen et al. Vision-based global localization of points of gaze in sport climbing
Moreno et al. Marker-less feature and gesture detection for an interactive mixed reality avatar
Patil et al. Human skin detection using image fusion
Pawar et al. Machine learning approach for object recognition
JP6814374B2 (en) Detection method, detection program and detection device
Abbas et al. An Improved Statistical Model of Appearance under Partial Occlusion.
Büyüksaraç Sign language recognition by image analysis
Hu et al. A 3D gesture recognition framework based on hierarchical visual attention and perceptual organization models
Sugimura et al. Using Motion Blur to Recognize Hand Gestures in Low-light Scenes
Alfaqheri et al. 3D Visual Interaction for Cultural Heritage Sector
Wang et al. Research Article Hand Motion and Posture Recognition in a Network of Calibrated Cameras
Jorstad Measuring deformations and illumination changes in images with applications to face recognition
Gingir Hand gesture recognition system
Saini Real time spatio temporal segmentation of RGBD cloud and applications

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141029