CN107330371A - Method, apparatus, and storage device for acquiring the facial expression of a 3D face model - Google Patents
- Publication number
- CN107330371A CN107330371A CN201710407215.9A CN201710407215A CN107330371A CN 107330371 A CN107330371 A CN 107330371A CN 201710407215 A CN201710407215 A CN 201710407215A CN 107330371 A CN107330371 A CN 107330371A
- Authority
- CN
- China
- Prior art keywords
- human face
- rgbd
- characteristic point
- facial models
- models
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a method, an apparatus, and a storage device for acquiring the facial expression of a 3D face model. The method includes: obtaining a 3D face model; obtaining the motion trajectories of facial feature points from an RGBD image sequence; and mapping the motion trajectories of the facial feature points onto the 3D face model, so that the motion trajectories of the facial feature points of the 3D face model match the acquired motion trajectories. The apparatus includes a processor that executes the above method. The storage device stores program data that can be executed to implement the above method. The expression information acquired by the invention is more comprehensive and accurate, so that the 3D face model can reproduce human facial expressions more vividly.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, an apparatus, and a storage device for acquiring the facial expression of a 3D face model.
Background technology
An RGBD image, obtained by shooting a scene with an RGBD camera, is an image in which depth information is fused with color information: every picture element has both an associated color value and an associated depth value, the depth value representing the distance from the image sensor to the object surface in the scene.
Expression is the outward presentation of mood and emotion. According to the basic-emotions theoretical model, expressions can be divided into six classes: anger, disgust, fear, happiness, sadness, and surprise. Facial expression recognition has long been of great research significance and has enormous market value in many fields such as human-computer interaction, public security, and intelligent video entertainment.
In researching and practicing the prior art, the inventors found that facial expression information obtained from RGB images alone cannot map the true expression of a face comprehensively and accurately.
Summary of the invention
The present invention provides a method, an apparatus, and a storage device for acquiring the facial expression of a 3D face model, which can solve the problem in the prior art that the true expression of a face cannot be mapped comprehensively and accurately.
To solve the above technical problem, one technical solution adopted by the present invention is to provide a method for acquiring the facial expression of a 3D face model, comprising the following steps: obtaining a 3D face model; obtaining the motion trajectories of facial feature points from an RGBD image sequence; and mapping the motion trajectories of the facial feature points onto the 3D face model, so that the motion trajectories of the facial feature points of the 3D face model match the acquired motion trajectories.
To solve the above technical problem, another technical solution adopted by the present invention is to provide an apparatus for acquiring the facial expression of a 3D face model. The apparatus includes a processor configured to obtain a 3D face model, obtain the motion trajectories of facial feature points from an RGBD image sequence, and map the motion trajectories of the facial feature points onto the 3D face model, so that the motion trajectories of the facial feature points of the 3D face model match the acquired motion trajectories.
To solve the above technical problem, yet another technical solution adopted by the present invention is to provide a storage device. The storage device stores program data that can be executed to implement the above method.
The beneficial effects of the invention are as follows. Unlike the prior art, the present invention obtains the motion trajectories of facial feature points from an RGBD image sequence and maps them onto a 3D face model, so that the expression of the 3D face model matches the acquired facial expression, thereby realizing expression mapping. Because an RGBD image carries both pixel information and depth information, the facial feature points and their motion trajectories are reflected comprehensively in both, so the acquired expression information is more comprehensive and accurate, and the 3D face model can reproduce human facial expressions more vividly.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the method for acquiring the facial expression of a 3D face model provided by the present invention;
Fig. 2 is a schematic flowchart of another embodiment of the method for acquiring the facial expression of a 3D face model provided by the present invention;
Fig. 3 is a schematic flowchart of one embodiment of step S21 in Fig. 2;
Fig. 4 is a schematic flowchart of another embodiment of step S21 in Fig. 2;
Fig. 5 is a schematic flowchart of an embodiment of step S22 in Fig. 2;
Fig. 6 is a schematic flowchart of an embodiment of step S222 in Fig. 5;
Fig. 7 is a schematic flowchart of another embodiment of step S222 in Fig. 5;
Fig. 8 is a schematic flowchart of an embodiment of step S223 in Fig. 5;
Fig. 9 is a schematic diagram of one application scenario of the method for acquiring the facial expression of a 3D face model provided by the present invention;
Fig. 10 is a schematic diagram of another application scenario of the method for acquiring the facial expression of a 3D face model provided by the present invention;
Fig. 11 is a schematic structural diagram of the apparatus for acquiring the facial expression of a 3D face model provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an embodiment of the method for acquiring the facial expression of a 3D face model provided by the present invention. The method shown in Fig. 1 comprises the following steps:
S11: obtain a 3D face model.
Specifically, the 3D face model may be a model of the complete head, i.e., a 3D head model; it may also be a model of the face only, which can be displayed within a certain angular range; or it may be a 3D cartoon face model produced by animation design techniques.
The 3D face model may be retrieved from a pre-stored 3D face model database; for example, before the method is carried out, a variety of 3D face models are saved in the database for selection. The model may also be built on the spot, for example by performing three-dimensional modeling locally with an RGBD camera.
S12: obtain the motion trajectories of facial feature points from an RGBD image sequence.
Facial feature points can be obtained by collecting facial elements, which include one or more of the eyebrows, eyes, nose, mouth, cheeks, chin, and so on. When the expression of the face changes, at least some of the facial feature points move accordingly; for example, when a person laughs, the corners of the mouth rise and the mouth opens.
The motion trajectories of the facial feature points may be pre-stored or acquired in real time. For example, the trajectories may be stored in advance in an expression library together with the corresponding expressions and called from that library when producing animation; whereas during a live video broadcast, the trajectories can be acquired in real time, for instance by using an RGBD capture device to shoot an RGBD image sequence of successive frames of the face and capturing the trajectories from it.
S13: map the motion trajectories of the facial feature points onto the 3D face model, so that the motion trajectories of the facial feature points of the 3D face model match the acquired trajectories.
In step S13, the motion trajectories of the facial feature points are mapped onto the 3D face model so that the trajectories of the model's feature points match the acquired ones, and the dynamic expression of the 3D face model therefore matches the acquired change of facial expression.
The 3D face model may have its facial feature points marked in advance. For example, suppose the acquired expression is a smile: the corners of the mouth rise, and the acquired trajectories show the left and right mouth corners moving toward the upper left and the upper right, respectively. Mapping these trajectories onto the 3D face model means finding the feature points marked on the model, in this case the mouth corners, and applying the acquired trajectories to them, i.e., moving the model's left and right mouth corners toward the upper left and upper right respectively, so that the acquired smile is reproduced on the face of the 3D model.
In addition, the matching relationship between the 3D face model and the expression may be stored for later use, for example in a terminal memory, in the cloud via a network channel, or in the storage module of host-computer software.
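As a concrete illustration of this mapping step, the following is a minimal sketch, not the patent's implementation: the landmark names, the trajectory data, and the cumulative-sum scheme are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch of step S13: apply acquired per-frame displacements of
# facial feature points to landmarks pre-marked on a 3D face model.
def map_trajectories(model_landmarks, trajectories):
    """model_landmarks: {name: (3,) landmark position on the model}
    trajectories: {name: (T, 3) per-frame displacement vectors}
    returns: {name: (T, 3) animated landmark positions}"""
    animated = {}
    for name, base in model_landmarks.items():
        steps = trajectories.get(name, np.zeros((1, 3)))
        animated[name] = base + np.cumsum(steps, axis=0)  # accumulate motion
    return animated

landmarks = {"mouth_corner_l": np.array([-1.0, 0.0, 0.0]),
             "mouth_corner_r": np.array([1.0, 0.0, 0.0])}
# a smile: left corner drifts up-left, right corner up-right, over 3 frames
traj = {"mouth_corner_l": np.tile([-0.1, 0.1, 0.0], (3, 1)),
        "mouth_corner_r": np.tile([0.1, 0.1, 0.0], (3, 1))}
out = map_trajectories(landmarks, traj)
print(out["mouth_corner_l"][-1])  # final left mouth-corner position
```

The same dictionary of animated positions could then drive the model's vertices frame by frame.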
Unlike the prior art, the present invention obtains the motion trajectories of facial feature points from an RGBD image sequence and maps them onto a 3D face model, so that the expression of the model matches the acquired facial expression, thereby realizing expression mapping. Because an RGBD image carries both pixel information and depth information, the facial feature points and their motion trajectories are reflected comprehensively in both, so the acquired expression information is more comprehensive and accurate, and the 3D face model can reproduce human facial expressions more vividly.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another embodiment of the method for acquiring the facial expression of a 3D face model provided by the present invention.
The method of this embodiment comprises the following steps:
S21: obtain a 3D face model.
Specifically, in one embodiment, as shown in Fig. 3, which is a schematic flowchart of one embodiment of step S21 in Fig. 2, step S21 includes:
S211: acquire RGBD images of the face from multiple angles.
In step S211, the head of the person can be shot from multiple angles with an RGBD camera, so that the RGBD images provide information about the face from multiple angles.
S212: build a 3D face model from the multi-angle RGBD images.
After the multi-angle information of the face is obtained from the RGBD images in step S211, a 3D face model can be built from that information in step S212.
S213: mark the facial feature points on the 3D face model.
The extraction and marking of facial feature points in step S213 can follow the same method as steps S223 and S224 below. Of course, in some other embodiments, other methods commonly used in the art may also be employed to extract and mark the facial feature points.
In another embodiment, as shown in Fig. 4, which is a schematic flowchart of another embodiment of step S21 in Fig. 2, step S21 includes:
S211': acquire RGBD images of the face from multiple angles.
S212': build a 3D face model from the multi-angle RGBD images.
Steps S211' and S212' are identical to steps S211 and S212 of the previous embodiment.
S213': extract the head information of the 3D face model.
Specifically, the head information may include the contour and dimensions of the head.
S214': call the data of a full-head 3D human model and, combined with the head information, generate a complete 3D head model from the 3D face model.
In step S214', the human-head data in a full-head 3D model database can be called and combined with the head information extracted in step S213', such as the contour and dimensions, to compose a complete 3D head model.
Specifically, step S214' may fuse the built 3D face model into a complete 3D head model using methods such as Poisson surface reconstruction or adaptive surface elastic matching.
Take the Poisson surface reconstruction algorithm as an example. It belongs to the implicit-function methods and converts the surface reconstruction of an oriented point set into a spatial Poisson problem. The input data S is the set of samples s ∈ S of the point cloud; each sample comprises a point s.p and an inward normal vector s.N. Based on the integral relationship between the sample points and the indicator function of the model, and assuming that the point set lies on or near the surface of the unknown model, the algorithm obtains an approximate representation of the model by estimating its indicator function, then extracts an isosurface, and finally reconstructs a seamless triangle mesh approximating that surface.
An indicator function χ is created to represent the surface model: χ(p) = 1 if p is inside the model and χ(p) = 0 if p is outside, so that the surface is the boundary between the two regions.
The Poisson equation is constructed as follows:
(1) estimate a vector field V from the oriented point cloud, so that V approximates the gradient of the (smoothed) indicator function;
(2) solve for the function χ whose gradient best approximates the vector field, i.e., minimize ||∇χ − V||;
(3) applying the divergence operator turns this into a Poisson problem: Δχ = ∇ · V.
The algorithm is implemented as follows.
(1) Build the octree space. An octree O is defined from the positions of the sample points, and a function F_o is attached to each node o ∈ O. The octree and the attached functions are chosen to satisfy the following conditions:
1) the vector field V can be expressed accurately and efficiently as a linear sum of the functions F_o;
2) the matrix representation of the Poisson equation over the F_o can be solved efficiently;
3) the indicator function, represented as a sum of the F_o, can be estimated accurately and efficiently near the model surface.
(2) Compute the vector field. Using the space of functions defined above, the vector field is computed as V(q) = Σ_{s∈S} α_{o,s} F_o(q) s.N, i.e., the sample normals are splatted into the octree function space.
(3) Solve the Poisson equation for the indicator function. The function χ whose gradient is closest to the vector field is found by solving Δχ = ∇ · V: a coefficient matrix for the Poisson equation is constructed and the resulting linear system is solved for χ.
(4) Extract the isosurface. To obtain the reconstructed surface, it is first necessary to select an isovalue and then extract the corresponding isosurface by evaluating the indicator function. The isovalue is chosen so that the extracted isosurface closely approximates the positions of the input samples: the indicator function is first evaluated at the sample positions, and the isosurface is then extracted at their average value, γ = (1/|S|) Σ_{s∈S} χ(s.p), i.e., the surface is {q : χ(q) = γ}.
On the basis of the sparse point-cloud reconstruction, this embodiment uses the Poisson surface reconstruction algorithm to rebuild the scene surface, namely the surface of the face or head. The method copes well with the problems of 3D point clouds generated from images, such as heavy noise, sparsity, and uneven distribution, and meets the needs of applications that do not place excessive demands on model accuracy.
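The Poisson step can be illustrated in one dimension. The sketch below is an assumed toy setup (grid size, sample positions, and Dirichlet boundary conditions are all illustrative): two oriented samples are splatted into a vector field V, the discrete Poisson equation Δχ = ∇·V is solved, and the "inside" interval is extracted at the isovalue averaged over the samples.

```python
import numpy as np

# 1-D toy Poisson reconstruction: indicator chi of the interval [3, 7].
n = 11
V = np.zeros(n)
V[3], V[7] = 1.0, -1.0            # inward-normal field of samples at x=3, x=7

divV = np.zeros(n)
divV[1:-1] = (V[2:] - V[:-2]) / 2.0   # central-difference divergence

# Dirichlet Poisson solve on interior nodes:
# chi[i-1] - 2*chi[i] + chi[i+1] = divV[i]
L = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
chi = np.zeros(n)
chi[1:-1] = np.linalg.solve(L, divV[1:-1])

iso = chi[[3, 7]].mean()              # isovalue = mean chi at the samples
inside = np.where(chi >= iso - 1e-9)[0]
print(inside)                          # grid indices reconstructed as inside
```

The recovered χ ramps from 0 up to 1 between the two samples and back down, and thresholding at the sample-averaged isovalue recovers the interval, mirroring steps (2)-(4) above on a line instead of an octree.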
S215': mark the facial feature points on the 3D head model.
The extraction and marking of facial feature points in step S215' can follow the same method as steps S223 and S224 below. Of course, in some other embodiments, other methods commonly used in the art may also be employed to extract and mark the facial feature points.
In yet another embodiment, the 3D face model is a cartoon face model produced by animation design techniques. It can be made on the spot or called from a database.
S22: obtain the motion trajectories of facial feature points from an RGBD image sequence.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of an embodiment of step S22 in Fig. 2. Step S22 shown in Fig. 5 includes:
S221: acquire an RGBD image sequence containing a face, each RGBD image comprising an RGB image and a depth image whose pixels are in one-to-one correspondence.
An RGBD image sequence refers to the RGBD images of successive frames within a specific time and can be acquired by an RGBD capture device.
S222: perform face detection on an RGBD image to extract the RGBD image of the face region.
The RGBD image in step S222 can be any RGBD image in the sequence of step S221, for example the first one; the face region need only be extracted from one of them.
In one embodiment, as shown in Fig. 6, which is a schematic flowchart of an embodiment of step S222 in Fig. 5, S222 comprises the following steps:
S2221: obtain the RGB image in the RGBD image.
In this embodiment, the RGBD image of step S2221 is the first RGBD image of the sequence in step S221. Since an RGBD image contains a depth image and an RGB image, the RGB image can be obtained from it directly.
S2222: perform face detection on the RGB image to extract the RGB image of the face region.
In step S2222, extracting the RGB image of the face region amounts to detecting the face region in a 2D image. There are many methods for this; for example, face detection can be performed with the AdaBoost algorithm combined with skin-color features. By way of example, such a method includes the following.
First, initial localization with the AdaBoost algorithm:
(1) Prepare samples. Simple Haar features are extracted from 20x20 images by computing the difference between the pixel sums of their white and black regions; the values of these features at the same position differ between face and non-face images.
(2) Train the classifier. Thousands of well-cropped face images and tens of thousands of background images are used as training samples, typically normalized to 20x20. The AdaBoost algorithm selects thousands of effective Haar features to compose the face detector.
(3) Detect. The classifier is scaled proportionally and the search window is moved across the image, examining each position for a possible face.
After the preliminary detection result is obtained, its average face area S_a is recorded for later comparison.
Second, obtain a preliminary face region by identifying skin-color points:
By transforming the RGB space, chromatic representations in various color spaces can be obtained; for example, the distribution characteristics of facial skin color can be analyzed in the YUV and YIQ spaces.
In the YUV space, U and V are two mutually orthogonal vectors in a plane. Each color corresponds to a chrominance vector whose saturation is represented by the modulus Ch = sqrt(U^2 + V^2) and whose hue is represented by the phase angle θ = arctan(V/U).
Analyzing the θ values of face-image samples yields the hue range of facial skin color; segmenting the image with this feature filters out background whose hue differs markedly from skin color.
In addition, the color saturation information in the YIQ space can be used to strengthen the segmentation. Rotating the UV plane of the YUV color space counterclockwise by 33 degrees gives the IQ plane of the YIQ space, and the range of I values is determined experimentally. Using the phase angle θ of the YUV space and the I component of the YIQ space together as features determines the chrominance distribution of facial skin color: a pixel p of the color image is transformed from RGB to the YUV and YIQ spaces, and if it satisfies both the θ range and the I range, p is a skin-color point; a preliminary candidate face region is then determined from the skin-color points.
The coarse localization of the second step detects most faces in the image quickly, but it also falsely detects some non-face background areas. The fine detection of the third step therefore applies a series of geometric constraints to the initial candidate regions, eliminating noise that does not meet the conditions of a face and filtering out the true faces.
Third, smooth the initial candidate regions with the closing operation of mathematical morphology to remove internal holes, obtaining multiple connected regions.
Next, judge the geometric features of each candidate connected region:
(1) Compare the area S_i of each connected region with the face area detected by the AdaBoost algorithm, and reject as non-face any region whose area differs greatly (for example S_i < 0.3 S_a or S_i > 2 S_a); this excludes many obvious background regions and noise.
(2) Discriminate with the geometric proportions of the face. For example, find the outermost bounding rectangle of each connected region and compute its aspect ratio; for a face the aspect ratio is generally around 1, so regions with an aspect ratio greater than 2 are judged non-face, which excludes regions such as arms and legs from the candidates.
(3) Remove interference regions by the area occupancy of the outermost bounding rectangle: compute the ratio of the number of pixels of the connected region to the total number of pixels of its bounding rectangle, and if the ratio is below a certain threshold, judge the region non-face and exclude it from the candidates.
(4) Remove interference regions by the perimeter occupancy: compute the ratio of the perimeter of the region's contour to the perimeter of its outermost bounding rectangle, and if the ratio is below a certain threshold, judge the region non-face and exclude it from the candidates.
Finally, the initial localization by the AdaBoost algorithm and the fine detection combining color and geometric features together form the structure of face detection, from which the pixel information of the face region is extracted.
S2223: extract from the depth image of the RGBD image the region corresponding to the RGB image of the face region, thereby obtaining the RGBD image of the face region.
In step S2223, since the pixels of the RGB image and the depth image are in one-to-one correspondence, the region of the depth image corresponding to the RGB image of the face region is the depth image of the face region, and the combination of the RGB image of the face region and the depth image of the face region is the RGBD image of the face region.
In another embodiment, as shown in Fig. 7, which is a schematic flowchart of another embodiment of step S222 in Fig. 5, S222 comprises the following steps:
S2221': obtain the depth image in the RGBD image.
In this embodiment, the RGBD image of step S2221' is the first RGBD image of the sequence in step S221. Since the RGBD image contains a depth image and an RGB image, the depth image can be obtained from it directly.
S2222': perform face detection on the depth image to extract the depth image of the face region.
In step S2222', extracting the depth image of the face region amounts to detecting the face region in the 3D data. There are many methods for this; the method used in this embodiment is as follows.
Let D(i, j) be a hole pixel in the current depth map. Its depth is filled from its neighborhood as D_{i,j} = Σ w_{p,q} D_{p,q} / Σ w_{p,q}, where D_{p,q} is the depth of point D(p, q) in the depth map and w_{p,q} is the contribution of point D(p, q) to the depth of point D(i, j): the larger the weight, the larger the contribution, and vice versa. The weight w_{p,q} is determined from the color image, decreasing as the difference between C_{i,j} and C_{p,q} grows, where C_{i,j} and C_{p,q} are the pixel values of points D(i, j) and D(p, q) in the color image.
The above algorithm recovers the missing parts of the depth information, and a region-growing algorithm is applied to the depth map to exclude the interference caused by other parts of the scene to the face region.
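A minimal sketch of the color-guided hole filling described above follows. The Gaussian form of the weight and the sigma value are assumptions made for illustration; the patent only specifies that the weight grows with color similarity.

```python
import numpy as np

# Joint-bilateral-style fill for one missing depth pixel at (i, j):
# neighbors with similar color contribute more (0 marks a depth hole).
def fill_depth(depth, color, i, j, radius=1, sigma=10.0):
    num, den = 0.0, 0.0
    for p in range(i - radius, i + radius + 1):
        for q in range(j - radius, j + radius + 1):
            if (p, q) == (i, j) or depth[p, q] == 0:
                continue
            w = np.exp(-((color[i, j] - color[p, q]) ** 2) / (2 * sigma ** 2))
            num += w * depth[p, q]
            den += w
    return num / den if den > 0 else 0.0

depth = np.array([[50., 50., 50.],
                  [50.,  0., 50.],
                  [50., 50., 50.]])        # center pixel is a hole
color = np.full((3, 3), 128.0)             # uniform color patch
print(fill_depth(depth, color, 1, 1))      # filled depth value
```

On a uniform color patch every neighbor gets equal weight, so the hole is filled with the neighborhood mean; across a color edge, the weights would suppress neighbors from the other side.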
By analyzing the depth information of the region, it can be judged whether the currently detected face is a real face rather than a photograph of a face. The concrete method is as follows. Let d(x, y) be the depth of pixel p(x, y) in the image. The average depth of the face region can be expressed as Avg_d = (1/N) Σ d(x, y), over p(x, y) ∈ Area_face, and the variance of the depth information as Var_d = (1/N) Σ (d(x, y) − Avg_d)^2, over p(x, y) ∈ Area_face, where N is the number of pixels in the face region.
Finally, judging the value of Var_d confirms whether flat photographs have been filtered out by the depth information, since a photograph is nearly planar and yields a very small depth variance.
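The photo-versus-real-face judgement above can be sketched directly from the two statistics; the threshold value used here is an assumed illustration, not a figure from the patent.

```python
import numpy as np

# A printed photo is nearly planar, so Var_d over the face region is tiny.
def is_real_face(face_depth, var_threshold=25.0):
    d = np.asarray(face_depth, dtype=float)
    avg_d = d.mean()                     # Avg_d over the face region
    var_d = ((d - avg_d) ** 2).mean()    # Var_d of the depth values
    return var_d > var_threshold

flat_photo = np.full((4, 4), 600.0)      # constant depth: a flat print
real_face = 600.0 + np.array([[0,  5,  5, 0],
                              [5, 30, 30, 5],
                              [5, 30, 30, 5],
                              [0,  5,  5, 0]], dtype=float)  # nose relief
print(is_real_face(flat_photo), is_real_face(real_face))
```

In practice the threshold would be calibrated on the depth sensor's noise level and typical face relief.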
The Haar features are extended to represent facial variation in the depth information accurately. The extended Haar features make full use of the variation of the face region in depth and can provide invariant features for training the face detection classifier. Using the extended Haar features as the weak classifiers of AdaBoost, the final strong classifier is trained as follows.
Given the training set (x_1, y_1), ..., (x_m, y_m), where x_i ∈ X and y_i ∈ Y = {−1, 1}, X represents the feature space and Y the label space: −1 represents a non-face region and 1 a face region. For the i = 1, 2, ..., m samples, initialize a uniform weight distribution D_1(i) = 1/m, where D_t(i) denotes the weight of the i-th sample in the t-th iteration, and carry out t = 1, 2, ..., T iterations.
Let H be the set of all weak classifiers. According to the weights D_t, find the weak classifier h_t ∈ H that minimizes the weighted error of the current iteration, ε_t = Σ_i D_t(i) [h_t(x_i) ≠ y_i].
If |0.5 − ε_t| ≤ β, where β is a preset threshold, stop the iteration and output the strong classifier. Otherwise select a real number α_t ∈ R, computed as α_t = (1/2) ln((1 − ε_t)/ε_t).
Update all the weights according to D_{t+1}(i) = D_t(i) exp(−α_t y_i h_t(x_i)) / Z_t, where Z_t is a normalization factor, and continue iterating until stopping.
Finally output the strong classifier combination, i.e., H(x) = sign(Σ_t α_t h_t(x)).
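The training loop above can be sketched with simple threshold stumps standing in for the weak classifiers; the 1-D toy data and the choice of stumps are assumptions for illustration (the patent's weak classifiers are extended Haar features over depth).

```python
import numpy as np

# Toy AdaBoost following the alpha and weight-update formulas above.
def train_adaboost(x, y, T=5):
    m = len(x)
    D = np.ones(m) / m                    # D_1(i) = 1/m
    clfs = []
    for _ in range(T):
        best = None
        for thr in x:                     # candidate stump thresholds
            for sign in (1, -1):
                pred = np.where(x > thr, sign, -sign)
                err = D[pred != y].sum()  # weighted error eps_t
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)       # alpha_t
        pred = np.where(x > thr, sign, -sign)
        D = D * np.exp(-alpha * y * pred)           # re-weight samples
        D /= D.sum()                                # Z_t normalization
        clfs.append((alpha, thr, sign))
    return clfs

def predict(clfs, x):
    s = sum(a * np.where(x > t, sgn, -sgn) for a, t, sgn in clfs)
    return np.sign(s)                               # H(x) = sign(sum)

x = np.array([1., 2., 3., 6., 7., 8.])
y = np.array([-1, -1, -1, 1, 1, 1])
clfs = train_adaboost(x, y)
print(predict(clfs, x))   # should recover the labels
```

The early-stopping test on |0.5 − ε_t| is omitted here for brevity; on this separable toy data a single stump already classifies perfectly.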
S2223': extract from the RGB image of the RGBD image the region corresponding to the depth image of the face region, thereby obtaining the RGBD image of the face region.
In step S2223', since the pixels of the RGB image and the depth image are in one-to-one correspondence, the region of the RGB image corresponding to the depth image of the face region is the RGB image of the face region, and the combination of the RGB image of the face region and the depth image of the face region is the RGBD image of the face region.
S223: obtain the facial feature points from the RGBD image of the face region.
Facial feature points can be obtained by collecting facial elements, which include one or more of the eyebrows, eyes, nose, mouth, cheeks, chin, and so on.
In one embodiment, as shown in Fig. 8, which is a schematic flowchart of an embodiment of step S223 in Fig. 5, step S223 includes:
S2231: identify the RGB facial feature points from the RGB image of the face region in the RGBD image of the face region.
The RGBD image of the face region comprises the RGB image of the face region and the depth image of the face region; this embodiment extracts the facial feature points from the RGB image of the face region. The RGB image of the face region in step S2231 can be obtained from the RGBD image of the face region, or the RGB image of the face region obtained in step S222 can be used directly. There are many methods for acquiring RGB facial feature points, for example:
(1) Methods based on gray-level information
Gray-value projection: the geometric projection method exploits the difference in gray level between the facial features and the other parts of the face. The gray values are first summed along different directions, distinctive change points are found from the variation of the sums, and the change points found in different directions are then combined by statistical projection methods to locate the facial feature points.
Valley analysis: a region of an image that is darker than its surroundings is called a valley; by comparing brightness, the key facial parts that lie in relatively dark regions, such as the eyes, eyebrows, nose, and mouth, can be located.
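The gray-value projection idea can be sketched on a synthetic face patch; the 9x9 "face" with dark eye and mouth rows below is an assumed example, not real data.

```python
import numpy as np

# Sum gray levels along rows: minima of the projection fall on dark
# horizontal features such as the eyes and the mouth.
face = np.full((9, 9), 200.0)
face[2, 2:4] = 20.0    # left "eye"
face[2, 6:8] = 20.0    # right "eye"
face[6, 3:7] = 40.0    # "mouth"

row_proj = face.sum(axis=1)        # horizontal integral projection
eye_row = int(np.argmin(row_proj))
mouth_row = int(np.argmin(row_proj[eye_row + 1:])) + eye_row + 1
print(eye_row, mouth_row)          # rows of the darkest features
```

A column projection located the same way would give the horizontal positions, and intersecting the two sets of change points yields candidate feature-point locations.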
(2) Methods based on prior rules
Methods that summarize empirical rules from the general characteristics of facial features are called prior-rule-based methods. Face images have some obvious basic characteristics: for example, the proportions of the face follow the "three sections, five eyes" rule; the brightness at facial features such as the eyes, nose and mouth is generally lower than that of their surrounding areas; and the left-right symmetry of the face and the triangular distribution of the two eyes and the nose are all important bases for face recognition.
Mosaic method: the image is divided by a grid of equally sized cells, and the gray level of each cell is the average gray level of the pixels within it. According to certain rules it is determined which cells are likely to belong to the face; the side length of those candidate cells is then halved and the mosaic is rebuilt, repeating the first step to find the positions of features such as the eyes, nose and mouth. The face region obtained in these two passes is then binarized, and finally edge detection is used to locate each feature precisely.
Binarization positioning: the histogram of the image is computed and a suitable threshold is selected to binarize the image. Geometric information of the binarized regions, such as relative position, area and shape, can then be used to determine the positions of the pupils, after which the other facial feature points are located through their positional and geometric relations to the eyes. Obviously, this method is strongly affected by illumination, image quality and similar factors.
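A minimal sketch of binarization positioning, under the assumption of a single dark blob standing in for a pupil (synthetic data, not from the patent): threshold the image, then take the centroid of the dark region as the pupil position.

```python
import numpy as np

def binarize(gray, t):
    """1 where the pixel is darker than threshold t (candidate pupil)."""
    return (gray < t).astype(np.uint8)

def dark_region_centroid(binary):
    """Centroid (row, col) of the dark pixels of a binary mask."""
    ys, xs = np.nonzero(binary)
    return float(ys.mean()), float(xs.mean())

img = np.full((9, 9), 220, dtype=np.uint8)
img[3:6, 3:6] = 10  # dark 3x3 blob standing in for a pupil
cy, cx = dark_region_centroid(binarize(img, 100))
```

In practice the threshold would come from the histogram (e.g. Otsu's method) rather than a fixed value, and the region's area and shape would be checked before accepting it as a pupil.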
Generalized symmetry method: in a face image, the eyes, eyebrows, nose, etc. obviously possess strong point symmetry. Point symmetry is described by defining a generalized symmetry transform, and the facial feature points are located by examining the strong symmetry of the eye centers together with the geometric distribution of the facial features.
(3) method based on geometry
Shake algorithms:This method is recycled one and matched using a closed curve being made up of several control points
Energy function be used as evaluation criterion, just navigate to face characteristic when continuous iteration finally make it that energy function is minimized
Point.
Deformable template method: an eye is modeled as a geometric figure consisting of two parabolas (the upper and lower eyelids) and a circle (the iris); its parameters are adjusted by optimization to reach the best match. The mouth, chin, etc. can be modeled with similar geometric figures.
Algorithms based on point distribution models: ASM and AAM are both algorithms based on the point distribution model (Point Distribution Model, PDM). In a PDM, the contour of objects of a similar, particular category, such as the shape of a face or a human hand, is represented by concatenating the coordinates of several key feature points into an original shape vector. After all shape vectors in the training set are aligned, PCA modeling is applied to them, and the retained principal components form the final shape model, whose parameters reflect the main modes of shape variation. An ASM search first matches a local texture model to obtain a better position for each feature point; after alignment by a similarity transformation, the shape is constrained by the statistical shape model, and then the local texture matching is performed again, forming an iterative process that continues until the shape model matches the input.
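The shape-model half of a PDM can be sketched directly: align the training shape vectors (assumed already aligned here), take the mean, keep the top principal components, and generate any plausible shape as mean plus a weighted sum of modes. Synthetic data, illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
# 4 (x, y) landmarks concatenated into an 8-dim shape vector
base = np.array([0., 0., 1., 0., 1., 1., 0., 1.])
# 30 aligned training shapes = base shape plus small variation
shapes = base + rng.normal(scale=0.05, size=(30, 8))

mean = shapes.mean(axis=0)
# principal components of the centered shapes via SVD
_, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
modes = vt[:2]                     # the two main modes of variation

b = np.array([0.1, -0.05])         # shape parameters
new_shape = mean + modes.T @ b     # any shape = mean + weighted modes
```

An ASM search would alternate between moving each landmark by local texture matching and projecting the result back into this model (clamping b to plausible ranges), which is the constraint step described above.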
AAM, by contrast, employs a statistical constraint that fuses both shape and texture, the so-called statistical appearance model. The AAM search borrows the idea of analysis by synthesis: by adjusting and optimizing the model parameters, the model is made to approach the input pattern ever more closely. The parameter update abandons the local texture search procedure of ASM and instead uses a single linear prediction model to predict and update the model parameters from the difference between the current model and the input pattern.
(4) Methods based on statistical models
Skin-color and lip-color segmentation: this method uses statistical techniques to build color models of the facial features. During localization the candidate regions are traversed, and candidate facial feature points are filtered according to how well the color of each measured point matches the model. The method mainly studies the color information of the facial features, constructs color models for them, and performs feature point localization using the color information of the facial skin.
Eigenfaces: using the Karhunen-Loeve transform, this method maps the high-dimensional vector representing a face into the subspace spanned by several eigenvectors (also called eigenfaces). The region under test is first reconstructed with the principal component model, the distance between the reconstructed image and the original is computed, and when this distance is below a certain threshold the region is identified as a candidate.
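The reconstruction-distance test can be sketched as follows. The "faces" here are synthetic low-rank vectors, not real images; the point is only that samples from the face subspace reconstruct with a small residual while an arbitrary vector does not.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "face space": 50 training vectors drawn from a 5-dim subspace of R^64
basis = rng.normal(size=(5, 64))
faces = rng.normal(size=(50, 5)) @ basis

mean = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
k = 5
eigenfaces = vt[:k]  # top-k principal directions ("eigenfaces")

def reconstruction_distance(x):
    """Project onto the eigenface subspace and measure the residual."""
    coeffs = eigenfaces @ (x - mean)
    recon = mean + eigenfaces.T @ coeffs
    return float(np.linalg.norm(x - recon))

probe = rng.normal(size=64) * 10.0  # arbitrary non-face vector
# a training sample reconstructs almost exactly; the probe does not
assert reconstruction_distance(faces[0]) < 1e-6 < reconstruction_distance(probe)
```

Thresholding this distance is exactly the candidate test described above: small residual means the region "looks like" something the principal component model can express.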
Support vector machines: the support vector machine (Support Vector Machines, SVM), proposed by Vapnik et al., is based on statistical learning theory and the structural risk minimization principle and is used for classification and regression problems. When SVM methods are applied to facial feature detection, a square scanning window is used and the eyebrows and eyes together are treated as a single object to be located, which reduces the interference of the eyebrows with the localization.
Template matching: template matching is one of the earlier methods for facial feature point localization and also one of the most widely used, because it is intuitive and easy to construct. Feature candidate regions are produced on the basis of a preprocessed image, after which the features are located by a template subject to geometric constraints (a correlate of the face template). Sako et al. segment the face and lip regions using a color histogram method, construct an eye template in advance from the structural and gray-level characteristics of the eyes, and search with this template to determine the matching locations of the eyes; the process slides a pre-built facial feature template point by point across the candidate window to perform feature matching and localization.
Artificial neural networks: artificial neural networks (ANN) are widely used in pattern recognition and are particularly suited to nonlinear problems. A complete face image is strongly affected by individual differences, eye state, subject pose and other variations, whereas the neighborhoods of sub-feature points (including the left and right eye corners and the upper and lower orbital vertices) are relatively stable. Exploiting this, Waite et al. take the gray-level image near each sub-feature point as input and build a separate neural network for each. At detection time, each neural network first performs an exhaustive search over the target region, and the search results are then screened and combined with prior knowledge. The training procedure of this algorithm is comparatively simple, and it exhibits strong robustness.
Bayesian probability networks: Kin and Cipolla model the face shape with a three-layer probabilistic network. They adopt a bottom-up search strategy, using a Gaussian filter together with an edge detection algorithm to find candidate points for the eyebrows, nose and mouth (corresponding to layer 1 of the network); according to the relative relations between neighboring candidate points, these are paired into horizontal or vertical unions (layer 2 of the network), and further grouped into the four regions of the face — upper, lower, left and right (layer 3 of the network) — thereby weeding out false-alarm points.
(5) Methods based on wavelets
Elastic graph matching: this method is another important algorithm for facial key-feature localization. The attributes of the key feature points of the face and the positional relations between them are described by an attributed graph: the vertices of the graph model the local texture of the key feature points (through Gabor features), while the edges reflect positional relations such as the distances between feature points. For a newly input image, the feature points are located by displacement estimation based on phase prediction combined with graph matching. Through deformation of the attributed graph, the Gabor local features at the vertices are matched on the one hand and the global geometric features on the other.
GWN (Gabor wavelet network): Krüger et al. introduced Gabor wavelets into image processing, replacing the basis functions of an RBF neural network with a family of Gabor wavelet functions derived from a common mother wavelet; through training, a target image can be decomposed into a linear combination of several wavelet functions. During GWN training, the weights and the parameters of the wavelet functions themselves are optimized simultaneously, which allows a GWN model to analyze and reconstruct the target object with a smaller number of wavelet functions. Feris uses a two-layer GWN tree model to locate facial features, with the two layers characterizing the full face and the individual facial features respectively. During training, a GWN tree model is built for every training image, the position of each facial feature is calibrated, and the result is stored in a face database. During the actual search, the full face is first compared against the database to find the model closest to the target image; the calibrated positions of that model are then used as search starting points, and within a small range the exact positions of the facial features are obtained by comparing the corresponding facial feature information with that model.
S2232: extract, from the depth image of the face region in the RGBD image of the face region, the feature points corresponding to the RGB facial feature points, thereby obtaining the facial feature points.
In step S2232, because the RGB image and the depth image correspond pixel for pixel, the feature points in the depth image that correspond to the RGB facial feature points are the depth-map facial feature points; the RGB facial feature points combined with the depth-map facial feature points are the facial feature points of the RGBD image.
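Because the two images are registered pixel for pixel, lifting an RGB feature point into the RGBD image is just an index lookup into the depth map at the same coordinates. A minimal sketch (synthetic depth values; names are illustrative, not from the patent):

```python
import numpy as np

def lift_landmarks(landmarks_xy, depth):
    """Pair each RGB landmark (x, y) with the depth value at the same pixel.

    Valid because the RGB and depth images correspond one-to-one.
    """
    points = []
    for x, y in landmarks_xy:
        points.append((x, y, int(depth[y, x])))  # depth indexed row-first
    return points

depth = np.zeros((4, 4), dtype=np.uint16)
depth[2, 1] = 850  # e.g. depth in millimetres at pixel (x=1, y=2)
assert lift_landmarks([(1, 2)], depth) == [(1, 2, 850)]
```

The (x, y, depth) triples are the RGBD feature points used by the subsequent marking and tracking steps.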
In another embodiment of step S223, the facial feature points can be obtained from the RGB image as in the above embodiment and then mapped into the corresponding RGBD image to obtain the facial feature points of the RGBD image; alternatively, the facial feature points can be detected directly from the depth image.
In the present embodiment, the facial feature points are obtained with the SUSAN algorithm. Specifically, the key facial feature points are located as follows: nine feature points of the face are chosen, whose distribution is invariant to rotation; they are the two eyeball center points, the four eye corner points, the midpoint between the two nostrils, and the two mouth corner points. On this basis, the facial organ features relevant to recognition and other extended feature point positions can easily be obtained for use in further recognition algorithms.
When extracting facial features, traditional edge detection operators cannot reliably extract features such as the contours of the eyes or mouth, because they cannot effectively organize local edge information. From the perspective of the human visual system, however, fully exploiting both edge and corner features when locating the key facial feature points can greatly improve reliability.
The SUSAN (Smallest Univalue Segment Assimilating Nucleus) operator is therefore selected to extract the edge and corner features of local regions. By its nature, the SUSAN operator can be used both to detect edges and to extract corners. Compared with edge detection operators such as Sobel and Canny, it is thus better suited to extracting features of the eyes and mouth, and in particular to automatically locating the eye corner and mouth corner points.
The SUSAN operator is introduced as follows:
A circular template traverses the image. If the difference between the gray value of any other pixel in the template and the gray value of the template center pixel (the nucleus) is less than a certain threshold, that pixel is considered to have the same (or a similar) gray value as the nucleus. The region formed by the pixels satisfying this condition is called the univalue segment assimilating nucleus (USAN). Associating each pixel of the image with a local region of similar gray values is the basis of the SUSAN criterion.
In the actual detection, the circular template is scanned across the whole image, the gray value of each pixel in the template is compared with that of the center pixel, and a given threshold determines whether the pixel belongs to the USAN region, as in the following formula:
c(r, r0) = 1 if |I(r) - I(r0)| <= t, and c(r, r0) = 0 otherwise
In the formula, c(r, r0) is the discriminant function for pixels in the template that belong to the USAN region, I(r0) is the gray value of the template center pixel (the nucleus), I(r) is the gray value of any other pixel in the template, and t is the gray-difference threshold. It affects the number of detected corners: reducing t captures finer changes in the image and thus yields a relatively larger number of detections. The threshold t must be determined according to factors such as the contrast and noise of the image. The USAN area of a point in the image can then be expressed by the following formula:
n(r0) = sum of c(r, r0) over all pixels r in the template
and the initial corner response is R(r0) = g - n(r0) if n(r0) < g, and R(r0) = 0 otherwise.
where g is the geometric threshold, which influences the shape of the detected corners: the smaller g is, the sharper the detected corners. (1) Determination of the thresholds t and g. The threshold g determines the maximum USAN area of an output corner: any pixel in the image whose USAN region is smaller than g is judged to be a corner. The size of g determines not only how many corners can be extracted from the image but also, as noted above, the sharpness of the detected corners, so once the required corner quality (sharpness) is determined, g can take a fixed value. The threshold t represents the minimum contrast of detectable corners and also the maximum tolerance for ignorable noise. It essentially dictates how many features can be extracted: the smaller t is, the lower the contrast from which features can be extracted, and the more features are extracted. Different values of t should therefore be used for images with different contrast and noise conditions. A prominent advantage of the SUSAN operator is its insensitivity to local noise and its strong noise resistance. This is because it does not depend on any earlier image segmentation result and avoids gradient computation; moreover, the USAN region is accumulated from the pixels in the template whose gray values are similar to the center pixel, which is in effect an integration process and suppresses Gaussian noise well.
The last stage of SUSAN two-dimensional feature detection is to find the local maxima of the initial corner response, that is, non-maximum suppression, to obtain the final corner positions. As the name suggests, non-maximum suppression keeps the value of the center pixel within a local range if its initial response is the maximum of that region, and deletes it otherwise, thereby yielding the local maxima.
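The USAN count and initial corner response described above can be sketched as follows. This is a toy version (square template instead of circular, no non-maximum suppression step, synthetic image), intended only to show that small USAN areas single out corners:

```python
import numpy as np

def susan_response(gray, radius=1, t=27, g=None):
    """Toy SUSAN corner response on a 2-D gray image.

    For each nucleus pixel, count the template pixels whose gray value is
    within t of the nucleus (the USAN area n); respond with g - n when n < g.
    """
    h, w = gray.shape
    if g is None:
        g = ((2 * radius + 1) ** 2 - 1) / 2  # half the template size
    resp = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = gray[y - radius:y + radius + 1,
                         x - radius:x + radius + 1].astype(int)
            # USAN area: similar pixels in the template, nucleus excluded
            n = (np.abs(patch - int(gray[y, x])) <= t).sum() - 1
            if n < g:
                resp[y, x] = g - n
    return resp

# a bright square on a dark background: only its four corners respond
img = np.zeros((12, 12), dtype=np.uint8)
img[4:8, 4:8] = 255
r = susan_response(img)
```

On edge pixels the USAN covers about half the template (n >= g, no response), in flat areas nearly all of it; only at corners does the USAN shrink below g, which is exactly the criterion stated above.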
(1) Automatic localization of the eyeballs and eye corners. In the automatic localization of the eyeballs and eye corners, the face is first coarsely located by normalized template matching, determining the approximate region of the face within the whole face image. Common eye localization algorithms locate the eyes according to the valley-point property of the eye region; here a method combining valley-point search, directional projection and the symmetry of the eyeballs is used, and the correlation between the two eyes is exploited to improve the accuracy of eye localization. Integral projections of the gradient map are computed for the upper-left and upper-right parts of the face region, and the projection histograms are normalized. The approximate position of the eyes in the y direction is first determined from the valley points of the horizontal projection; x is then varied over a larger range to find the valley points in this region, and the detected points are taken as the two eyeball center points.
With the two eyeball positions obtained, the eye regions are processed: a threshold is first determined by adaptive binarization to obtain a binary image of the eye region automatically, and then, combined with the SUSAN operator, an edge- and corner-detection algorithm accurately locates the inner and outer eye corner points within the eye region.
On the basis of the eye-region edge image obtained by the above algorithm, corner extraction on the boundary curves in the image yields the accurate positions of the inner and outer corners of both eyes.
(2) Automatic localization of the nose-region feature point. The key feature point of the nasal area is defined as the midpoint of the line connecting the centers of the two nostrils, i.e., the nose base center point. The position of this point is relatively stable, and it can also serve as a reference point when normalizing the face image during preprocessing.
Based on the two eyeball positions already found, the positions of the two nostrils are determined by the regional gray-level integral projection method.
First, a strip region with the width between the two pupils is intercepted and its integral projection in the Y direction is computed; the projection curve is then analyzed. Searching downward along the projection curve from the Y coordinate of the eyeball positions, the position of the first valley point is found (by choosing an appropriate peak-valley delta value, spurious glitches possibly produced by factors such as facial scars or glasses are ignored), and this valley point is taken as the Y-coordinate reference of the nostril position. In the second step, a region whose width spans the X coordinates of the two eyeballs and whose height extends delta pixels below the nostril Y coordinate (for example, delta = [nostril Y coordinate - eyeball Y coordinate] x 0.06) is selected, and its integral projection in the X direction is computed. The projection curve is then analyzed: taking the X coordinate of the midpoint between the two pupils as the center, the curve is searched to the left and to the right, and the first valley point found on each side gives the X coordinate of the center of the left and right nostril respectively. The midpoint of the two nostrils is computed as the nose base center point, giving its accurate position, and the nasal area is delimited.
(3) Automatic localization of the mouth corners. Differences in facial expression may cause large variations in mouth shape, and the mouth region is easily disturbed by factors such as beards, so the accuracy of mouth feature point extraction has a considerable influence on recognition. Because the positions of the mouth corner points are relatively insensitive to expression and similar variations, and corner positions are more accurate, the two mouth corner points are taken as the important feature points of the mouth region.
After the eye-region and nose-region feature points have been determined, the regional gray-level integral projection method is first used to find the first valley point of the Y-direction projection curve below the nostrils (again, spurious glitches caused by beards, moles, etc. must be eliminated with an appropriate peak-valley delta value), which is taken as the Y coordinate of the mouth. The mouth region is then selected and processed with the SUSAN operator to obtain a mouth-edge map; finally, corner extraction yields the exact positions of the two mouth corners.
S224: mark the positions of the facial feature points on the RGBD image of the face region.
The marking of the facial feature points in step S224 enables the subsequent step S15 to track these facial feature points.
S225: track the motion trajectories of the facial feature points in the RGBD image sequence, and record the trajectory information.
In step S225, the marked facial feature points are tracked in the RGBD image sequence to obtain and record their motion trajectories, so that the facial expression can be captured, and its changes followed, from the motion trajectories of feature points such as the mouth corners and eye corners.
Specifically, the tracking in step S225 may use a facial feature tracking method based on the KLT (Kanade-Lucas-Tomasi) algorithm, a Gabor-wavelet facial feature tracking method, an AAM facial feature point tracking method, or the like.
For example, in the Lucas-Kanade feature point tracking algorithm, given two adjacent frames I1 and I2 and a feature point p = (x, y)^T in I1, assume the optical flow is d = (u, v)^T; the corresponding feature point in I2 is then p + d. The purpose of the Lucas-Kanade algorithm is to search for the displacement that minimizes the matching error of the neighborhood around the corresponding points, i.e., the following cost function is defined on a local neighborhood N(p) of p:
e(d) = sum over r in N(p) of w(r) [I1(r) - I2(r + d)]^2
where w(r) is a weight function.
Setting the derivative of the cost function to zero yields the solution d = G^(-1) H, where G = sum of w(r) grad I1(r) grad I1(r)^T and H = sum of w(r) [I1(r) - I2(r)] grad I1(r), both accumulated over the neighborhood N(p).
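A single Lucas-Kanade step can be sketched directly from this solution. The following is a textbook formulation with uniform weights w(r) = 1 on a synthetic moving Gaussian blob, an illustration rather than the patent's implementation:

```python
import numpy as np

def lucas_kanade_step(I1, I2, p, win=2):
    """One Lucas-Kanade step on a (2*win+1)^2 window around p = (x, y).

    Solves d = G^{-1} H: G accumulates outer products of the spatial
    gradient of I1, H the frame difference I1 - I2 weighted by that
    gradient (uniform weights w(r) = 1).
    """
    x, y = p
    Iy, Ix = np.gradient(I1.astype(float))  # d/dy first, then d/dx
    G = np.zeros((2, 2))
    H = np.zeros(2)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            grad = np.array([Ix[y + dy, x + dx], Iy[y + dy, x + dx]])
            diff = float(I1[y + dy, x + dx]) - float(I2[y + dy, x + dx])
            G += np.outer(grad, grad)
            H += diff * grad
    return np.linalg.solve(G, H)  # estimated displacement d = (u, v)

# synthetic check: a Gaussian blob that moved 0.5 px to the right
xx, yy = np.meshgrid(np.arange(15.0), np.arange(15.0))
blob = lambda cx: 100.0 * np.exp(-((xx - cx) ** 2 + (yy - 7.0) ** 2) / 8.0)
u, v = lucas_kanade_step(blob(7.0), blob(7.5), p=(7, 7))
```

Because the cost is linearized, a full tracker iterates this step (and usually works coarse-to-fine over an image pyramid, as in KLT) rather than trusting one solve.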
S23: map the motion trajectories of the facial feature points onto the 3D facial model, so that the motion trajectories of the facial feature points of the 3D facial model match the obtained motion trajectories of the facial feature points.
In step S23, the motion trajectories of the facial feature points are mapped onto the 3D facial model so that the trajectories of its feature points match those obtained, and the dynamic expression of the 3D facial model therefore matches the captured expression changes of the face.
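A minimal sketch of the mapping idea: feature-point displacements recorded in step S225 drive the corresponding tagged vertices of the model. The correspondence table and trajectory format here are assumptions for illustration; the patent does not prescribe a particular data layout, and a production system would deform the surrounding mesh (e.g. via blendshapes) rather than move single vertices.

```python
import numpy as np

def apply_trajectory(model_vertices, landmark_to_vertex, trajectory):
    """Drive tagged model vertices with recorded 2-D feature-point motion.

    model_vertices: (N, 3) array of vertex positions.
    landmark_to_vertex: feature-point id -> vertex index (assumed mapping).
    trajectory: feature-point id -> (dx, dy) displacement for this frame.
    Depth (z) is left unchanged in this sketch.
    """
    out = model_vertices.copy()
    for lm, vidx in landmark_to_vertex.items():
        dx, dy = trajectory[lm]
        out[vidx, 0] += dx
        out[vidx, 1] += dy
    return out

verts = np.zeros((3, 3))
moved = apply_trajectory(verts,
                         {"mouth_corner_l": 2},
                         {"mouth_corner_l": (0.1, -0.2)})
```

Replaying the recorded trajectories frame by frame through such a mapping is what makes the model's feature-point motion match the captured motion.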
For example, as shown in Fig. 9, Fig. 9 is a schematic diagram of an application scenario of the method for acquiring the facial expression of a 3D facial model provided by the present invention. In one embodiment, A and B are in a video session. A's face is captured by the RGBD camera of A's mobile phone and a first 3D facial model is built; meanwhile, A's expression is mapped onto the first 3D facial model A', which is displayed on B's display interface so that B sees A'. Likewise, B's face is captured by the RGBD camera of B's mobile phone and a second 3D facial model B' is built; meanwhile, B's expression is mapped onto the second 3D facial model, which is displayed on A's display interface so that A sees B'. Of course, in other embodiments the interaction can involve many people, each of whom can see everyone else's 3D facial model on his or her display interface.
In another embodiment, as shown in Fig. 10, Fig. 10 is a schematic diagram of another application scenario of the method for acquiring the facial expression of a 3D facial model provided by the present invention. A's face is captured by the RGBD camera of A's mobile phone and a first 3D facial model is built; meanwhile, A's expression is mapped onto the first 3D facial model, which is displayed on the display interfaces of both A and B, so that both A and B see A's 3D facial model. Likewise, B's face is captured by the RGBD camera of B's mobile phone and a second 3D facial model is built; meanwhile, B's expression is mapped onto the second 3D facial model, which is displayed simultaneously on the display interfaces of A and B, so that both A and B see B's 3D facial model. Of course, in other embodiments the interaction can involve many people, and everyone can see everyone's 3D facial models on his or her display interface.
In addition, in yet another embodiment, the motion trajectory information of the facial feature points stored in a database is called and mapped onto the 3D facial model of an animated character, and that 3D facial model is displayed on the display interfaces of A, B and others.
The motion trajectories of the facial feature points are mapped onto the 3D facial model so that the motion trajectories of its facial feature points match those obtained.
S24: save the 3D facial model and the motion trajectories of its facial feature points.
Referring to Fig. 11, Fig. 11 is a schematic structural diagram of a device for acquiring the facial expression of a 3D facial model provided by the present invention. The device shown in Fig. 11 includes a processor 10, an RGBD camera 11 and a memory 12, where the RGBD camera 11 is connected to the processor 10.
The processor 10 is configured to obtain a 3D facial model; obtain the motion trajectories of the facial feature points from an RGBD image sequence; and map the motion trajectories of the facial feature points onto the 3D facial model, so that the motion trajectories of the facial feature points of the 3D facial model match the obtained motion trajectories of the facial feature points.
In one embodiment, the RGBD camera 11 is configured to obtain RGBD images of a face from multiple angles. Specifically, RGBD cameras can be placed at multiple angular positions around the person's head to shoot. The processor 10 is configured to build the 3D facial model from the RGBD images of the multiple angles.
In another embodiment, the processor 10 is further configured to extract the head information of the 3D facial model, call the data of a 3D model of a complete human head, and combine it with the head information to generate a completed 3D head model from the 3D facial model.
In yet another embodiment, the 3D facial model is the facial model of a 3D character produced by animation design techniques.
Optionally, the processor 10 is further configured to mark facial feature points on the 3D facial model.
Optionally, the RGBD camera 11 is further configured to obtain an RGBD image sequence containing a face, where each RGBD image includes an RGB image and a depth image whose pixels correspond one to one.
The processor 10 is further configured to perform face detection on the RGBD images to extract the RGBD image of the face region; obtain the facial feature points from the RGBD image of the face region; mark the positions of the facial feature points on the RGBD image of the face region; and track the motion trajectories of the facial feature points in the RGBD image sequence and record the trajectory information.
The memory 12 is configured to save the 3D facial model and the motion trajectories of its facial feature points.
The present invention also provides a storage device in which program data is stored, the program data being executable to implement the method for acquiring the facial expression of a 3D facial model of any of the above embodiments.
For example, the storage device may be a portable storage medium such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc. It is to be understood that the storage device may also be a server or any other medium capable of storing program code.
In summary, the expression information obtained by the present invention is more comprehensive and accurate, so that the 3D facial model can reflect the facial expression more realistically.
The foregoing is merely embodiments of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (13)
1. A method for acquiring the facial expression of a 3D facial model, characterized by comprising the following steps:
obtaining a 3D facial model;
obtaining motion trajectories of facial feature points from an RGBD image sequence;
mapping the motion trajectories of the facial feature points onto the 3D facial model, so that the motion trajectories of the facial feature points of the 3D facial model match the obtained motion trajectories of the facial feature points.
2. The method according to claim 1, characterized in that the step of obtaining the 3D facial model comprises:
obtaining RGBD images of a face from multiple angles;
building the 3D facial model from the RGBD images of the multiple angles.
3. The method according to claim 2, characterized in that, after the step of building the 3D facial model from the RGBD images of the multiple angles, the method further comprises:
extracting head information of the 3D facial model;
calling data of a 3D model of a complete human head and combining it with the head information to generate a completed 3D head model from the 3D facial model.
4. The method according to claim 1, characterized in that, in the step of obtaining the 3D facial model, the 3D facial model is the facial model of a 3D character produced by animation design techniques.
5. The method according to any one of claims 2 to 4, characterized in that, after the step of obtaining the 3D facial model, the method further comprises:
marking facial feature points on the 3D facial model.
6. The method according to claim 1, characterized in that the step of obtaining the motion trajectories of the facial feature points comprises:
obtaining an RGBD image sequence containing a face, wherein each RGBD image includes an RGB image and a depth image whose pixels correspond one to one;
performing face detection on the RGBD images to extract the RGBD image of the face region;
obtaining the facial feature points from the RGBD image of the face region;
marking the positions of the facial feature points on the RGBD image of the face region;
tracking the motion trajectories of the facial feature points in the RGBD image sequence and recording the trajectory information.
7. A device for acquiring the facial expression of a 3D facial model, comprising a processor configured to:
Obtain a 3D facial model; obtain the movement trajectory of human face feature points from an RGBD image sequence; and map the movement trajectory of the human face feature points onto the 3D facial model, so that the movement trajectory of the human face feature points of the 3D facial model matches the obtained movement trajectory of the human face feature points.
8. The device according to claim 7, further comprising an RGBD camera connected to the processor;
The RGBD camera is configured to obtain RGBD images of a plurality of angles of a human face;
The processor is configured to construct the 3D facial model using the RGBD images of the plurality of angles.
9. The device according to claim 8, wherein the processor is further configured to extract head information of the 3D facial model, call 3D model data of a complete human head, and generate a completed 3D head model from the 3D facial model with reference to the head information.
10. The device according to claim 7, wherein the 3D facial model is a facial model of a 3D character produced using animation design techniques.
11. The device according to any one of claims 8 to 10, wherein the processor is further configured to mark human face feature points on the 3D facial model.
12. The device according to claim 8, wherein the RGBD camera is further configured to obtain an RGBD image sequence containing a human face, each RGBD image comprising an RGB image and a depth image, wherein the pixels of the RGB image and the depth image correspond one to one;
The processor is further configured to perform face detection on the RGBD images to extract RGBD images of the face region; obtain human face feature points from the RGBD images of the face region; mark the positions of the human face feature points on the RGBD images of the face region; and track the movement trajectory of the human face feature points in the RGBD image sequence and record the movement trajectory information.
13. A storage device, wherein program data are stored thereon, the program data being executable to implement the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710407215.9A CN107330371A (en) | 2017-06-02 | 2017-06-02 | Method, device and storage device for acquiring the facial expression of a 3D facial model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107330371A true CN107330371A (en) | 2017-11-07 |
Family
ID=60193915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710407215.9A Pending CN107330371A (en) | 2017-06-02 | 2017-06-02 | Method, device and storage device for acquiring the facial expression of a 3D facial model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107330371A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150243035A1 (en) * | 2014-02-21 | 2015-08-27 | Metaio Gmbh | Method and device for determining a transformation between an image coordinate system and an object coordinate system associated with an object of interest |
US20160042559A1 (en) * | 2014-08-08 | 2016-02-11 | Nvidia Corporation | Lighting simulation analysis using light path expressions |
CN105847684A (en) * | 2016-03-31 | 2016-08-10 | 深圳奥比中光科技有限公司 | Unmanned aerial vehicle |
CN106407875A (en) * | 2016-03-31 | 2017-02-15 | 深圳奥比中光科技有限公司 | Target feature extraction method and apparatus |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107807537A (en) * | 2017-11-16 | 2018-03-16 | 四川长虹电器股份有限公司 | Intelligent household appliances control system and method based on expression recognition |
CN108648251A (en) * | 2018-05-15 | 2018-10-12 | 深圳奥比中光科技有限公司 | 3D expression production method and system |
CN109035384A (en) * | 2018-06-06 | 2018-12-18 | 广东您好科技有限公司 | Pixel synthesis technology and automatic model vertex processing engine based on 3D scanning |
CN109191548A (en) * | 2018-08-28 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Animation method, device, equipment and storage medium |
CN109272473B (en) * | 2018-10-26 | 2021-01-15 | 维沃移动通信(杭州)有限公司 | Image processing method and mobile terminal |
CN109272473A (en) * | 2018-10-26 | 2019-01-25 | 维沃移动通信(杭州)有限公司 | Image processing method and mobile terminal |
CN109410298A (en) * | 2018-11-02 | 2019-03-01 | 北京恒信彩虹科技有限公司 | Virtual model production method and expression change method |
CN109410298B (en) * | 2018-11-02 | 2023-11-17 | 北京恒信彩虹科技有限公司 | Virtual model manufacturing method and expression changing method |
CN109829965A (en) * | 2019-02-27 | 2019-05-31 | Oppo广东移动通信有限公司 | Action processing method and device for a face model, storage medium and electronic device |
CN109829965B (en) * | 2019-02-27 | 2023-06-27 | Oppo广东移动通信有限公司 | Action processing method and device of face model, storage medium and electronic equipment |
CN110197154A (en) * | 2019-05-30 | 2019-09-03 | 汇纳科技股份有限公司 | Pedestrian re-identification method, system, medium and terminal fusing body-part texture three-dimensional mapping |
CN110348344A (en) * | 2019-06-28 | 2019-10-18 | 浙江大学 | Special facial expression recognition method based on two-dimensional and three-dimensional fusion |
CN110348344B (en) * | 2019-06-28 | 2021-07-27 | 浙江大学 | Special facial expression recognition method based on two-dimensional and three-dimensional fusion |
CN113361297B (en) * | 2020-02-19 | 2022-07-29 | 山东大学 | Micro-expression detection method based on light stream and windmill mode feature fusion |
CN113361297A (en) * | 2020-02-19 | 2021-09-07 | 山东大学 | Micro-expression detection method based on light stream and windmill mode feature fusion |
CN111680577A (en) * | 2020-05-20 | 2020-09-18 | 北京的卢深视科技有限公司 | Face detection method and device |
CN111985425A (en) * | 2020-08-27 | 2020-11-24 | 闽江学院 | Image verification device under multi-person scene |
CN112001334A (en) * | 2020-08-27 | 2020-11-27 | 闽江学院 | Portrait recognition device |
CN112001334B (en) * | 2020-08-27 | 2024-01-19 | 闽江学院 | Portrait recognition device |
CN111985425B (en) * | 2020-08-27 | 2024-01-19 | 闽江学院 | Image verification device under multi-person scene |
CN112581518A (en) * | 2020-12-25 | 2021-03-30 | 百果园技术(新加坡)有限公司 | Eyeball registration method, device, server and medium based on three-dimensional cartoon model |
WO2022143197A1 (en) * | 2020-12-31 | 2022-07-07 | 魔珐(上海)信息科技有限公司 | Method and apparatus for generating virtual object facial animation, storage medium, and terminal |
CN112700523B (en) * | 2020-12-31 | 2022-06-07 | 魔珐(上海)信息科技有限公司 | Virtual object face animation generation method and device, storage medium and terminal |
CN112700523A (en) * | 2020-12-31 | 2021-04-23 | 魔珐(上海)信息科技有限公司 | Virtual object face animation generation method and device, storage medium and terminal |
CN113780251A (en) * | 2021-11-11 | 2021-12-10 | 聊城中超智能设备有限公司 | Positioning method and system of ophthalmologic detection equipment |
CN117671774A (en) * | 2024-01-11 | 2024-03-08 | 好心情健康产业集团有限公司 | Face emotion intelligent recognition analysis equipment |
CN117671774B (en) * | 2024-01-11 | 2024-04-26 | 好心情健康产业集团有限公司 | Face emotion intelligent recognition analysis equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107330371A (en) | Method, device and storage device for acquiring the facial expression of a 3D facial model | |
CN107368778A (en) | Method, device and storage device for capturing human facial expression | |
Tian | Evaluation of face resolution for expression analysis | |
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
CN109472198B (en) | Gesture robust video smiling face recognition method | |
Silva et al. | A flexible approach for automatic license plate recognition in unconstrained scenarios | |
CN104008370B (en) | Video face recognition method | |
CN104268583B (en) | Pedestrian re-recognition method and system based on color area features | |
CN106778468B (en) | 3D face identification method and equipment | |
JP5629803B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
JP4743823B2 (en) | Image processing apparatus, imaging apparatus, and image processing method | |
Ibraheem et al. | Comparative study of skin color based segmentation techniques | |
KR20200063292A (en) | Emotional recognition system and method based on face images | |
CN105893946A (en) | Front face image detection method | |
CN106778474A (en) | 3D human body recognition method and device | |
CN106529494A (en) | Human face recognition method based on multi-camera model | |
CN109086724A (en) | Accelerated face detection method and storage medium | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
CN110543848B (en) | Driver action recognition method and device based on three-dimensional convolutional neural network | |
CN106611158A (en) | Method and equipment for obtaining human body 3D characteristic information | |
Stringa | Eyes detection for face recognition | |
Rao et al. | On merging hidden Markov models with deformable templates | |
CN112766145B (en) | Method and device for identifying dynamic facial expressions of artificial neural network | |
CN106156739A (en) | Certificate photo ear detection and extraction method based on facial contour analysis | |
KR20040042501A (en) | Face detection based on template matching |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171107 |