CN109166176A - Method and device for generating three-dimensional face images - Google Patents

Method and device for generating three-dimensional face images

Info

Publication number
CN109166176A
Authority
CN
China
Prior art keywords
image
feature
grid
images
grid image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810968117.7A
Other languages
Chinese (zh)
Other versions
CN109166176B (en)
Inventor
庞文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810968117.7A priority Critical patent/CN109166176B/en
Publication of CN109166176A publication Critical patent/CN109166176A/en
Application granted granted Critical
Publication of CN109166176B publication Critical patent/CN109166176B/en
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 - Finite element generation, e.g. wire-frame surface description, tesselation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 - Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose a method and a device for generating three-dimensional face images. The method includes: obtaining N two-dimensional face images from different viewing angles, where N is an integer greater than 1; performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face; sampling the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle; decomposing and compressing the sampled N grid images of all the features to determine the surface light field data of the face; and obtaining a three-dimensional face image according to the surface light field data and the N two-dimensional face images. In other words, the embodiments derive the surface light field data of the face from the N two-dimensional face images and use that data to render them into a high-precision three-dimensional face image.

Description

Method and device for generating three-dimensional face images
Technical field
Embodiments of the present invention relate to the field of image processing, and in particular to a method and a device for generating three-dimensional face images.
Background
With the development of image processing technology, three-dimensional face images are used in more and more scenarios. Users now expect three-dimensional face images to be generated from actually captured two-dimensional face images so that faces can be displayed more realistically.
In the prior art, converting a two-dimensional face image into a three-dimensional face image generally follows this process: constraint equations are constructed from the various relationships between the two-dimensional picture and a three-dimensional mesh, the three-dimensional mesh is deformed according to the constraint equations, and the image information of the two-dimensional picture is then mapped onto the deformed three-dimensional mesh to generate an adaptive three-dimensional face image.
However, the three-dimensional face images generated by the prior art are not sharp enough.
Summary of the invention
Embodiments of the present invention provide a method and a device for generating three-dimensional face images.
In a first aspect, an embodiment of the present invention provides a method for generating three-dimensional face images, comprising:
obtaining N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face;
sampling the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle;
decomposing and compressing the sampled N grid images of all the features to determine the surface light field data of the face;
obtaining a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
In a possible implementation of the first aspect, before the N grid images corresponding to each feature are sampled, the method further comprises:
aligning the N grid images corresponding to the same feature.
In another possible implementation of the first aspect, if the same pixel in the N two-dimensional face images is taken as a feature of the face, performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face comprises:
performing grid division on the N two-dimensional images pixel by pixel to obtain the N grid images corresponding to each pixel.
In another possible implementation of the first aspect, if the face includes M preset features, performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face comprises:
performing grid division on the N two-dimensional images according to the M features of the face to obtain the N grid images corresponding to each feature.
In another possible implementation of the first aspect, obtaining a three-dimensional face image according to the surface light field data and the N two-dimensional face images comprises:
obtaining a three-dimensional model of the face according to the N two-dimensional face images;
rendering the three-dimensional model using the surface light field data to obtain the rendered three-dimensional face image.
In another possible implementation of the first aspect, aligning the N grid images corresponding to the same feature comprises:
selecting one grid image from the N grid images corresponding to the same feature as a target grid image, and aligning the N-1 grid images other than the target grid image with the target grid image.
In another possible implementation of the first aspect, selecting one grid image from the N grid images corresponding to the same feature as the target grid image comprises:
taking the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of that feature.
In another possible implementation of the first aspect, selecting one grid image from the N grid images corresponding to the same feature as the target grid image comprises:
determining the sum of the energies of the N grid images corresponding to all the features;
taking, for each feature, the grid image selected when the sum of the energies is minimal as the target grid image of that feature.
In another possible implementation of the first aspect, determining the sum of the energies of the N grid images corresponding to all the features comprises:
determining, according to an energy formula, the sum of energies E(P) of the N grid images corresponding to all the features;
wherein the energy formula uses, for each feature f, the color value of the i-th grid image of feature f, the brightness value and the sample quality corresponding to that color value, the grids f' adjacent to feature f, the color value of the j-th grid image of feature f', and the color difference on the edge shared by feature f and feature f'.
In another possible implementation of the first aspect, aligning the N-1 grid images other than the target grid image with the target grid image comprises:
determining a similarity energy value between each of the N-1 grid images and the target grid image;
when the similarity energy value is maximal, determining that each of the N-1 grid images is aligned with the target grid image.
In another possible implementation of the first aspect, determining the similarity energy value between each of the N-1 grid images and the target grid image comprises:
determining, according to a similarity formula, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image;
wherein D_f is the target grid image of feature f, the color values of the i-th grid image of feature f enter the formula, and t_i is the translation of the i-th grid image of feature f.
In another possible implementation of the first aspect, determining the similarity energy value between each of the N-1 grid images and the target grid image comprises:
determining, according to a similarity formula that includes a preset value t_0, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image;
wherein D_f is the target grid image of feature f, the color values of the i-th grid image of feature f enter the formula, t_i is the translation of the i-th grid image of feature f, and t_0 is the preset value.
In another possible implementation of the first aspect, the grid is a triangular mesh.
In a second aspect, an embodiment of the present application provides a device for generating three-dimensional face images, comprising:
an obtaining module, configured to obtain N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
a division module, configured to perform grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face;
a sampling module, configured to sample the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle;
a determining module, configured to decompose and compress the sampled N grid images of all the features to determine the surface light field data of the face;
a model obtaining module, configured to obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
In a possible implementation of the second aspect, the device further comprises an alignment module, configured to align the N grid images corresponding to the same feature.
In another possible implementation of the second aspect, if the same pixel in the N two-dimensional face images is taken as a feature of the face, the division module is specifically configured to perform grid division on the N two-dimensional images pixel by pixel to obtain the N grid images corresponding to each pixel.
In another possible implementation of the second aspect, if the face includes M preset features, the division module is specifically configured to perform grid division on the N two-dimensional images according to the M features of the face to obtain the N grid images corresponding to each feature.
In another possible implementation of the second aspect, the model obtaining module is specifically configured to obtain a three-dimensional model of the face according to the N two-dimensional face images, and to render the three-dimensional model using the surface light field data to obtain the rendered three-dimensional face image.
In another possible implementation of the second aspect, the alignment module comprises:
an obtaining unit, configured to select one grid image from the N grid images corresponding to the same feature as a target grid image;
an alignment unit, configured to align the N-1 grid images other than the target grid image with the target grid image.
In another possible implementation of the second aspect, the obtaining unit is configured to take the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of that feature.
In another possible implementation of the second aspect, the obtaining unit is specifically configured to determine the sum of the energies of the N grid images corresponding to all the features, and to take, for each feature, the grid image selected when the sum of the energies is minimal as the target grid image of that feature.
In another possible implementation of the second aspect, the obtaining unit is specifically configured to determine, according to the energy formula, the sum of energies E(P) of the N grid images corresponding to all the features, the formula using, for each feature f, the color value of the i-th grid image of feature f, the brightness value and the sample quality corresponding to that color value, and the color difference on the edge shared by feature f and each adjacent grid f'.
In another possible implementation of the second aspect, the alignment unit is specifically configured to determine the similarity energy value between each of the N-1 grid images and the target grid image, and, when the similarity energy value is maximal, to determine that each of the N-1 grid images is aligned with the target grid image.
In another possible implementation of the second aspect, the alignment unit is specifically configured to determine, according to the similarity formula, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image, where D_f is the target grid image of feature f and t_i is the translation of the i-th grid image of feature f.
In another possible implementation of the second aspect, the alignment unit is further configured to determine, according to the similarity formula that includes the preset value t_0, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image, where D_f is the target grid image of feature f, t_i is the translation of the i-th grid image of feature f, and t_0 is the preset value.
In another possible implementation of the second aspect, the grid is a triangular mesh.
In a third aspect, an embodiment of the present application provides a device for generating three-dimensional face images, comprising:
a memory, configured to store a computer program; and
a processor, configured to execute the computer program to implement the method for generating three-dimensional face images according to any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a device for generating three-dimensional face images, comprising a camera and a processor that are communicatively connected, wherein
the camera is configured to obtain N two-dimensional face images from different viewing angles, where N is an integer greater than 1; and
the processor is configured to perform grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face; to sample the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle; to decompose and compress the sampled N grid images of all the features to determine the surface light field data of the face; and to obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
In a possible implementation of the fourth aspect, the processor is further configured to align the N grid images corresponding to the same feature.
In another possible implementation of the fourth aspect, if the same pixel in the N two-dimensional face images is taken as a feature of the face, the processor is configured to perform grid division on the N two-dimensional images pixel by pixel to obtain the N grid images corresponding to each pixel.
In another possible implementation of the fourth aspect, if the face includes M preset features, the processor is configured to perform grid division on the N two-dimensional images according to the M features of the face to obtain the N grid images corresponding to each feature.
In another possible implementation of the fourth aspect, the processor is specifically configured to obtain a three-dimensional model of the face according to the N two-dimensional face images, and to render the three-dimensional model using the surface light field data to obtain the rendered three-dimensional face image.
In another possible implementation of the fourth aspect, the processor is specifically configured to select one grid image from the N grid images corresponding to the same feature as a target grid image, and to align the N-1 grid images other than the target grid image with the target grid image.
In another possible implementation of the fourth aspect, the processor is specifically configured to take the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of that feature.
In another possible implementation of the fourth aspect, the processor is specifically configured to determine the sum of the energies of the N grid images corresponding to all the features, and to take, for each feature, the grid image selected when the sum of the energies is minimal as the target grid image of that feature.
In another possible implementation of the fourth aspect, the processor is specifically configured to determine, according to the energy formula, the sum of energies E(P) of the N grid images corresponding to all the features, the formula using, for each feature f, the color value of the i-th grid image of feature f, the brightness value and the sample quality corresponding to that color value, and the color difference on the edge shared by feature f and each adjacent grid f'.
In another possible implementation of the fourth aspect, the processor is specifically configured to determine the similarity energy value between each of the N-1 grid images and the target grid image, and, when the similarity energy value is maximal, to determine that each of the N-1 grid images is aligned with the target grid image.
In another possible implementation of the fourth aspect, the processor is specifically configured to determine, according to the similarity formula, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image, where D_f is the target grid image of feature f and t_i is the translation of the i-th grid image of feature f.
In another possible implementation of the fourth aspect, the processor is specifically configured to determine, according to the similarity formula that includes the preset value t_0, the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image, where D_f is the target grid image of feature f, t_i is the translation of the i-th grid image of feature f, and t_0 is the preset value.
In another possible implementation of the fourth aspect, the grid is a triangular mesh.
In a fifth aspect, an embodiment of the present application provides a computer storage medium storing a computer program which, when executed, implements the method for generating three-dimensional face images according to any implementation of the first aspect.
With the method and device for generating three-dimensional face images provided by the embodiments of the present invention, N two-dimensional face images from different viewing angles are obtained, where N is an integer greater than 1; grid division is performed on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face; the N grid images corresponding to each feature are sampled so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle; the sampled N grid images of all the features are decomposed and compressed to determine the surface light field data of the face; and a three-dimensional face image is obtained according to the surface light field data and the N two-dimensional face images. In other words, this embodiment derives the surface light field data of the face from the N two-dimensional face images and uses that data to render them into a high-precision three-dimensional face image.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for generating three-dimensional face images provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of part of the surface light field data involved in the embodiments of the present invention;
Fig. 3 is a schematic diagram of selecting a target image from unaligned images, as involved in the embodiments of the present invention;
Fig. 4 is a schematic diagram of the images after alignment, as involved in the embodiments of the present invention;
Fig. 5 is a schematic diagram of the surface light field data obtained by optimizing the surface light field data shown in Fig. 2;
Fig. 6 is an example flowchart of determining the target grid image in this embodiment;
Fig. 7 is an example flowchart of image alignment in this embodiment;
Fig. 8 is a structural diagram of the device for generating three-dimensional face images provided by Embodiment 1 of the present invention;
Fig. 9 is a structural diagram of the device for generating three-dimensional face images provided by Embodiment 2 of the present invention;
Fig. 10 is a structural diagram of the device for generating three-dimensional face images provided by Embodiment 3 of the present invention;
Fig. 11 is a structural diagram of the device for generating three-dimensional face images provided by an embodiment of the present invention;
Fig. 12 is a structural diagram of the device for generating three-dimensional face images provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The surface light field is a branch of the light field. By using the geometric information of the target object, it pushes traditional light field rendering further and can record the light field without distortion over a larger viewing range (360°). Surface light fields are therefore used to reproduce objects with relatively complex appearance and geometry.
The method for generating three-dimensional face images proposed in this application processes N two-dimensional face images to obtain the surface light field data of the face, and generates a three-dimensional face model that approximates the real environment from the surface light field data and the N two-dimensional face images, thereby improving the accuracy of the generated three-dimensional face model.
The technical solution of the present invention is described in detail below through specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flowchart of the method for generating three-dimensional face images provided by an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment may include:
S101: obtain N two-dimensional face images from different viewing angles, where N is an integer greater than 1.
The executing body of this embodiment may be a device for generating three-dimensional face images that has the function of generating three-dimensional face images, referred to below as the generating device. The generating device of this embodiment may be part of an electronic device, for example the processor of the electronic device.
Optionally, the generating device of this embodiment may also be a stand-alone electronic device.
The electronic device of this embodiment may be a smartphone, a desktop computer, a laptop, a smart bracelet, an AR device, a VR device or another electronic device.
This embodiment is described by taking an electronic device as the executing body.
Optionally, the N two-dimensional face images from different viewing angles in this embodiment may be obtained from another device, for example read from a camera or a server.
Optionally, the electronic device of this embodiment may also have a shooting function, that is, it has a camera, and the camera on the electronic device may be used to capture the N two-dimensional face images of the face from different viewing angles.
S102: perform grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face.
Specifically, after the N two-dimensional face images are obtained in the above step, grid division is performed on them to generate the N grid images corresponding to each feature of the face, for example the N grid images of the nose.
The division into features can be determined as needed; for example, the face may be divided into 8 features: left eye, right eye, nose, mouth, left cheekbone, right cheekbone, left cheek and right cheek.
Optionally, each pixel of the two-dimensional face images may also be taken as a feature of the face.
Optionally, grids of any shape may be used in this embodiment to divide the two-dimensional face images, for example variable-shape grids such as quadrilaterals, pentagons and hexagons.
In one example, to reduce the difficulty of the division, triangular grids may be used in this embodiment, so that each two-dimensional face image is divided into multiple triangular grid images.
The sizes of the grids may be the same or different, and grids of the same shape or of different shapes may be used to divide the two-dimensional images of the same face; this embodiment imposes no limitation, and the choice is determined by actual needs.
Optionally, the grids used for the same feature of the face have the same size and shape.
Grid division is performed on the N two-dimensional face images according to the above method to obtain the N grid images corresponding to each feature of the face.
For example, if the face includes the 8 features of left eye, right eye, nose, mouth, left cheekbone, right cheekbone, left cheek and right cheek, dividing the N two-dimensional face images according to these 8 features yields the N grid images corresponding to each of the 8 features.
In practical applications, suppose a surface mesh M is used to perform grid division on the two-dimensional face images, so that each two-dimensional face image is divided into M grid images; the grid image set of each two-dimensional face image is then F = {f_i, ..., f_M}.
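For illustration only, the following Python sketch shows one way such a per-feature grid division could be organized. The landmark indices, the feature grouping and the helper names are assumptions for the example (a dlib-style 68-landmark layout is assumed), and a bounding-box crop stands in for the triangulated feature region, which is omitted for brevity; none of this is prescribed by the application.

```python
import numpy as np

# Assumed feature groups: each feature owns a set of 2D landmark indices.
FEATURE_LANDMARKS = {"left_eye": [36, 37, 38, 39, 40, 41],
                     "nose": [27, 28, 29, 30, 31, 32, 33, 34, 35],
                     "mouth": list(range(48, 60))}

def divide_into_grid_images(images, landmarks_per_image):
    """images: list of N face images (H x W x 3); landmarks_per_image: N x K x 2 array (x, y).
    Returns {feature: [grid image for view 0, ..., grid image for view N-1]}."""
    grid_images = {f: [] for f in FEATURE_LANDMARKS}
    for img, lms in zip(images, landmarks_per_image):
        for feature, idxs in FEATURE_LANDMARKS.items():
            pts = lms[idxs]                        # landmarks of this feature in this view
            x0, y0 = pts.min(axis=0).astype(int)   # bounding box of the feature region
            x1, y1 = pts.max(axis=0).astype(int)
            grid_images[feature].append(img[y0:y1 + 1, x0:x1 + 1])
    return grid_images
```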
When obtaining the N two-dimensional face images, this embodiment may also extract the extrinsic parameters of the cameras that captured them.
S103: sample the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle.
Since the numbers of pixels (i.e. the sizes) of the N grid images corresponding to the same feature may differ, and the camera angles at which the two-dimensional face images were captured are not equally spaced either, the grid images corresponding to the same feature need to be resampled in order to improve the accuracy of the image processing, specifically so that the N grid images corresponding to the same feature have the same number of pixels and are equally spaced in viewing angle.
In this way, the grid images corresponding to the same feature all have the same number of pixels and are equally spaced in viewing angle, which improves the accuracy of the subsequent decomposition and compression and therefore makes the surface light field data finally produced more accurate.
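A minimal sketch of such a resampling step is given below, assuming the viewing angle of each source grid image is known. The target resolution, the interpolation routine and the nearest-angle selection are illustrative assumptions, not the specific resampling scheme of the application.

```python
import numpy as np
import cv2  # OpenCV, used here only for resizing

def resample_feature_views(grid_images, view_angles_deg, out_size=(64, 64)):
    """grid_images: the N grid images of one feature; view_angles_deg: their viewing angles.
    Returns N grid images with identical pixel counts, reordered onto equally spaced angles."""
    # 1) Same number of pixels: resize every grid image to a common resolution.
    resized = [cv2.resize(g, out_size, interpolation=cv2.INTER_LINEAR) for g in grid_images]

    # 2) Equal angular spacing: for each target angle, pick the nearest captured view.
    order = np.argsort(view_angles_deg)
    angles = np.asarray(view_angles_deg, dtype=float)[order]
    resized = [resized[i] for i in order]
    target_angles = np.linspace(angles[0], angles[-1], num=len(resized))
    picked = [resized[int(np.argmin(np.abs(angles - a)))] for a in target_angles]
    return picked, target_angles
```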
S104: decompose and compress the sampled N grid images of all the features to determine the surface light field data of the face.
S105: obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
Specifically, the grid images corresponding to the same feature are sampled according to the above step so that they have the same number of pixels and are equally spaced in viewing angle. The sampled grid images of all the features are then decomposed, the decomposed grid images are compressed, and high-precision surface light field data of the face are generated. Finally, a three-dimensional face image is obtained according to the surface light field data and the N two-dimensional face images.
In one example, obtaining the three-dimensional face image according to the surface light field data and the N two-dimensional face images may include: obtaining, according to the surface light field data, the grayscale differences of each pixel of the two-dimensional face images, and generating the corresponding depth information from those grayscale differences, so as to estimate the three-dimensional face image.
In another example, obtaining the three-dimensional face image according to the surface light field data and the N two-dimensional face images may include: obtaining a three-dimensional model of the face according to the N two-dimensional face images, and rendering the three-dimensional model using the surface light field data to obtain the rendered three-dimensional face image. The three-dimensional model of the face may be obtained from the N two-dimensional face images using the prior art.
In VR (Virtual Reality) or AR (Augmented Reality) technology, the surface light field data determined above can be used to render an object, so that the lighting effect the object has in the real environment is reproduced in the virtual environment, thereby improving the user experience.
This embodiment does not restrict the way the grid images are decomposed and compressed; any existing method may be used.
In one example, the surface light field can be expressed as a four-dimensional function L(u, v, s, t), where (u, v) is the position on the surface and (s, t) is the view direction.
The light field function can be further decomposed into the sum of a small number of products of lower-dimensional functions:
L(u, v, s, t) ≈ Σ S(u, v) V(s, t)    (1)
where S(u, v) is a surface function and V(s, t) is a view function.
This decomposition attempts to separate the variation of the surface texture from the variation of the illumination. These functions can be constructed using PCA (Principal Component Analysis) or nonlinear optimization, and the function parameters can be stored in texture maps for real-time rendering.
To make the surface light field easy to implement in a rendering pipeline, L(u, v, s, t) is made to span small surface elements, and an approximation is built independently for each part. Specifically, a group of vertex light fields L_x(u, v, s, t) is constructed by sampling, for each vertex x, a ring-shaped grid region around it.
In an implementation, the vertex light field function L_x(u, v, s, t) can be expressed as a matrix L_x[u, v, s, t] ∈ R^(m×n), discretized over the surface patch and the views. The n columns of the matrix correspond to camera views and the m rows to surface locations. Storing the full set of matrices L_x is impractical, so the light field data need to be decomposed and compressed. In addition, according to the dichromatic reflection theory, the diffuse component D_x[u, v] also needs to be separated from the matrix L_x, leaving the residual component G_x[u, v, s, t]. The residual is compressed as follows:
L_x[u, v, s, t] = D_x[u, v] + G_x[u, v, s, t] = D_x[u, v] + Σ S_x[u, v] V_x[s, t]    (2)
where S_x[u, v] is the surface map matrix of vertex x and V_x[s, t] is the view map matrix of vertex x; they can be obtained by discretizing the surface and view functions of formula (1). (u, v) are the spatial coordinates of vertex x within the patch formed by its one-ring triangle neighbours, and [s, t] are the view coordinates on the hemisphere.
In one example, SVD (Singular Value Decomposition), G_x = S_x · V_x, is used to decompose the resampled residual color G_x (the residual component of L_x above) into k surface maps and view maps, where S_x is the m×k matrix of left singular vectors multiplied by the diagonal matrix of singular values sorted in decreasing order, and V_x is the k×n matrix of right singular vectors, with k < n.
It follows that this embodiment does not perform a full SVD; instead, the first k terms are computed iteratively using the power iteration method.
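The following Python sketch illustrates formula (2) under simplifying assumptions: the diffuse component is approximated by the per-row mean over views, and the rank-k factors of the residual are obtained with a truncated SVD, which stands in for the power-iteration variant described above. The matrix shapes and helper names are assumptions for the example.

```python
import numpy as np

def decompose_vertex_light_field(L_x, k=3):
    """L_x: m x n matrix (m surface samples, n camera views) for one vertex.
    Returns the diffuse component D_x and the rank-k factors S_x (m x k), V_x (k x n)."""
    # Diffuse part: view-independent term, approximated here by the mean over views.
    D_x = L_x.mean(axis=1, keepdims=True)          # m x 1
    G_x = L_x - D_x                                # residual, view-dependent part

    # Rank-k factorization of the residual, G_x ≈ S_x @ V_x, as in formula (2).
    U, s, Vt = np.linalg.svd(G_x, full_matrices=False)
    S_x = U[:, :k] * s[:k]                         # left singular vectors scaled by singular values
    V_x = Vt[:k, :]                                # right singular vectors
    return D_x, S_x, V_x

def reconstruct_for_view(D_x, S_x, V_x, view_index):
    """Colors of the m surface samples of one vertex as seen from one camera view."""
    return D_x[:, 0] + S_x @ V_x[:, view_index]
```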
With the method for generating three-dimensional face images provided by this embodiment of the present invention, N two-dimensional face images from different viewing angles are obtained, where N is an integer greater than 1; grid division is performed on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face; the N grid images corresponding to each feature are sampled so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle; the sampled N grid images of all the features are decomposed and compressed to determine the surface light field data of the face; and a three-dimensional face image is obtained according to the surface light field data and the N two-dimensional face images. In other words, this embodiment derives the surface light field data of the face from the N two-dimensional face images and uses that data to render them into a high-precision three-dimensional face image.
In some implementations of this embodiment, if the same pixel in the N two-dimensional face images is taken as a feature of the face, performing grid division on the N two-dimensional face images in S102 to obtain the N grid images corresponding to each feature of the face may include: performing grid division on the N two-dimensional images pixel by pixel to obtain the N grid images corresponding to each pixel.
Specifically, suppose each of the N two-dimensional face images contains n pixels. Each two-dimensional face image can then be divided into n grids, each grid corresponding to one grid image.
For a pixel a, the grid image corresponding to pixel a is obtained from each of the N two-dimensional face images, giving the N grid images of pixel a. Following this method, the N grid images corresponding to each of the n pixels can be obtained.
By taking a pixel of the two-dimensional face image as a feature of the face, this embodiment refines the features so that the refined features describe the face in more detail.
In other implementations of this embodiment, if the face includes M preset features, performing grid division on the N two-dimensional face images in S102 to obtain the N grid images corresponding to each feature of the face may include: performing grid division on the N two-dimensional images according to the M features of the face to obtain the N grid images corresponding to each feature.
Specifically, the M features of the face may be preset in this embodiment, for example divided by the user according to actual needs, or divided automatically by the computer, for example with the user inputting the number M for each two-dimensional face image and the computer automatically dividing the two-dimensional face images according to existing rules.
For example, if M is 8, the 8 features of the face may include: left eye, right eye, nose, mouth, left cheekbone, right cheekbone, left cheek and right cheek.
In a possible implementation of this embodiment, before S103, the method of this embodiment further includes:
S100: align the N grid images corresponding to the same feature.
As shown in Fig. 2, when the geometry is clearly inaccurate, traditional direct compression may cause artifacts.
To solve this technical problem, this embodiment optimizes the grids before the light field decomposition so as to eliminate the negative effect of inaccurate geometry.
Specifically, after the N grid images of each feature are obtained in the above steps, the N grid images corresponding to each feature are aligned; for example, the 3 grid images of feature A are aligned.
Aligning the sampled grid images before the surface light field is compressed eliminates the influence of inaccurate geometry, thereby reducing the dependence of surface light field sampling and processing on a precise geometric model.
Optionally, aligning the N grid images corresponding to the same feature in S100 may include:
S1001: select one grid image from the N grid images corresponding to the same feature as a target grid image;
S1002: align the N-1 grid images other than the target grid image with the target grid image.
Specifically, taking feature A as an example, during alignment a target grid image of feature A is first selected from the N grid images corresponding to feature A; for example, as shown in Fig. 3, the grid image D_k is the target grid image of feature A.
Then the remaining grid images of feature A, other than the target grid image, are aligned with the target grid image. For example, a 2D spatial translation t_i is introduced, the translation t_i between each remaining grid image of feature A and the target grid image is computed, and the remaining grid images are aligned with the target grid image; as shown in Fig. 4, the remaining grid images of feature A are aligned with the target grid image D_k.
In this way, the set formed by the aligned grid images of feature A serves as the grid image set of feature A.
Following the above steps, the grid image set of each feature can be obtained.
In one example, selecting one grid image from the N grid images corresponding to the same feature as the target grid image in S1001 may be:
for each feature, taking the grid image with the largest diffuse color value among the grid images corresponding to that feature as the target grid image of that feature.
For example, for feature A, the diffuse color value of each grid image is determined, and the grid image with the largest diffuse color value is taken as the target grid image of feature A.
In this way, the target grid image corresponding to each feature can be obtained.
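A minimal sketch of this selection rule follows, assuming the diffuse color value of a grid image is summarized by the mean of its pixel values; that proxy and the helper names are assumptions for the example.

```python
import numpy as np

def select_target_by_diffuse(grid_images):
    """grid_images: the N grid images of one feature (H x W x 3 arrays).
    Returns the index of the grid image with the largest diffuse color value."""
    # Assumed proxy for the diffuse color value: mean intensity of the grid image.
    diffuse_values = [float(np.mean(g)) for g in grid_images]
    return int(np.argmax(diffuse_values))
```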
Fig. 5 shows the result of compressing the grid images of Fig. 2 after alignment. As shown in Fig. 5, compressing after alignment effectively reduces ghosting and improves the accuracy of the surface light field data.
In another example, as shown in Fig. 6, selecting one grid image from the N grid images corresponding to the same feature as the target grid image in S1001 may specifically include:
S201: determine the sum of the energies of the N grid images corresponding to all the features.
In practical applications, factors such as large brightness changes may make it impossible to find the target grid image with existing methods. This embodiment solves the problem by finding the optimal grid images among all the grid images. Specifically, the color values of all the grid images corresponding to all the features of the face are used in an energy function to determine the sum of the energies of all the grid images corresponding to all the features.
In one example, the sum of the brightness values of all the grid images corresponding to all the features is computed and taken as the sum of the energies.
In another example, the sum of the sample qualities of all the grid images corresponding to all the features is computed and taken as the sum of the energies.
In another example, both the sum of the brightness values and the sum of the sample qualities of all the grid images corresponding to all the features are computed, and together they are taken as the sum of the energies of the grid images.
In a possible implementation of this embodiment, the sum of energies E(P) of the N grid images corresponding to all the features may also be determined according to formula (3);
where the formula uses, for each feature f, the color value of the i-th grid image of feature f, the brightness value and the sample quality corresponding to that color value, the grids f' adjacent to feature f, the color value of the j-th grid image of feature f', and the color difference on the edge shared by feature f and feature f'.
The above quantities can be determined in existing ways, which are not repeated here in this embodiment.
Optionally, the brightness term is defined as follows. It is assumed that the diffuse color should be captured under favourable lighting without specular highlights. To this end, the luminance mean and variance of each grid are computed, and the 5% of samples with the lowest average brightness are discarded, since those samples may have been captured without sufficient light.
The most probable luminance mean and variance are then extracted from the remaining samples and used to define the brightness energy E_l.
Since specular highlights make a sample's luminance differ noticeably from this mean, E_l favours grids that are illuminated without highlights.
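Since the exact form of E_l is not reproduced here, the sketch below only illustrates the described procedure under stated assumptions: the darkest 5% of samples are dropped, a robust luminance mean and variance are estimated, and the deviation from that mean is used as the per-view cost. The functional form and names are assumptions for the example.

```python
import numpy as np

def brightness_energy(sample_luminances):
    """sample_luminances: per-view mean luminance of one grid's samples.
    Returns a per-view cost that is low for views lit without specular highlights."""
    lum = np.sort(np.asarray(sample_luminances, dtype=float))
    kept = lum[int(np.ceil(0.05 * len(lum))):]      # drop the darkest 5% of samples
    mu, var = kept.mean(), kept.var() + 1e-6        # robust luminance mean and variance
    # Penalize views whose luminance deviates strongly from the most probable mean.
    return (np.asarray(sample_luminances, dtype=float) - mu) ** 2 / var
```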
Optionally, the quality term E_q indicates the original projection size. This embodiment may choose the projection size as the quality term because it jointly reflects the distance to the camera position and the angular distance between the camera view and the triangle normal.
Optionally, in this embodiment, the seam term E_s is defined on the edge P shared by feature f and feature f', using the RGB values of the points p on that edge. In short, E_s computes the color difference on the edge shared by two adjacent triangles.
S202: take, for each feature, the grid image selected when the sum of the energies is minimal as the target grid image of that feature.
Specifically, the sum of the energies of all the grid images corresponding to all the features can be determined in the above way. Then, by minimizing the sum of the energies, a group of grid images is determined, and each grid image in this group is taken as the target grid image of the corresponding feature.
For example, formula (3) is minimized, and the grid image of each feature that corresponds to the minimum is taken as the target grid image of that feature.
When determining the target grid image, the method of this embodiment takes into account the influence of the color difference between adjacent patches of the grid, so that the determined target grid images better match actual needs.
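Because formula (3) is not reproduced here, the sketch below writes the energy generically as a per-image data term (built from brightness and sample quality, in the spirit of E_l and E_q above) plus a pairwise seam term (the color difference on shared edges, in the spirit of E_s), and minimizes it with simple greedy coordinate descent rather than a dedicated solver. All function names, weights and the optimization scheme are assumptions for the example.

```python
import numpy as np

def select_target_images(features, data_cost, seam_cost, adjacency, n_views, iters=5):
    """features: feature ids; data_cost(f, i): cost of picking view i for feature f;
    seam_cost(f, i, g, j): color difference on the edge shared by features f and g when
    they pick views i and j; adjacency: {f: [neighbouring features]}.
    Returns {feature: chosen view index} approximately minimizing the total energy E(P)."""
    choice = {f: 0 for f in features}
    for _ in range(iters):                          # greedy coordinate descent
        for f in features:
            costs = []
            for i in range(n_views):
                e = data_cost(f, i)
                e += sum(seam_cost(f, i, g, choice[g]) for g in adjacency.get(f, []))
                costs.append(e)
            choice[f] = int(np.argmin(costs))
    return choice
```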
In some embodiments, as shown in Fig. 7, aligning the N-1 grid images other than the target grid image with the target grid image in S1002 may specifically include:
S301: determine the similarity energy value between each of the remaining grid images of the grid and the target grid image.
Specifically, when the remaining N-1 grid images of the same feature are aligned with the target grid image of that feature in this embodiment, the similarity energy value between each remaining grid image of the feature and the target grid image can be determined, and maximizing that similarity energy value guarantees that the remaining N-1 grid images are aligned with the target grid image.
This embodiment does not restrict the specific way in which the similarity energy value between the remaining N-1 grid images of a feature and the target grid image is determined.
In one example, S301 may determine, according to formula (4), the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image,
where D_f is the target grid image of feature f, the color values of the i-th grid image of feature f enter the formula, and t_i is the translation of the i-th grid image of feature f.
This amounts to resampling the model in the original grid image space with a 2D shift t = (t_x, t_y). Since the similarity comparison has to work in the presence of specular information, mutual information (MI) can be used as the similarity measure in this embodiment. According to formula (4), a suitable D_f can be computed by alternating search over t.
In another example, S301 may determine, according to formula (5), the similarity energy value E_f(D_f, t) between each of the N-1 grid images and the target grid image,
where D_f is the target grid image of feature f, the color values of the i-th grid image of feature f enter the formula, t_i is the translation of the i-th grid image of feature f, and t_0 is a preset value.
t_0 is used to avoid zero offsets and to adjust the weight of t_i; the problem is solved by greedily searching for t_i within the distance limit t_max.
Optionally, t_0 may be (15, 15) and t_max may be set to 3 pixels.
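As an illustration, the sketch below performs the greedy translation search described above, scoring each candidate shift with a simple histogram-based mutual information estimate between the shifted grid image and the target grid image. The MI estimator, the circular shift used for warping and the helper names are assumptions for the example.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two equally sized grid images."""
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def align_to_target(grid_image, target, t_max=3):
    """Greedy search for the 2D shift (within t_max pixels) that maximizes MI with the target."""
    best_t, best_mi = (0, 0), -np.inf
    for ty in range(-t_max, t_max + 1):
        for tx in range(-t_max, t_max + 1):
            shifted = np.roll(np.roll(grid_image, ty, axis=0), tx, axis=1)
            mi = mutual_information(shifted, target)
            if mi > best_mi:
                best_mi, best_t = mi, (tx, ty)
    return best_t, best_mi
```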
S302: when the similarity energy value is maximal, determine that each of the N-1 grid images is aligned with the target grid image.
Following the above steps, the similarity energy value E_f(D_f, t) between each of the remaining N-1 grid images of the same feature and the target grid image is determined. When the similarity energy value is maximal, it can be determined that the remaining N-1 grid images of the feature are aligned with the target grid image.
By determining the similarity energy value between the remaining N-1 grid images of a feature and the target grid image in this way, this embodiment aligns the remaining grid images of the grid with the target grid image, thereby improving the reliability and efficiency of the alignment.
Fig. 8 is a structural diagram of the device for generating three-dimensional face images provided by Embodiment 1 of the present invention. Based on the above embodiments, as shown in Fig. 8, the device 100 for generating three-dimensional face images of this embodiment may include:
an obtaining module 110, configured to obtain N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
a division module 120, configured to perform grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature of the face;
a sampling module 130, configured to sample the N grid images corresponding to each feature so that the grid images of the same feature have the same number of pixels and are equally spaced in viewing angle;
a determining module 140, configured to decompose and compress the sampled N grid images of all the features to determine the surface light field data of the face;
a model obtaining module 150, configured to obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
The device for generating three-dimensional face images of this embodiment of the present invention can be used to carry out the technical solution of the method embodiment shown above; its implementation principle and technical effect are similar and are not repeated here.
Fig. 9 is a structural diagram of the device for generating three-dimensional face images provided by Embodiment 2 of the present invention. Based on the above embodiments, as shown in Fig. 9, the device 100 for generating three-dimensional face images of this embodiment may further include:
an alignment module 160, configured to align the N grid images corresponding to the same feature.
In a possible implementation of this embodiment, if the same pixel in the N two-dimensional face images is taken as a feature of the face, the division module 120 is specifically configured to perform grid division on the N two-dimensional images pixel by pixel to obtain the N grid images corresponding to each pixel.
In another possible implementation of this embodiment, if the face includes M preset features, the division module 120 is specifically configured to perform grid division on the N two-dimensional images according to the M features of the face to obtain the N grid images corresponding to each feature.
In another possible implementation of this embodiment, the model obtaining module 150 is specifically configured to obtain a three-dimensional model of the face according to the N two-dimensional face images, and to render the three-dimensional model using the surface light field data to obtain the rendered three-dimensional face image.
The device for generating three-dimensional face images of this embodiment of the present invention can be used to carry out the technical solution of the method embodiment shown above; its implementation principle and technical effect are similar and are not repeated here.
Fig. 10 is a structural schematic diagram of the device for generating three-dimensional face images provided by Embodiment 3 of the present invention. On the basis of the above embodiments, as shown in Fig. 10, the alignment module 160 of this embodiment may include:
an acquiring unit 161, configured to obtain one grid image from the N grid images corresponding to the same feature as a target grid image;
an alignment unit 162, configured to align the N-1 grid images other than the target grid image in the N grid images with the target grid image.
In a possible implementation of this embodiment, the acquiring unit 161 is configured to use the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of the same feature.
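As a minimal illustrative sketch of this first selection strategy, the grid image with the largest diffuse color value can be picked as follows; treating the mean pixel value of each grid image as a stand-in for its diffuse color value is an assumption made for this sketch, not a definition from this disclosure.

import numpy as np

def pick_target_by_diffuse(grid_images):
    """grid_images: the N grid images (arrays) of one feature; returns the index of the target."""
    return int(np.argmax([np.mean(g) for g in grid_images]))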
In another possible implementation of this embodiment, the acquiring unit 161 is specifically configured to determine the sum of the energies of the N grid images corresponding to all of the same features, and to use the grid image corresponding to each feature when the sum of the energies is minimum as the target grid image of that feature.
In another possible implementation of this embodiment, the acquiring unit 161 is specifically configured to:
determine the sum E(P) of the energies of the N grid images corresponding to all of the same features according to an energy formula defined over the following quantities:
the color value of the i-th grid image of the feature f; the brightness value corresponding to that color value; the sample quality corresponding to that color value; a feature f' adjacent to the feature f; the color value of the j-th grid image of the feature f'; and the color difference on the edge shared by the feature f and the feature f'.
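The exact energy formula is not reproduced in this text, so the following sketch only illustrates the general idea of choosing, per feature, the view whose grid image is well sampled while keeping the color difference across edges shared by adjacent features small; the particular data term, smoothness term, and coordinate-descent minimization below are assumptions, not the formula of this disclosure.

def energy(choice, mean_color, quality, adjacency):
    """choice[f] = index of the grid image (view) currently picked for feature f."""
    e = 0.0
    for f, i in enumerate(choice):
        e += 1.0 - quality[f][i]                       # prefer views with high sample quality
    for f, f_adj in adjacency:                         # color difference on shared edges
        e += abs(mean_color[f][choice[f]] - mean_color[f_adj][choice[f_adj]])
    return e

def pick_targets(mean_color, quality, adjacency, n_views, sweeps=5):
    """Greedy coordinate descent over per-feature view choices to locally minimize the energy sum."""
    choice = [0] * len(mean_color)
    for _ in range(sweeps):
        for f in range(len(choice)):
            choice[f] = min(range(n_views),
                            key=lambda i: energy(choice[:f] + [i] + choice[f + 1:],
                                                 mean_color, quality, adjacency))
    return choice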
In another possible implementation of this embodiment, the alignment unit 162 is specifically configured to:
determine a similarity energy value between each grid image in the N-1 grid images and the target grid image; and
when the similarity energy value is maximum, determine that each grid image in the N-1 grid images is aligned with the target grid image.
In another possible implementation of this embodiment, the alignment unit 162 is specifically configured to:
determine the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; and t_i, the translation of the i-th grid image of the feature f.
In another possible implementation of this embodiment, the alignment unit 162 is further specifically configured to:
determine the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; t_i, the translation of the i-th grid image of the feature f; and t_0, a preset value.
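Since the similarity formulas themselves are not reproduced in this text, the sketch below only illustrates the general mechanism of searching, per grid image, for the in-plane translation that maximizes a similarity score against the target grid image, optionally penalized by the distance to a preset offset t0; the negative sum of squared color differences used as the score is an assumption made for this sketch.

import numpy as np

def similarity(target, image, t, t0=None, reg=0.0):
    """Similarity score of `image` shifted by translation t = (dy, dx) against `target`."""
    shifted = np.roll(image, shift=t, axis=(0, 1))
    score = -np.sum((target.astype(float) - shifted.astype(float)) ** 2)
    if t0 is not None:                                  # keep the translation near the preset value
        score -= reg * ((t[0] - t0[0]) ** 2 + (t[1] - t0[1]) ** 2)
    return score

def align_to_target(target, image, search=4, t0=None, reg=0.0):
    """Try small integer translations and keep the one with the highest similarity score."""
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                           for dx in range(-search, search + 1)]
    best = max(candidates, key=lambda t: similarity(target, image, t, t0, reg))
    return np.roll(image, shift=best, axis=(0, 1)), best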
Optionally, the grid is a triangular grid.
The device for generating three-dimensional face images of this embodiment of the present invention can be used to execute the technical solutions of the method embodiments shown above; the implementation principles and technical effects are similar and are not repeated here.
Fig. 11 is a structural schematic diagram of a device for generating three-dimensional face images provided by an embodiment of the present invention. As shown in Fig. 11, the three-dimensional face image generating device 200 of this embodiment includes:
a memory 220, configured to store a computer program;
a processor 230, configured to execute the computer program to implement the above method for generating three-dimensional face images; the implementation principles and technical effects are similar and are not repeated here.
Fig. 12 is a structural schematic diagram of a device for generating three-dimensional face images provided by an embodiment of the present invention. As shown in Fig. 12, the three-dimensional face image generating device 300 includes a camera 310 and a processor 320 in communication connection, where:
the camera 310 is configured to acquire N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
the processor 320 is configured to perform grid division on the N two-dimensional face images to obtain N grid images corresponding to each feature on the face; sample the N grid images corresponding to each feature, so that the grid images in the N grid images corresponding to each feature have the same number of pixels and equal viewing-angle intervals; decompose and compress the sampled N grid images corresponding to all features, to determine surface light field data of the face; and obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
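A concrete realization of the "decompose and compress" step is not specified beyond the above; as an illustrative sketch, the N resampled grid images of one feature can be stacked into a matrix and reduced with a truncated singular value decomposition, keeping a few per-view weights and a spatial basis as the compressed surface light field data. Using an SVD here is an assumption made for this sketch, not a requirement of this disclosure.

import numpy as np

def compress_feature(grid_images, rank=4):
    """grid_images: N arrays of identical shape (same pixel count, one per sampled view)."""
    views = np.stack([g.reshape(-1) for g in grid_images])   # N x P matrix, one row per view
    u, s, vt = np.linalg.svd(views, full_matrices=False)
    k = min(rank, len(s))
    return u[:, :k] * s[:k], vt[:k]                           # per-view weights and spatial basis

def reconstruct_view(weights, basis, view_index, shape):
    """Approximate the grid image of this feature as seen from one sampled viewing angle."""
    return (weights[view_index] @ basis).reshape(shape)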
Optionally, the three-dimensional face image generating device 300 further includes a memory 330, configured to store a computer program. The processor 320 reads the computer program from the memory 330 and executes it.
The device for generating three-dimensional face images of this embodiment of the present invention can be used to execute the technical solutions of the method embodiments shown above; the implementation principles and technical effects are similar and are not repeated here.
In a possible implementation, before sampling the N grid images corresponding to each feature, the processor 320 is further configured to:
align the N grid images corresponding to the same feature.
In another possible implementation, if the same pixel on the N two-dimensional face images is a feature of the face, the processor 320 is configured to:
perform grid division on the N two-dimensional images according to pixels, to obtain the N grid images corresponding to each pixel.
In another possible implementation, if the face includes M preset features, the processor 320 is configured to:
perform grid division on the N two-dimensional images according to the M features of the face, to obtain the N grid images corresponding to each feature.
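As an illustrative sketch of feature-based grid division, the M preset feature points can be triangulated and each of the N images cut into per-triangle patches, so that every feature ends up with N grid images; the use of a Delaunay triangulation and of axis-aligned bounding-box crops for the patches are assumptions made for this sketch.

import numpy as np
from scipy.spatial import Delaunay

def divide_by_features(images, landmarks):
    """images: N H x W x 3 arrays; landmarks: (M, 2) array of (x, y) preset feature points."""
    tri = Delaunay(landmarks)
    grids = {f: [] for f in range(len(tri.simplices))}
    for img in images:
        for f, simplex in enumerate(tri.simplices):
            pts = landmarks[simplex]
            x0, y0 = np.floor(pts.min(axis=0)).astype(int)
            x1, y1 = np.ceil(pts.max(axis=0)).astype(int)
            grids[f].append(img[y0:y1 + 1, x0:x1 + 1])        # one grid image per view
    return grids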
In another possible implementation, the processor 320 is specifically configured to:
obtain a three-dimensional model of the face according to the N two-dimensional face images; and
render the three-dimensional model using the surface light field data, to obtain the rendered three-dimensional face image.
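As an illustrative sketch of the rendering step, the color of a surface point under the current camera direction can be evaluated from the surface light field data by blending the two sampled viewing angles closest to that direction; the linear blend over equally spaced sample angles is an assumption made for this sketch.

import numpy as np

def view_dependent_color(sample_colors, sample_angles, view_angle):
    """sample_colors: (N, 3) colors of one surface point seen from N equally spaced sample angles."""
    sample_angles = np.asarray(sample_angles, dtype=float)
    a, b = np.argsort(np.abs(sample_angles - view_angle))[:2]  # two nearest sampled views
    span = abs(sample_angles[b] - sample_angles[a]) or 1.0
    w = abs(view_angle - sample_angles[a]) / span
    return (1.0 - w) * np.asarray(sample_colors[a]) + w * np.asarray(sample_colors[b])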
In another possible implementation, the processor 320 is specifically configured to:
obtain one grid image from the N grid images corresponding to the same feature as a target grid image, and align the N-1 grid images other than the target grid image in the N grid images with the target grid image.
In another possible implementation, the processor 320 is specifically configured to:
use the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of the same feature.
In another possible implementation, the processor 320 is specifically configured to:
determine the sum of the energies of the N grid images corresponding to all of the same features; and
use the grid image corresponding to each feature when the sum of the energies is minimum as the target grid image of that feature.
In another possible implementation, the processor 320 is specifically configured to:
determine the sum E(P) of the energies of the N grid images corresponding to all of the same features according to an energy formula defined over the following quantities:
the color value of the i-th grid image of the feature f; the brightness value corresponding to that color value; the sample quality corresponding to that color value; a feature f' adjacent to the feature f; the color value of the j-th grid image of the feature f'; and the color difference on the edge shared by the feature f and the feature f'.
In another possible implementation, the processor 320 is specifically configured to:
determine a similarity energy value between each grid image in the N-1 grid images and the target grid image; and
when the similarity energy value is maximum, determine that each grid image in the N-1 grid images is aligned with the target grid image.
In another possible implementation, the processor 320 is specifically configured to:
determine the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; and t_i, the translation of the i-th grid image of the feature f.
In another possible implementation, the processor 320 is specifically configured to:
determine the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; t_i, the translation of the i-th grid image of the feature f; and t_0, a preset value.
Optionally, the grid is a triangular grid.
The device for generating three-dimensional face images of this embodiment of the present invention can be used to execute the technical solutions of the method embodiments shown above; the implementation principles and technical effects are similar and are not repeated here.
Further, when at least part of the functions of the method for generating three-dimensional face images in the embodiments of the present invention is implemented by software, the embodiments of the present invention also provide a computer storage medium for storing the computer software instructions used for the above generation of three-dimensional face images, which, when run on a computer, cause the computer to execute the various possible methods for generating three-dimensional face images in the above method embodiments. When the computer-executable instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention may be produced in whole or in part. The computer instructions may be stored in a computer storage medium, or transmitted from one computer storage medium to another computer storage medium; the transmission may be performed wirelessly (for example, by cellular communication, infrared, short-range wireless, or microwave) to another website, computer, server, or data center. The computer storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (20)

1. A method for generating three-dimensional face images, characterized by comprising:
acquiring N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
performing grid division on the N two-dimensional face images to obtain N grid images corresponding to each feature on a face;
sampling the N grid images corresponding to each feature, so that the grid images in the N grid images corresponding to each feature have the same number of pixels and equal viewing-angle intervals;
decomposing and compressing the sampled N grid images corresponding to all features, to determine surface light field data of the face; and
obtaining a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
2. The method according to claim 1, wherein before the sampling of the N grid images corresponding to each feature, the method further comprises:
aligning the N grid images corresponding to the same feature.
3. The method according to claim 1 or 2, wherein if the same pixel on the N two-dimensional face images is a feature of the face, the performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature on the face comprises:
performing grid division on the N two-dimensional images according to pixels, to obtain the N grid images corresponding to each pixel.
4. The method according to claim 1 or 2, wherein if the face includes M preset features, the performing grid division on the N two-dimensional face images to obtain the N grid images corresponding to each feature on the face comprises:
performing grid division on the N two-dimensional images according to the M features of the face, to obtain the N grid images corresponding to each feature.
5. The method according to claim 1, wherein the obtaining a three-dimensional face image according to the surface light field data and the N two-dimensional face images comprises:
obtaining a three-dimensional model of the face according to the N two-dimensional face images; and
rendering the three-dimensional model using the surface light field data, to obtain the rendered three-dimensional face image.
6. The method according to claim 2, wherein the aligning the N grid images corresponding to the same feature comprises:
obtaining one grid image from the N grid images corresponding to the same feature as a target grid image, and aligning the N-1 grid images other than the target grid image in the N grid images with the target grid image.
7. The method according to claim 6, wherein the obtaining one grid image from the N grid images corresponding to the same feature as a target grid image comprises:
using the grid image with the largest diffuse color value among the N grid images corresponding to the same feature as the target grid image of the same feature.
8. The method according to claim 6, wherein the obtaining one grid image from the N grid images corresponding to the same feature as a target grid image comprises:
determining the sum of the energies of the N grid images corresponding to all of the same features; and
using the grid image corresponding to each feature when the sum of the energies is minimum as the target grid image of that feature.
9. The method according to claim 8, wherein the determining the sum of the energies of the N grid images corresponding to all of the same features comprises:
determining the sum E(P) of the energies of the N grid images corresponding to all of the same features according to an energy formula defined over the following quantities:
the color value of the i-th grid image of the feature f; the brightness value corresponding to that color value; the sample quality corresponding to that color value; a feature f' adjacent to the feature f; the color value of the j-th grid image of the feature f'; and the color difference on the edge shared by the feature f and the feature f'.
10. The method according to claim 6, wherein the aligning the N-1 grid images other than the target grid image in the N grid images with the target grid image comprises:
determining a similarity energy value between each grid image in the N-1 grid images and the target grid image; and
when the similarity energy value is maximum, determining that each grid image in the N-1 grid images is aligned with the target grid image.
11. The method according to claim 10, wherein the determining a similarity energy value between each grid image in the N-1 grid images and the target grid image comprises:
determining the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; and t_i, the translation of the i-th grid image of the feature f.
12. The method according to claim 10, wherein the determining a similarity energy value between each grid image in the N-1 grid images and the target grid image comprises:
determining the similarity energy value E_f(D_f, t) between each grid image in the N-1 grid images and the target grid image according to a formula defined over the following quantities:
D_f, the target grid image of the feature f; the color value of the i-th grid image of the feature f; t_i, the translation of the i-th grid image of the feature f; and t_0, a preset value.
13. The method according to claim 1, wherein the grid is a triangular grid.
14. A device for generating three-dimensional face images, characterized by comprising:
an acquisition module, configured to acquire N two-dimensional face images from different viewing angles, where N is an integer greater than 1;
a division module, configured to perform grid division on the N two-dimensional face images to obtain N grid images corresponding to each feature on a face;
a sampling module, configured to sample the N grid images corresponding to each feature, so that the grid images in the N grid images corresponding to each feature have the same number of pixels and equal viewing-angle intervals;
a determining module, configured to decompose and compress the sampled N grid images corresponding to all features, to determine surface light field data of the face; and
a model acquisition module, configured to obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
15. A device for generating three-dimensional face images, characterized by comprising:
a memory, configured to store a computer program; and
a processor, configured to execute the computer program to implement the method for generating three-dimensional face images according to any one of claims 1 to 13.
16. A device for generating three-dimensional face images, characterized by comprising a camera and a processor in communication connection, wherein:
the camera is configured to acquire N two-dimensional face images from different viewing angles, where N is an integer greater than 1; and
the processor is configured to perform grid division on the N two-dimensional face images to obtain N grid images corresponding to each feature on a face; sample the N grid images corresponding to each feature, so that the grid images in the N grid images corresponding to each feature have the same number of pixels and equal viewing-angle intervals; decompose and compress the sampled N grid images corresponding to all features, to determine surface light field data of the face; and obtain a three-dimensional face image according to the surface light field data and the N two-dimensional face images.
17. The device according to claim 16, wherein the processor is further configured to align the N grid images corresponding to the same feature.
18. The device according to claim 16 or 17, wherein if the same pixel on the N two-dimensional face images is a feature of the face, the processor is configured to:
perform grid division on the N two-dimensional images according to pixels, to obtain the N grid images corresponding to each pixel.
19. The device according to claim 16 or 17, wherein if the face includes M preset features, the processor is configured to:
perform grid division on the N two-dimensional images according to the M features of the face, to obtain the N grid images corresponding to each feature.
20. A computer storage medium, wherein a computer program is stored in the storage medium, and the computer program, when executed, implements the method for generating three-dimensional face images according to any one of claims 1 to 13.
CN201810968117.7A 2018-08-23 2018-08-23 Three-dimensional face image generation method and device Active CN109166176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810968117.7A CN109166176B (en) 2018-08-23 2018-08-23 Three-dimensional face image generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810968117.7A CN109166176B (en) 2018-08-23 2018-08-23 Three-dimensional face image generation method and device

Publications (2)

Publication Number Publication Date
CN109166176A true CN109166176A (en) 2019-01-08
CN109166176B CN109166176B (en) 2020-07-07

Family

ID=64896514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810968117.7A Active CN109166176B (en) 2018-08-23 2018-08-23 Three-dimensional face image generation method and device

Country Status (1)

Country Link
CN (1) CN109166176B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222363A (en) * 2011-07-19 2011-10-19 杭州实时数码科技有限公司 Method for fast constructing high-accuracy personalized face model on basis of facial images
CN104361630A (en) * 2014-10-21 2015-02-18 北京工业大学 Acquiring method of optical field of surface of human face
CN104599284A (en) * 2015-02-15 2015-05-06 四川川大智胜软件股份有限公司 Three-dimensional facial reconstruction method based on multi-view cellphone selfie pictures
CN106228507A (en) * 2016-07-11 2016-12-14 天津中科智能识别产业技术研究院有限公司 A kind of depth image processing method based on light field

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
THABO BEELER 等: "High-Quality Single-Shot Capture of Facial Geometry", 《ACM TRANSACTIONS ON GRAPHICS》 *
程龙 等: "基于光场渲染的多视点视频编解码方法研究", 《中国科学技术大学学报》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807836A (en) * 2020-01-08 2020-02-18 腾讯科技(深圳)有限公司 Three-dimensional face model generation method, device, equipment and medium
WO2021143284A1 (en) * 2020-01-15 2021-07-22 华为技术有限公司 Image processing method and apparatus, terminal and storage medium

Also Published As

Publication number Publication date
CN109166176B (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN108509848B (en) The real-time detection method and system of three-dimension object
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN109801374B (en) Method, medium, and system for reconstructing three-dimensional model through multi-angle image set
JP2023549821A (en) Deformable neural radiance field
CN109242961A (en) A kind of face modeling method, apparatus, electronic equipment and computer-readable medium
CN113272713B (en) System and method for performing self-improved visual odometry
CN111784821A (en) Three-dimensional model generation method and device, computer equipment and storage medium
US20240046557A1 (en) Method, device, and non-transitory computer-readable storage medium for reconstructing a three-dimensional model
Argudo et al. Single-picture reconstruction and rendering of trees for plausible vegetation synthesis
CN110458924B (en) Three-dimensional face model establishing method and device and electronic equipment
US11967024B2 (en) Extracting triangular 3-D models, materials, and lighting from images
EP4165606A1 (en) Object reconstruction with texture parsing
WO2023024441A1 (en) Model reconstruction method and related apparatus, and electronic device and storage medium
US20220319231A1 (en) Facial synthesis for head turns in augmented reality content
CN111612882A (en) Image processing method, image processing device, computer storage medium and electronic equipment
CN111862278A (en) Animation obtaining method and device, electronic equipment and storage medium
CN109166176A (en) The generation method and device of three-dimensional face images
CN117333637B (en) Modeling and rendering method, device and equipment for three-dimensional scene
WO2021151380A1 (en) Method for rendering virtual object based on illumination estimation, method for training neural network, and related products
Dong et al. A time-critical adaptive approach for visualizing natural scenes on different devices
CN116958396A (en) Image relighting method and device and readable storage medium
CN107240149A (en) Object dimensional model building method based on image procossing
US10861174B2 (en) Selective 3D registration
Shim Faces as light probes for relighting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant