CN106295561A - Facial image compression and restoration method and system based on parametrization and detail characterization - Google Patents
- Publication number: CN106295561A (application CN201610646843.8A)
- Authority: CN (China)
- Prior art keywords: image, facial image, details, texture, benchmark
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—Physics; G06—Computing, calculating or counting; G06V—Image or video recognition or understanding; G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10—Human or animal bodies; G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; face representation
- G06V40/172—Classification, e.g. identification
Abstract
A facial image compression and restoration method and system based on parametrization and detail characterization. A facial image is described by a parametrized representation together with texture details: an arbitrary facial image is decomposed into a reference image, expression templates, and texture parameters, which are stored, thereby achieving effective compression of the facial image. While substantially reducing the size of the transmitted data, the invention can accurately express and restore the content of the facial image, vividly reproducing the facial expression and detail texture features.
Description
Technical field
The present invention relates to a technology in the field of image processing, specifically a facial image compression and restoration method and system based on parametrization and detail characterization.
Background technology
Image compression refers to techniques that represent the original pixel matrix, lossily or losslessly, with fewer bits. Recently, as the use of facial images has grown, the compression of facial images and video has become increasingly important. The main compression methods for facial images include JPEG2000, principal component analysis, and vector quantization.
In 2008, Ori Bryt et al., in the paper "Compression of facial images using the K-SVD algorithm" published in the Journal of Visual Communication and Image Representation, introduced the K-SVD algorithm to train a dictionary that yields a sparse representation of facial images, and compressed the images on that basis. These compression methods, however, do not fully exploit the characteristics of facial images or the continuity of video when compressing facial images.
In 2014, Tao Xiaoming proposed locating and parametrizing the faces in a video at the transmitting end using a face model, obtaining a face-model parameter vector composed of an illumination parameter vector, a pose parameter vector, and a combined shape-and-appearance parameter vector; the original image is then obtained from the computed face shape and the recovered face appearance, removing temporal redundancy in face video to the greatest extent. However, this method ignores changes in facial expression and detail texture features, losing some details of the facial image.
Summary of the invention
Addressing the above deficiencies of the prior art, the present invention proposes a facial image compression and restoration method and system based on parametrization and detail characterization. By describing the facial image with a parametrized representation and detail characterization, it can accurately express and restore the content of the facial image while effectively reducing the size of the transmitted data, vividly reproducing the facial expression and detail texture features.
The present invention is achieved by the following technical solutions:
The present invention relates to a facial image compression method based on parametrization and detail characterization: a facial image is described by a parametrized representation together with texture details, and an arbitrary facial image is decomposed into a reference image, expression templates, and texture parameters, which are stored, thereby achieving effective compression of the facial image.
The parametrization refers to: extracting the feature point positions of the facial image and of its corresponding reference image, computing the feature point displacement of the facial image relative to the reference image, obtaining the texture detail characterization of the facial image from the facial image and the reference image, and computing from the existing expression templates the weight of each expression template in the texture detail characterization, i.e. the detail characterization parameters.
The feature point positions are obtained by the Active Appearance Model (AAM), which builds on the earlier Active Shape Model (ASM) algorithm and exploits the geometric information as well as the texture and illumination information of the face, using the Principal Component Analysis (PCA) method to obtain the feature point positions of the face picture and the reference picture.
The feature point positions can be obtained, for example but not exclusively, by the method described in Edwards G J, Taylor C J, Cootes T F. Interpreting Face Images Using Active Appearance Models [C] // IEEE International Conference on Automatic Face and Gesture Recognition, 1998. Proceedings. 1998: 145-149.
The texture detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v), where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel.
After the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to determine the proportion of each expression template in this detail texture difference, i.e.: a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v) = D(u,v), where ai is the weight of the i-th expression template in the texture detail characterization, namely the transmitted detail characterization parameter, and Mi(u,v) is the texture difference of the i-th expression template at this pixel.
The texture parameters include: the feature point displacement of the facial image relative to the reference image, and the proportion of each expression template in the detail texture difference after image alignment, i.e. the detail characterization parameters.
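To make the size reduction concrete, the following sketch serializes the transmitted representation. The landmark count (68 points, as in common face-landmark sets) is an assumption for illustration; the six weights correspond to the a1…a6 detail characterization parameters above.

```python
import numpy as np

# Hypothetical sizes: 68 landmarks and 6 expression templates,
# matching the a1..a6 weights in the text.
N_LANDMARKS = 68
N_TEMPLATES = 6

def compressed_payload(displacements: np.ndarray, weights: np.ndarray) -> bytes:
    """Serialize the parametric representation that replaces the full image:
    per-landmark (dx, dy) displacements plus one weight per expression template."""
    assert displacements.shape == (N_LANDMARKS, 2)
    assert weights.shape == (N_TEMPLATES,)
    return (displacements.astype(np.float32).tobytes()
            + weights.astype(np.float32).tobytes())

payload = compressed_payload(np.zeros((N_LANDMARKS, 2)), np.zeros(N_TEMPLATES))
# 68*2 + 6 = 142 float32 values -> 568 bytes, versus tens of kilobytes for
# a raw facial image: the parametric description is far smaller.
print(len(payload))  # 568
```

The reference image and expression templates are shared once between encoder and decoder, so only this small payload needs to be transmitted per image.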
The present invention also relates to a facial image restoration method based on the above compression: the reference image obtained from compression is triangulated according to its feature point positions to obtain the original triangular mesh; the feature points of the reference image are moved by the transmitted feature point displacements to obtain the updated triangular mesh; a preliminary restored image is then obtained by affine transformation and interpolation; finally, the expression templates and texture parameters are superposed on the preliminary restored image to complete the restoration.
The triangulation uses the Delaunay triangulation algorithm, triangulating separately the feature point positions of the reference image and the new feature point positions obtained from the feature point displacements. This guarantees that no two edges of the mesh intersect, and generates the mesh as "evenly" as possible, so that the interior angles of the triangles are as close to each other as possible.
The affine transformation refers to: (u', v')ᵀ = T·(x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the pixel before the affine transformation, and T is the 2×3 transformation matrix of the affine transformation computed from the triangular mesh vertex positions before and after the feature point displacement.
The interpolation process refers to: B0(u, v) = Σ_{P∈N} |(u − Px)(v − Py)|·B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
The superposition process refers to: B1(u, v) = D'(u, v)·B0(u, v), where B0(u, v) is the result of the affine transformation and interpolation of the reference image, and B1(u, v) is the final restored facial image. D'(u, v) is the texture difference after alignment between the facial image computed from the transmitted detail texture characterization parameters and the reference image, i.e. D'(u, v) = a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v), where ai is the weight of the i-th expression template in the texture detail characterization, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at this pixel.
The present invention also relates to a system implementing the above methods, comprising a facial image compression module and a facial image restoration module, wherein the compression module is connected to the restoration module and transmits the reference image, the feature point displacement of the facial image relative to the reference image, and, for each expression template, information on its proportion in the detail texture difference after image alignment.
The facial image compression module comprises: a feature point calculation unit and a detail texture difference calculation unit.
The facial image restoration module comprises: a feature point calculation unit, a feature point translation unit, a triangulation unit, an affine transformation and interpolation unit, and a texture superposition unit.
Technical effects
Compared with the prior art, the technical effects of the present invention include:
1. The invention exploits the characteristics of facial images, decomposing an arbitrarily expressive face picture into a reference image, feature point displacements, and expression-template weight parameters for the detail features. It can effectively compress the face picture, reduce redundant information, and reduce the size of the transmitted data.
2. By building a triangular mesh on the reference image and superposing the feature point displacements and detail features, the facial image is restored. An accurate facial image can be obtained, vividly reproducing the facial expression and detail texture features and guaranteeing the effectiveness of the facial image compression.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is a schematic diagram of the facial image features;
Fig. 3 is a schematic diagram of the facial image detail characterization;
Fig. 4 is a schematic diagram of the facial image triangulation.
Detailed description of the invention
The present embodiment comprises the following steps:
A: Extract the feature point positions of the facial image and of its reference image, and compute the feature point displacement of the facial image relative to the reference image.
The feature point positions can be obtained by the Active Appearance Model (AAM). This algorithm, proposed in 1998 by Cootes et al., builds on the earlier Active Shape Model (ASM) algorithm and exploits the geometric information as well as the texture and illumination information of the face; through training with the well-known Principal Component Analysis (PCA) method, it obtains the feature point positions of the face picture and the reference picture.
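For illustration, the displacement computation of this step can be sketched as follows. The landmark arrays are hypothetical; in practice they would come from an AAM fit of each image.

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) for three corresponding points.
ref_pts  = np.array([[30.0, 40.0], [80.0, 42.0], [55.0, 90.0]])  # reference image
face_pts = np.array([[32.0, 41.0], [79.0, 44.0], [56.0, 93.0]])  # input face image

# The displacement transmitted to the decoder is the per-point offset of the
# face landmarks relative to the reference landmarks.
displacement = face_pts - ref_pts
print(displacement)
```

At the decoder, adding this displacement back to the reference landmarks recovers the face's feature point positions.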
B: Triangulate according to the feature point positions of the facial image and its reference image, and align the two pictures by the corresponding points of their triangular meshes.
The triangulation can be obtained with the Delaunay triangulation algorithm. This algorithm is a "nearest-neighbor regularized" triangulation method: it not only guarantees that no two edges of the mesh intersect, but also generates the mesh as "evenly" as possible, making the interior angles of the triangles as close to each other as possible.
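This step can be sketched with SciPy's Delaunay implementation; the feature point coordinates below are hypothetical stand-ins for the AAM landmarks.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical feature point positions: the corners of a unit square plus its center.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])

tri = Delaunay(points)       # Delaunay triangulation of the feature points
print(len(tri.simplices))    # number of triangles in the mesh
```

Each row of `tri.simplices` lists the indices of one triangle's three vertices; the same index triples can be reused on the displaced points to obtain the corresponding mesh of the face image.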
The alignment process comprises an affine transformation and interpolation.
The affine transformation is a very important means of processing geometric images in computer graphics. Through an affine transformation, a geometric image can undergo any linear deformation. Affine transformations take many forms, but they always share two universal properties: straightness and parallelism. Straightness means that after the transformation a straight line remains a straight line and a curve remains a curve, and neither turns into the other; parallelism means that two lines parallel in the original image remain parallel after the transformation. An affine transformation can be decomposed into several basic transformations: rotation, shear, translation, scaling, and reflection. That is: (u', v')ᵀ = T·(x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the pixel before the affine transformation, and T is the 2×3 transformation matrix of the affine transformation computed from the corresponding triangular mesh vertices of the facial image and the reference image.
The interpolation computes the color of a pixel whose transformed coordinates fall at a non-integer position from the colors of the surrounding pixels, i.e.: B0(u, v) = Σ_{P∈N} |(u − Px)(v − Py)|·B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
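The per-triangle matrix T can be recovered by solving a small linear system from the three vertex correspondences. A minimal sketch, using hypothetical triangle coordinates:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for the 2x3 matrix T with (u', v')^T = T @ (x, y, 1)^T that maps
    the three source-triangle vertices onto the destination-triangle vertices."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])   # 3x3: rows are (x, y, 1)
    # Solve A @ T.T = dst for the six unknown affine coefficients.
    T = np.linalg.solve(A, dst).T           # 2x3
    return T

# Hypothetical triangle correspondence: a pure translation by (2, 3).
T = affine_from_triangles([[0, 0], [1, 0], [0, 1]],
                          [[2, 3], [3, 3], [2, 4]])
print(T)  # [[1. 0. 2.] [0. 1. 3.]]
```

Three non-collinear point pairs determine the six affine coefficients exactly, which is why the triangulated mesh gives one well-defined warp per triangle.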
C: Obtain the texture detail characterization of the facial image from the aligned facial image and the reference image, and parametrize it with the weights of the expression templates computed from the existing expression templates.
The texture detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v), where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel.
After the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to determine the proportion of each expression template in the detail texture difference, as in:
a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v) = D(u,v)
where ai is the weight of the i-th expression template, namely the transmitted detail characterization parameter, and Mi(u,v) is the texture difference of the i-th expression template at this pixel.
This is an overdetermined system of equations, and the optimal solution can be obtained by the least-squares method.
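The least-squares solve for the weights a1…a6 can be sketched as follows. The templates and texture-difference map are synthetic stand-ins; each pixel contributes one equation, giving far more equations than the six unknowns.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 8, 8, 6                  # tiny image and the 6 expression templates

M = rng.normal(size=(H * W, K))    # column i: template M_i flattened over pixels
a_true = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])
D = M @ a_true                     # texture-difference map D(u, v), flattened

# One equation per pixel, 6 unknowns: an overdetermined system solved by
# least squares, as the text prescribes.
a, *_ = np.linalg.lstsq(M, D, rcond=None)
print(a)
```

With real images D would not lie exactly in the span of the templates, and the least-squares solution minimizes the residual texture error over all pixels.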
D: Transmit the feature point displacements and the detail characterization parameters.
E: Triangulate the reference image according to its feature points to obtain its triangular mesh.
The triangulation can be obtained with the Delaunay triangulation algorithm.
F: Move the feature points of the reference image by the transmitted feature point displacements to obtain a new triangular mesh, and perform the affine transformation and interpolation to obtain the preliminarily restored image.
The affine transformation is: (u', v')ᵀ = T·(x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the pixel before the affine transformation, and T is the transformation matrix of the affine transformation computed from the triangular mesh vertices before and after the feature point movement.
The interpolation is: B0(u, v) = Σ_{P∈N} |(u − Px)(v − Py)|·B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
G: Restore the detail content of the image from the detail characterization parameters and superpose it on the restored image.
The superposition process refers to: B1(u, v) = D'(u, v)·B0(u, v), where B0(u, v) is the result of the affine transformation and interpolation of the reference image, and B1(u, v) is the final restored facial image. D'(u, v) is the texture difference after alignment between the facial image computed from the transmitted detail texture characterization parameters and the reference image, i.e.: D'(u, v) = a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v), where ai is the weight of the i-th expression template, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at this pixel.
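A minimal sketch of this reconstruction step, with synthetic templates and a synthetic warped reference image:

```python
import numpy as np

H, W, K = 4, 4, 6
rng = np.random.default_rng(1)

B0 = rng.uniform(0.2, 1.0, size=(H, W))          # warped, interpolated reference image
M  = rng.normal(1.0, 0.05, size=(K, H, W))       # per-template texture differences
a  = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.1])  # transmitted detail parameters

# D'(u, v) = a1*M1 + ... + a6*M6, then B1 = D' * B0 element-wise, per the text.
D_prime = np.tensordot(a, M, axes=1)
B1 = D_prime * B0
print(B1.shape)  # (4, 4)
```

The element-wise product applies the per-pixel texture ratio to the geometrically warped reference, yielding the final restored facial image.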
Those skilled in the art can make local adjustments to the above embodiment in different ways without departing from the principle and purpose of the present invention. The scope of protection of the present invention is defined by the claims and is not limited by the above embodiment; every implementation within its scope is bound by the present invention.
Claims (9)
1. A facial image compression method based on parametrization and detail characterization, characterized in that a facial image is described by a parametrized representation together with texture details: an arbitrary facial image is decomposed into a reference image, expression templates, and texture parameters, which are stored, thereby achieving effective compression of the facial image;
the parametrization refers to: extracting the feature point positions of the facial image and of its corresponding reference image, computing the feature point displacement of the facial image relative to the reference image, obtaining the texture detail characterization of the facial image from the facial image and the reference image, and computing from the existing expression templates the weight of each expression template in the texture detail characterization, i.e. the detail characterization parameters;
the texture parameters include: the feature point displacement of the facial image relative to the reference image, and the proportion of each expression template in the detail texture difference after image alignment, i.e. the detail characterization parameters.
2. The compression method according to claim 1, characterized in that the feature point positions are obtained by the Active Appearance Model, which builds on the earlier Active Shape Model algorithm and exploits the geometric information as well as the texture and illumination information of the face, using the principal component analysis method to obtain the feature point positions of the face picture and the reference picture.
3. The compression method according to claim 1, characterized in that the texture detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v), where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel;
after the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to determine the proportion of each expression template in the detail texture difference, i.e.: a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v) = D(u,v), where ai is the weight of the i-th expression template in the texture detail characterization, namely the transmitted detail characterization parameter, and Mi(u,v) is the texture difference of the i-th expression template at this pixel.
4. A facial image restoration method based on the image compression of any one of the preceding claims, characterized in that the reference image obtained from compression is triangulated according to its feature point positions to obtain the original triangular mesh; the feature points of the reference image are moved by the transmitted feature point displacements to obtain the updated triangular mesh; a preliminarily restored image is then obtained by affine transformation and interpolation; finally, the expression templates and texture parameters are superposed on the preliminarily restored image to achieve the restoration.
5. The restoration method according to claim 4, characterized in that the triangulation uses the Delaunay triangulation algorithm, triangulating separately the feature point positions of the reference image and the new feature point positions obtained from the feature point displacements, so as to guarantee that no two edges of the mesh intersect and to generate the mesh as "evenly" as possible, making the interior angles of the triangles as close to each other as possible.
6. The restoration method according to claim 4, characterized in that the affine transformation refers to: (u', v')ᵀ = T·(x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the pixel before the affine transformation, and T is the transformation matrix of the affine transformation computed from the triangular mesh vertex positions before and after the feature point displacement.
7. The restoration method according to claim 4, characterized in that the interpolation process refers to: B0(u, v) = Σ_{P∈N} |(u − Px)(v − Py)|·B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
8. The restoration method according to claim 4, characterized in that the superposition process refers to: B1(u, v) = D'(u, v)·B0(u, v), where B0(u, v) is the result of the affine transformation and interpolation of the reference image, and B1(u, v) is the final restored facial image; D'(u, v) is the texture difference after alignment between the facial image computed from the transmitted detail texture characterization parameters and the reference image, i.e. D'(u, v) = a1·M1(u,v) + a2·M2(u,v) + … + a6·M6(u,v), where ai is the weight of the i-th expression template in the texture detail characterization, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at this pixel.
9. A system implementing the above compression method or restoration method, characterized by comprising: a facial image compression module and a facial image restoration module, wherein the facial image compression module is connected to the facial image restoration module and transmits the reference image, the feature point displacement of the facial image relative to the reference image, and, for each expression template, information on its proportion in the detail texture difference after image alignment;
the facial image compression module comprises: a feature point calculation unit and a detail texture difference calculation unit;
the facial image restoration module comprises: a feature point calculation unit, a feature point translation unit, a triangulation unit, an affine transformation and interpolation unit, and a texture superposition unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610646843.8A CN106295561B (en) | 2016-08-09 | 2016-08-09 | Compressed Facial Image and restoring method and system based on parametrization and details characterization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295561A true CN106295561A (en) | 2017-01-04 |
CN106295561B CN106295561B (en) | 2019-06-18 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108629410A (en) * | 2018-04-28 | 2018-10-09 | 中国科学院计算技术研究所 | Based on principal component analysis dimensionality reduction and/or rise the Processing with Neural Network method tieed up |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103577815A (en) * | 2013-11-29 | 2014-02-12 | 中国科学院计算技术研究所 | Face alignment method and system |
CN104023216A (en) * | 2014-05-28 | 2014-09-03 | 清华大学 | Face video compression method |
CN104917532A (en) * | 2015-05-06 | 2015-09-16 | 清华大学 | Face model compression method |
Non-Patent Citations (4)

Title |
---|
DONG LIU et al.: "Edge-based inpainting and texture synthesis for image compression", 2007 IEEE International Conference on Multimedia and Expo |
NIKOLA SPRLJAN et al.: "New perspectives on image compression using a cartoon-texture decomposition model", Proceedings EC-VIP-MC 2003, 4th EURASIP Conference focused on Video/Image Processing and Multimedia Communications |
DAI YI et al.: "A bit-plane-based face recognition algorithm in the compressed domain", Computer Engineering and Applications |
YU JUN: "A model-based compression algorithm for face video coding parameters", Journal of Chinese Computer Systems |
Legal Events

Code | Title |
---|---|
C06, PB01 | Publication |
C10, SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |