CN106295561B - Facial image compression and restoration method and system based on parametrization and detail characterization


Info

Publication number
CN106295561B
Application number
CN201610646843.8A
Authority
CN (China)
Prior art keywords
image
facial image
feature point
reference image
texture
Priority date
2016-08-09
Filing date
2016-08-09
Publication date
2019-06-18
Legal status
Active
Other languages
Chinese (zh)
Other versions
CN106295561A (en)
Inventors
Lin Weiyao (林巍峣)
Wang Yanning (王琰宁)
Zhang Yihao (张亿皓)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Application filed by Shanghai Jiaotong University
Priority to CN201610646843.8A
Publication of CN106295561A
Publication of CN106295561B
Application granted

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Abstract

A facial image compression and restoration method and system based on parametrization and detail characterization. The facial image is described by parametrization and texture-detail characterization: any facial image is decomposed into a reference image, expression templates, and texture parameters, which are saved, thereby achieving effective compression of the facial image. The present invention can accurately express and restore the content of the facial image while effectively reducing the size of the transmitted data, vividly restoring the facial expression and detailed texture features.

Description

Facial image compression and restoration method and system based on parametrization and detail characterization
Technical field
The present invention relates to a technology in the field of image processing, specifically a facial image compression and restoration method and system based on parametrization and detail characterization.
Background art
Image compression refers to techniques that represent the original pixel matrix with fewer bits, either lossily or losslessly.
In recent years, as the use of facial images has grown, the compression of facial images and video has become increasingly important. Compression methods targeted at facial images mainly include JPEG2000, principal component analysis, and vector quantization.
In 2008, Ori Bryt et al. published the paper "Compression of facial images using the K-SVD algorithm" in the Journal of Visual Communication and Image Representation, introducing the K-SVD algorithm to train a dictionary and obtain a sparse representation of facial images, thereby compressing the image.
However, these compression methods do not make full use of the characteristics of facial images or the continuity of video when compressing facial images.
In 2014, Tao Xiaoming proposed using a face model at the transmitting end to locate and parametrically characterize the face in video, obtaining a face-model parameter vector composed of an illumination parameter vector, a pose parameter vector, and a combined shape-and-appearance parameter vector; the original image is then obtained from the computed face shape and the recovered face appearance, removing redundancy in the face video in the time domain to the greatest extent. However, this method ignores variations in facial expression and detailed texture features, losing some details of the facial image.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes a facial image compression and restoration method and system based on parametrization and detail characterization. By describing the facial image with parametrization and detail characterization, it can accurately express and restore the content of the facial image while effectively reducing the size of the transmitted data, vividly restoring the facial expression and detailed texture features.
The present invention is achieved by the following technical solutions:
The present invention relates to a facial image compression method based on parametrization and detail characterization: the facial image is described by parametrization and texture-detail characterization, i.e., any facial image is decomposed into a reference image, expression templates, and texture parameters, which are saved, thereby achieving effective compression of the facial image.
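As an illustration only, the decomposition can be viewed as producing a small record per facial image. The sketch below shows one possible layout of the saved data, assuming six expression templates as in the description; the class and field names are hypothetical and not part of the claimed invention.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CompressedFace:
    """Hypothetical container for the quantities saved/transmitted per facial image."""
    reference_id: int                # which reference image A0 the decoder should use
    point_displacement: np.ndarray   # feature point displacement, shape (num_points, 2)
    detail_params: np.ndarray        # detail characterization parameters a1..a6, shape (6,)
```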
The parametrization refers to: extracting the feature point positions of the facial image and its corresponding reference image and calculating the feature point displacement of the facial image relative to the reference image; obtaining the texture-detail characterization of the facial image from the facial image and the reference image; and computing, from a set of existing expression templates, the weight of each expression template in that texture-detail characterization, i.e., the detail characterization parameters.
The feature point positions are obtained with the active appearance model (AAM), which builds on the earlier active shape model (ASM) algorithm; it exploits the geometric information as well as the texture and illumination information of the face, and obtains the feature point positions of the face picture and the reference picture through principal component analysis (PCA).
The feature point positions may be obtained using, but are not limited to, the approach described in Edwards G J, Taylor C J, Cootes T F. Interpreting Face Images Using Active Appearance Models [C] // IEEE International Conference on Automatic Face and Gesture Recognition, 1998. Proceedings. 1998: 145-149.
The texture-detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v), where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel.
After the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to obtain the proportion contributed by each expression template to this detail texture difference, i.e.: a1·M1(u, v) + a2·M2(u, v) + … + a6·M6(u, v) = D(u, v), where ai is the weight of the i-th expression template in the texture-detail characterization, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at that pixel.
The texture parameters include: the feature point displacement of the facial image relative to the reference image, and the proportion contributed by each expression template to the detail texture difference after image alignment, i.e., the detail characterization parameters.
The present invention also relates to a facial image restoration method based on the above compression: the reference image obtained from compression is triangulated according to its feature point positions to obtain the original triangular mesh; the feature points of the reference image are moved according to the transmitted feature point displacement to obtain an updated triangular mesh; a preliminary restored image is then obtained through affine transformation and interpolation; finally, the expression templates and texture parameters are superimposed onto the preliminary restored image to achieve restoration.
The triangulation uses the Delaunay triangulation algorithm to triangulate, respectively, the feature point positions of the reference image and the new feature point positions obtained from the feature point displacement, which guarantees that no two edges in the mesh intersect and generates the mesh as "evenly" as possible, making the interior angles of each triangle as close to one another as possible.
The affine transformation refers to: (u', v', 1)ᵀ = T · (x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the image pixel before the affine transformation, and T is the affine transformation matrix computed from the vertex positions of the triangular mesh before and after the feature point displacement.
The interpolation process refers to: B0(u, v) = Σ_{P∈N} |(u - Px)(v - Py)| · B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
The superposition process refers to: B1(u, v) = D'(u, v) · B0(u, v), where B0(u, v) is the result after the affine transformation and interpolation of the reference image, and B1(u, v) is the finally restored facial image. D'(u, v) is the texture difference between the facial image and the reference image after alignment, computed from the transmitted detail characterization parameters, i.e. D'(u, v) = a1·M1(u, v) + a2·M2(u, v) + … + a6·M6(u, v), where ai is the weight of the i-th expression template in the texture-detail characterization, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at that pixel.
The present invention also relates to a system implementing the above methods, comprising a facial image compression module and a facial image restoration module, wherein the facial image compression module is connected to the facial image restoration module and transmits the reference image, the feature point displacement of the facial image relative to the reference image, and the proportion contributed by each expression template to the detail texture difference after image alignment.
The facial image compression module includes: a feature point calculation unit and a detail texture difference calculation unit.
The facial image restoration module includes: a feature point calculation unit, a feature point translation unit, a triangulation unit, an affine transformation and interpolation unit, and a texture superposition unit.
Technical effect
Compared with the prior art, the technical effects of the present invention include:
1. By exploiting the characteristics of facial images, the present invention decomposes a picture of a face with any expression into a reference image, feature point displacements, and expression-template weight parameters for the fine details, which effectively compresses the face picture, reduces redundant information, and reduces the size of the transmitted data.
2. By building a triangular mesh on the reference image, displacing its feature points, and superimposing the fine details, the facial image is restored accurately, vividly recovering the facial expression and detailed texture features and guaranteeing the validity of the facial image compression.
Brief description of the drawings
Fig. 1 is a flow chart of the present invention;
Fig. 2 is a schematic diagram of the facial image feature points;
Fig. 3 is a schematic diagram of the facial image detail characterization;
Fig. 4 is a schematic diagram of the facial image triangulation.
Specific embodiment
The present embodiment includes the following steps:
A: Extract the feature point positions of the facial image and of its reference image, and compute the feature point displacement of the facial image relative to the reference image.
The feature point positions can be obtained with the active appearance model (AAM). The algorithm was proposed by Cootes et al. in 1998; based on the earlier active shape model (ASM) algorithm, it exploits the geometric information as well as the texture and illumination information of the face, and obtains the feature point positions of the face picture and the reference picture by training with the well-known principal component analysis (PCA) method.
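The following is a minimal sketch of step A in Python. It uses dlib's pretrained 68-point landmark detector purely as a stand-in for the AAM fitting described above (the invention specifies AAM; dlib is assumed here only for illustration, and the model file and image paths are assumptions).

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model, downloaded separately (file name is an assumption)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(gray):
    """Return an (N, 2) array of feature point coordinates for the first detected face."""
    rect = detector(gray, 1)[0]
    shape = predictor(gray, rect)
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)            # facial image A1 (assumed path)
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # reference image A0 (assumed path)

pts_face = landmarks(face)
pts_ref = landmarks(reference)
displacement = pts_face - pts_ref   # feature point displacement transmitted in step D
```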
B: Triangulate the feature point positions of the facial image and of its reference image, and align the two pictures according to the corresponding feature points of their triangular meshes.
The triangulation can be obtained with the Delaunay triangulation algorithm. This algorithm is a "closest-to-regular" triangulation method: it guarantees that no two edges in the mesh intersect and generates the mesh as "evenly" as possible, making the interior angles of each triangle as close to one another as possible.
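A sketch of the triangulation in step B with SciPy's Delaunay implementation, reusing the feature point arrays from the previous sketch. Reusing the reference mesh's simplices for the facial image's points keeps the triangles in correspondence; this is an assumed implementation choice rather than something stated in the text.

```python
from scipy.spatial import Delaunay

tri_ref = Delaunay(pts_ref)      # triangulate the reference image's feature points
triangles = tri_ref.simplices    # (num_triangles, 3) vertex indices into the point array

# Corresponding triangles on the facial image: same vertex indices, displaced positions
ref_tris = pts_ref[triangles]    # (num_triangles, 3, 2)
face_tris = pts_face[triangles]
```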
The alignment procedure includes affine transformation and interpolation.
The affine transformation (Affine Transformation) is an important means of processing geometric images in the field of graphics. Through an affine transformation, a geometric image can realize arbitrary linear deformation. Affine transformations take many forms, but they always have two universal properties, straightness and parallelism. Straightness means that after an affine transformation a straight line remains a straight line and a curve remains a curve, and the two are not interchanged; parallelism means that two lines that are parallel in the original image remain parallel after the affine transformation. An affine transformation can be decomposed into several basic transformations: rotation, shear, translation, scaling, and reflection. That is: (u', v', 1)ᵀ = T · (x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the image pixel before the affine transformation, and T is the affine transformation matrix computed from the corresponding triangular mesh vertices of the facial image and the reference image.
The interpolation computes, for a pixel whose transformed coordinates fall at a non-integer position after the affine transformation, the color of that pixel from the colors of the surrounding pixels, i.e.: B0(u, v) = Σ_{P∈N} |(u - Px)(v - Py)| · B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
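A sketch of the alignment (affine transformation plus interpolation) using OpenCV: the affine matrix of each triangle is computed from its three corresponding vertices, and the sub-pixel interpolation is handled internally by bilinear sampling. This is an illustrative piecewise-affine warp under those assumptions, not necessarily the exact implementation of the invention.

```python
import cv2
import numpy as np

def piecewise_affine_warp(src_img, src_pts, dst_pts, triangles):
    """Warp src_img so that src_pts land on dst_pts, one triangle at a time."""
    h, w = src_img.shape[:2]
    out = np.zeros_like(src_img)
    for tri in triangles:
        src_tri = np.float32(src_pts[tri])
        dst_tri = np.float32(dst_pts[tri])
        # Affine matrix computed from the three corresponding triangle vertices
        M = cv2.getAffineTransform(src_tri, dst_tri)
        # warpAffine resamples with bilinear interpolation at non-integer positions
        warped = cv2.warpAffine(src_img, M, (w, h), flags=cv2.INTER_LINEAR)
        # Keep only the interior of the destination triangle
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 1)
        out[mask == 1] = warped[mask == 1]
    return out

# Align the facial image onto the reference image's geometry before comparing textures
aligned_face = piecewise_affine_warp(face, pts_face, pts_ref, triangles)
```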
C: Obtain the texture-detail characterization of the facial image from the aligned facial image and reference image, and parametrize it by computing the weight of each expression template from the existing expression templates.
The texture-detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v),
where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel.
After the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to obtain the proportion contributed by each expression template to this detail texture difference, as in the following formula:
a1·M1(u, v) + a2·M2(u, v) + … + a6·M6(u, v) = D(u, v)
where ai is the weight of the i-th expression template, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at that pixel.
This is an overdetermined system of equations, whose optimal solution can be obtained by the least squares method.
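A sketch of the least-squares solution with NumPy: each expression template's texture difference Mi is flattened into one column of a design matrix, and the overdetermined system a1·M1 + … + a6·M6 = D is solved for the six detail characterization parameters. The ratio form of D(u, v) and the template file names are assumptions; the ratio is chosen to be consistent with the multiplicative superposition B1 = D'·B0 used in restoration.

```python
import numpy as np

eps = 1e-6  # guard against division by zero; not specified in the text

# Texture difference between the aligned facial image and the reference image (assumed ratio form)
D = (aligned_face.astype(np.float64) + eps) / (reference.astype(np.float64) + eps)

# Hypothetical files holding the six expression-template texture differences M1..M6
templates = [np.load(f"template_{i}.npy") for i in range(1, 7)]

A = np.stack([M.ravel() for M in templates], axis=1)  # one column per expression template
b = D.ravel()

# Least-squares solution of the overdetermined system a1*M1 + ... + a6*M6 = D
detail_params, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```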
D: Transmit the feature point displacement and the detail characterization parameters.
E: Triangulate the reference image according to its feature points to obtain its triangular mesh.
The triangulation can be obtained with the Delaunay triangulation algorithm.
F: Move the feature points of the reference image according to the transmitted feature point displacement to obtain a new triangular mesh, then perform affine transformation and interpolation to obtain a preliminarily restored image.
The affine transformation is: (u', v', 1)ᵀ = T · (x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are the coordinates of the image pixel before the affine transformation, and T is the affine transformation matrix computed from the triangular mesh vertices before and after the feature point movement.
The interpolation is: B0(u, v) = Σ_{P∈N} |(u - Px)(v - Py)| · B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
G: Restore the detail content of the image according to the detail characterization parameters and superimpose it onto the restored image.
The superposition process refers to: B1(u, v) = D'(u, v) · B0(u, v), where B0(u, v) is the result after the affine transformation and interpolation of the reference image, and B1(u, v) is the finally restored facial image. D'(u, v) is the texture difference between the facial image and the reference image after alignment, computed from the transmitted detail characterization parameters, i.e.: D'(u, v) = a1·M1(u, v) + a2·M2(u, v) + … + a6·M6(u, v), where ai is the weight of the i-th expression template, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at that pixel.
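A sketch of steps F and G at the decoder, reusing names and helpers from the earlier sketches (all of which are assumptions): the reference image is warped onto the displaced feature points with the same piecewise-affine routine, then the detail texture D' is rebuilt from the transmitted parameters and multiplied in.

```python
import numpy as np

# Step F: preliminary restoration - warp the reference image onto the displaced feature points
new_pts = pts_ref + displacement   # transmitted feature point displacement
B0 = piecewise_affine_warp(reference, pts_ref, new_pts, triangles)

# Step G: rebuild the detail texture D' from the transmitted parameters and superimpose it
D_prime = sum(a_i * M_i for a_i, M_i in zip(detail_params, templates))
B1 = np.clip(D_prime * B0.astype(np.float64), 0, 255).astype(np.uint8)  # restored facial image
```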
Those skilled in the art may make local adjustments to the above specific implementation in different ways without departing from the principle and purpose of the present invention. The scope of protection of the present invention is defined by the claims and is not limited by the above specific implementation; each implementation within its scope is bound by the present invention.

Claims (8)

1. A facial image compression method based on parametrization and detail characterization, characterized in that the facial image is described by parametrization and texture-detail characterization, i.e., any facial image is decomposed into a reference image, expression templates, and texture parameters, which are saved, thereby achieving effective compression of the facial image;
the parametrization refers to: extracting the feature point positions of the facial image and its corresponding reference image and calculating the feature point displacement of the facial image relative to the reference image; obtaining the texture-detail characterization of the facial image from the facial image and the reference image; and computing, from a set of existing expression templates, the weight of each expression template in the texture-detail characterization, i.e., the detail characterization parameters;
the texture parameters include: the feature point displacement of the facial image relative to the reference image, and the proportion contributed by each expression template to the detail texture difference after image alignment, i.e., the detail characterization parameters.
2. The compression method according to claim 1, characterized in that the feature point positions are obtained with the active appearance model, which is based on the earlier active shape model algorithm, exploits the geometric information as well as the texture and illumination information of the face, and obtains the feature point positions of the face picture and the reference picture by principal component analysis.
3. The compression method according to claim 1, characterized in that the texture-detail characterization refers to: D(u, v) = A1(u, v) / A0(u, v), where D(u, v) is the texture difference between the facial image and the reference image after alignment, A1(u, v) is the facial image, A0(u, v) is the reference image, and u, v are the coordinates of an image pixel;
after the texture difference is obtained, an equation is set up for each pixel of the image and the equations are solved simultaneously to obtain the proportion contributed by each expression template to the detail texture difference, i.e.: a1·M1(u, v) + a2·M2(u, v) + … + a6·M6(u, v) = D(u, v), where ai is the weight of the i-th expression template in the texture-detail characterization, namely the transmitted detail characterization parameter, and Mi(u, v) is the texture difference of the i-th expression template at that pixel.
4. A facial image restoration method based on the image compression of any one of the preceding claims, characterized in that the reference image obtained from compression is triangulated according to its feature point positions to obtain the original triangular mesh; the feature points of the reference image are moved according to the transmitted feature point displacement to obtain an updated triangular mesh; a preliminary restored image is then obtained through affine transformation and interpolation; and finally the expression templates and texture parameters are superimposed onto the preliminary restored image to achieve restoration.
5. The restoration method according to claim 4, characterized in that the triangulation uses the Delaunay triangulation algorithm to triangulate, respectively, the feature point positions of the reference image and the new feature point positions obtained from the feature point displacement, which guarantees that no two edges in the mesh intersect and generates the mesh as "evenly" as possible, making the interior angles of each triangle as close to one another as possible.
6. The restoration method according to claim 4, characterized in that the affine transformation refers to: (u', v', 1)ᵀ = T · (x, y, 1)ᵀ, where u', v' are the coordinates of an image pixel after the affine transformation, x, y are its coordinates before the transformation, and T is the affine transformation matrix computed from the triangular mesh vertex positions before and after the feature point displacement.
7. The restoration method according to claim 4, characterized in that the interpolation process refers to: B0(u, v) = Σ_{P∈N} |(u - Px)(v - Py)| · B0(Px, Py), where B0(u, v) is the color of the point to be interpolated, N is the set of the four points nearest to the interpolation point, Px and Py are the coordinates of a nearest point P, and B0(Px, Py) is the color of the nearest point P.
8. A system for realizing the above compression method or restoration method, characterized by comprising: a facial image compression module and a facial image restoration module, wherein the facial image compression module is connected to the facial image restoration module and transmits the reference image, the feature point displacement of the facial image relative to the reference image, and the proportion contributed by each expression template to the detail texture difference after image alignment;
the facial image compression module includes: a feature point calculation unit and a detail texture difference calculation unit;
the facial image restoration module includes: a feature point calculation unit, a feature point translation unit, a triangulation unit, an affine transformation and interpolation unit, and a texture superposition unit.
CN201610646843.8A 2016-08-09 2016-08-09 Facial image compression and restoration method and system based on parametrization and detail characterization Active CN106295561B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610646843.8A CN106295561B (en) 2016-08-09 2016-08-09 Facial image compression and restoration method and system based on parametrization and detail characterization


Publications (2)

Publication Number Publication Date
CN106295561A CN106295561A (en) 2017-01-04
CN106295561B true CN106295561B (en) 2019-06-18

Family

ID=57667127


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629410B (en) * 2018-04-28 2021-01-22 中国科学院计算技术研究所 Neural network processing method based on principal component analysis dimension reduction and/or dimension increase


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577815A (en) * 2013-11-29 2014-02-12 中国科学院计算技术研究所 Face alignment method and system
CN104023216A (en) * 2014-05-28 2014-09-03 清华大学 Face video compression method
CN104917532A (en) * 2015-05-06 2015-09-16 清华大学 Face model compression method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Edge-based inpainting and texture synthesis for image compression; Dong Liu et al.; 2007 IEEE International Conference on Multimedia and Expo; 2007-08-08; 1443-1446
New perspectives on image compression using a cartoon-texture decomposition model; Nikola Sprljan et al.; Proceedings EC-VIP-MC 2003, 4th EURASIP Conference focused on Video/Image Processing and Multimedia Communications; 2003-08-11; 359-368
A bit-plane-based compressed-domain face recognition algorithm; Dai Yi et al.; Computer Engineering and Applications; 2010-01-01; Vol. 46, No. 1; 140-142
A model-based face video coding parameter compression algorithm; Yu Jun; Journal of Chinese Computer Systems; 2016-07-15; Vol. 37, No. 7; 1562-1566


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant