CN108111868B - MMDA-based privacy protection method with unchangeable expression - Google Patents
- Publication number
- CN108111868B (application CN201711148720.2A)
- Authority
- CN
- China
- Prior art keywords
- expression
- face
- mmda
- feature
- attribute
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Abstract
The invention belongs to the technical field of video privacy protection and discloses an MMDA (Multi-Modal Discriminant Analysis)-based privacy protection method that leaves the facial expression unchanged. MMDA decomposes a human face into independent subspaces of different attributes; the features of selected subspaces are changed while the other subspaces are kept unchanged, so that the privacy information of the face is protected while its expression stays the same. In live video, the method retains expression information while protecting face privacy information: a multi-modal feature space is established, the attribute features other than expression are changed, and the expression features are held fixed, so that the original expression is preserved while face privacy protection is achieved. Unlike traditional methods that protect the whole face directly, the face is decomposed by multi-modal discriminant analysis into independent attribute subspaces, and only the features of the non-expression subspaces are altered.
Description
Technical Field
The invention belongs to the technical field of video privacy protection, and particularly relates to a privacy protection method with unchangeable expression based on MMDA (Multi-Modal Discriminant Analysis).
Background
The rapid development of the internet, video surveillance systems, and live video applications has made video privacy protection a growing concern. The goal is to support such video applications without leaking private information. With current face privacy protection techniques, however, the facial expression changes once protection is applied. After a face is protected by blurring, mosaicking, and similar techniques, the original expression information is altered along with everything else and is not preserved. K-Same and related techniques likewise damage expression information when averaging over K similar face images, so the protected face may no longer show the original expression.
In summary, the problem with the prior art is that the facial expression changes after current face privacy protection techniques are applied. In applications such as live video streaming, face privacy protection is not only a question of whether private information is hidden; the naturalness and integrity of the face in the real-time video must also be maintained. Protecting face privacy while keeping the expression unchanged is therefore a problem to be solved in live video and similar fields.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an MMDA-based privacy protection method with unchangeable expression.
The invention is realized as follows: the MMDA-based privacy protection method with unchangeable expression retains expression information while protecting face privacy information in live video. Using MMDA multi-modal discriminant analysis, a face is decomposed into independent subspaces of different attributes to establish a multi-modal feature space; the attribute features other than expression are changed while the expression features are kept unchanged, so that face privacy is protected and the original expression is preserved at the same time.
Further, the MMDA-based privacy protection method with unchangeable expression comprises the following steps:
step one, preprocessing and normalization:
(1) selecting 20 images with the size of 640 x 490 from a CK face library as a training sample set, wherein each image in the training sample set has three face attributes of gender, race and expression, the gender attribute comprises male and female, the race attribute comprises Africa and Europe, and the expression attribute comprises five classes of smile, anger, sadness, fear and surprise;
(2) acquiring a continuous static image sequence through a camera or other equipment in video live broadcast;
(3) selecting 68 key feature points from the face image, recording the coordinate position of each feature point, storing the coordinate positions in sequence to form a group of vector sets, and acquiring the shape information of the image;
(4) AAM modeling and normalization;
step two, MMDA decomposition and synthesis:
(1) whitening preprocessing is carried out respectively on the samples under each attribute feature;
(2) feature subspaces of the different attributes are established from the whitened data by the Fisher criterion, the subspaces being kept mutually orthogonal;
(3) after the independent feature subspace is obtained, decomposing the whitened data by using the feature space;
(4) and synthesizing the transformed parameter vectors to obtain the face vectors.
Further, the AAM modeling and normalization further comprises:
(1) after the shape vectors of all samples are obtained, the shape vectors are aligned; for the i-th sample, the recorded feature point coordinate vector is expressed as:
s_i = {x_i1, y_i1, x_i2, y_i2, …, x_i68, y_i68}^T;
for a pair of samples s_j and s_i, the two samples are rotated, scaled, and translated so that the sizes and angles of the faces in both samples are the same; to align the vectors s_1 and s_2:
s_1 = {x_11, y_11, x_12, y_12, …, x_168, y_168}^T;
s_2 = {x_21, y_21, x_22, y_22, …, x_268, y_268}^T;
s_2 is normalized to s_1 by the Procrustes transformation:
T(s_2) = s·R(θ)·s_2 + t;
where θ is the counterclockwise angle by which s_2 is rotated toward s_1, s is the scaling factor applied to s_2, R(θ) applies the 2×2 rotation matrix [cos θ, −sin θ; sin θ, cos θ] to each (x, y) coordinate pair, and t = {t_x, t_y, t_x, t_y, …, t_x, t_y}^T is the translation vector;
where n is the number of feature points;
s_2 is rotated so that the sum of squared distances between corresponding feature points is minimized, making the angles of s_1 and s_2 consistent;
(2) after the shape model is obtained, Delaunay triangulation is performed to obtain a subdivided shape region; each sample is triangulated in the same way, the point of each sample corresponding to each point of the shape model is located from the positions of the triangle vertices, and the pixel value at that point replaces the value of the corresponding point in the shape model; this yields n mapped models, and averaging the n models gives the mean texture model;
(3) the shape model and the texture model are combined by weighting to obtain a combined model.
Further, the whitening preprocessing performed respectively on the samples under each attribute feature specifically includes:
letting U and D be respectively the eigenvector matrix and the eigenvalue matrix of the covariance matrix of the training samples, and retaining only the non-zero eigenvalues in D and the corresponding eigenvectors in U;
obtaining a transformation matrix P:
P = U·D^(−1/2);
the matrix P is the whitening matrix; whitening the sample set X with it yields the whitened data X̃ = P^T·X.
Further, the between-class scatter matrix S_b^(i) and the within-class scatter matrix S_w^(i) of each attribute i are calculated respectively;
the feature subspace under each attribute is decomposed by the Fisher discriminant criterion, maximizing for each attribute i
J_F(V_i) = |V_i^T·S_b^(i)·V_i| / |V_i^T·S_w^(i)·V_i|
to obtain the most discriminative subspace V_i;
a feature subspace is thus obtained under each attribute;
for each feature vector, its projections onto the bases of all attribute subspaces are subtracted and the result is orthonormalized, giving the residual space V_0; the feature space V is represented as:
V = [V_1, V_2, … V_k, V_0].
further, after the independent feature subspaces are obtained, the whitened data are decomposed with the feature space as c = V^T·X̃;
the face is synthesized as x′ = (P^T)^(−1)·V·c, where (P^T)^(−1) is the whitening matrix P transposed and then inverted.
Another object of the present invention is to provide a camera using the MMDA-based privacy protection method with unchangeable expression.
Another object of the present invention is to provide a video surveillance system using the MMDA-based privacy protection method with unchangeable expressions.
The method decomposes the face by MMDA multi-modal discriminant analysis into independent subspaces of different attributes, changes the features of the selected subspaces, and keeps the remaining subspaces unchanged, thereby protecting the privacy information of the face while its expression stays the same. In live video, expression information is retained while face privacy information is protected: a multi-modal feature space is established, the attribute features other than expression are changed, and the expression features are kept unchanged, so that the original expression is preserved while face privacy protection is achieved.
The MMDA multi-modal discriminant analysis establishes mutually independent attribute subspaces with the Fisher discriminant criterion and separates the attribute features completely, so that changing one attribute feature does not affect the others. With the scheme of the invention, when the features other than the expression attribute are selectively changed, the features do not influence one another and the expression features remain untouched. The privacy information of the face is thus protected even though several attribute features are changed, and since the expression features never change, face privacy protection with unchangeable expression is realized.
Existing visual privacy protection techniques such as K-Same, K-Same-M, and blurring protect the privacy information of the face well but do not maintain the facial expression well. The Face++ face recognizer and expression classifier (smile) were used to run experiments on 20 CK face sample pictures, comparing the scheme of the invention with the prior K-Same, K-Same-M, and blurring techniques. With change intensity σ = 4, K = 10, and blur level 5, the face recognition rate and the expression recognition rate were computed, giving the following data:
scheme of the invention | K-Same | K-Same-M | Blurring | |
Face recognition rate | 0.38 | 0.57 | 0.75 | 0.45 |
Expression recognition rate | 0.984 | 0.43 | 0.68 | 0.75 |
The table shows that the scheme of the invention keeps a high expression recognition rate even when the face recognition rate is low, whereas the other three schemes show a higher face recognition rate (weaker privacy protection) together with a lower expression recognition rate; the scheme of the invention therefore performs better at face privacy protection while keeping the expression well preserved.
Drawings
Fig. 1 is a flowchart of a privacy protection method based on MMDA with unchangeable expression according to an embodiment of the present invention.
Fig. 2 is a flowchart of an implementation of a privacy protection method based on MMDA with unchangeable expression according to an embodiment of the present invention.
Fig. 3 shows the face recognition rate, detected with Face++, of the CK sample set under different change intensities σ in the scheme of the invention.
Fig. 4 shows the expression recognition rate, detected with Face++, of the CK sample set under different change intensities σ in the scheme of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Unlike traditional methods that protect the whole face directly, the method decomposes the face through multi-modal discriminant analysis into independent subspaces of different attributes, changes the features of the selected subspaces, and keeps the remaining subspaces unchanged, thereby protecting the privacy information of the face while the facial expression stays the same.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the privacy protection method based on MMDA with unchangeable expression provided by the embodiment of the present invention includes the following steps:
s101: selecting 20 images with the size of 640 × 490 from a CK face library as a training sample set, wherein the training sample set comprises 20 face objects, and each image in the training sample set has three face attributes of gender, race and expression;
s102: acquiring a face image in live video through a capture device such as a camera; locating feature points on the face image, selecting 68 feature points covering key parts such as the face contour, mouth, and eyes;
s103: carrying out AAM shape modeling on the human face to obtain a shape model; carrying out AAM texture modeling on the face image by using the shape model to obtain a texture model; combining the shape model and the texture model to obtain a mixed model, and realizing the normalization of the face image through the mixed model;
s104: whitening preprocessing; establishing independent subspaces of the different attributes from the whitened features by the Fisher criterion; decomposing the whitened features with the obtained independent subspaces to obtain the parameters of the corresponding features; changing the parameters of all attributes except expression while keeping the expression parameters unchanged, and synthesizing the parameters with the feature space through the whitening transformation matrix to obtain the transformed face image.
The privacy protection method based on the MMDA with invariable expression provided by the embodiment of the invention specifically comprises the following steps:
1. preprocessing and normalization
(1) Establishing a training sample set:
and selecting 20 images with the size of 640 x 490 from a CK face library as a training sample set, wherein the training sample set comprises 20 face objects, and each image in the training sample set has three face attributes of gender, race and expression.
(2) Image acquisition:
a face image is acquired in live video through a capture device such as a camera.
(3) Positioning face feature points:
the feature points of the face image are positioned, and 68 feature points including key parts such as face contour, mouth and eyes are selected.
(4) AAM modeling and normalization:
(4a) shape modeling: and carrying out AAM shape modeling on the human face to obtain a shape model.
(4b) Texture modeling: and carrying out AAM texture modeling on the face image by using the shape model to obtain a texture model.
(4c) And (3) combining the models: and combining the shape model and the texture model to obtain a mixed model, and realizing the normalization of the face image through the mixed model.
2. MMDA decomposition and synthesis:
(1) whitening preprocessing: whitening reduces the correlation among the features and simplifies the subsequent extraction of independent vectors.
(2) Establishing independent subspaces: and respectively establishing independent subspaces with different attributes for the whitened features by utilizing a Fisher criterion.
(3) Multi-mode decomposition: and decomposing the whitening characteristics by using the obtained independent subspace to obtain parameters of the corresponding characteristics.
(4) Transformation and synthesis: and changing corresponding parameters except the expression characteristics, only keeping the parameters under the expression characteristics unchanged, and synthesizing the parameters with the feature space by using the whitening transformation matrix to obtain a transformed face image.
The application of the principles of the present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 2, the privacy protection method based on MMDA with unchangeable expression provided by the embodiment of the present invention includes the following steps:
1. preprocessing and normalization
20 images with the size of 640 x 490, containing 20 objects, are selected from a CK face library as a training sample set, and each image in the training sample set has three face attributes of gender, race and expression, wherein the gender attribute comprises male and female, the race attribute comprises Africa and Europe, and the expression attribute comprises five classes of smile, anger, sadness, fear and surprise.
in video live broadcast, a continuous sequence of still images is obtained by a camera or other device.
68 key feature points including the positions of the outline, the mouth, the nose, the eyes and the like are selected for the face image, the coordinate position of each feature point is recorded and stored in sequence to form a group of vector sets, and therefore the shape information of the image can be obtained.
first, shape modeling. After the shape vectors of all samples are obtained, the shape vectors are aligned. For the i-th sample, the recorded feature point coordinate vector is expressed as:
s_i = {x_i1, y_i1, x_i2, y_i2, …, x_i68, y_i68}^T;
for a pair of samples s_j and s_i, the two samples are rotated, scaled, and translated so that the sizes and angles of the faces in both samples are the same. Suppose the vectors s_1 and s_2 are to be aligned:
s_1 = {x_11, y_11, x_12, y_12, …, x_168, y_168}^T;
s_2 = {x_21, y_21, x_22, y_22, …, x_268, y_268}^T;
s_2 is normalized to s_1 by the Procrustes transformation:
T(s_2) = s·R(θ)·s_2 + t;
where θ is the counterclockwise angle by which s_2 is rotated toward s_1, s is the scaling factor applied to s_2, R(θ) applies the 2×2 rotation matrix [cos θ, −sin θ; sin θ, cos θ] to each (x, y) coordinate pair, and t = {t_x, t_y, t_x, t_y, …, t_x, t_y}^T is the translation vector;
where n is the number of feature points;
s_2 is rotated so that the sum of squared distances of the corresponding feature points is minimized, making the angles of s_1 and s_2 consistent.
And performing Procrustes transformation on every two training samples, and finally moving the feature points of all the samples to fixed initial positions with consistent sizes and angles. Then averaging all the converted samples to obtain an average shape model;
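The pairwise alignment described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: the least-squares similarity transform (scale, rotation, translation) is solved in closed form from the SVD of the cross-covariance of the centered landmark sets; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def procrustes_align(s1, s2):
    """Map the 2-D landmark vector s2 onto s1 with the similarity transform
    (scale, rotation, translation) minimizing the sum of squared distances.
    Both inputs are flat vectors {x1, y1, ..., xn, yn}; illustrative sketch."""
    p1 = np.asarray(s1, dtype=float).reshape(-1, 2)
    p2 = np.asarray(s2, dtype=float).reshape(-1, 2)
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    q1, q2 = p1 - c1, p2 - c2                          # remove translation
    A = q2.T @ q1                                      # 2x2 cross-covariance
    U, S, Vt = np.linalg.svd(A)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T                                 # optimal rotation
    scale = (S * np.diag(D)).sum() / (q2 ** 2).sum()   # optimal scale
    aligned = scale * q2 @ R.T + c1                    # rotate/scale, then translate
    return aligned.ravel()
```

Aligning every pair of training shapes this way and averaging the results gives the mean shape model described above.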
after the shape model is obtained, PCA dimension reduction is carried out on parameters of the control model, data redundancy is reduced, and the shape model can be changed by changing the parameters.
And step two, texture modeling: after the shape model is obtained, Delaunay triangulation is performed to obtain a subdivided shape region. Each sample is triangulated in the same way; from the positions of the triangle vertices, the point of each sample corresponding to each point of the shape model is located, and the pixel value at that point replaces the value of the corresponding point in the shape model, mapping the texture of the sample onto the shape model. Repeating this operation yields n mapped models, and averaging the n models gives the mean texture model.
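As an illustration of the triangle-to-triangle mapping this step relies on, the sketch below computes the affine transform determined by one source and one destination triangle; any pixel position inside the source triangle can then be carried onto the shape model with it. The function names are illustrative, not from the patent.

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Return (A, b) with dst = A @ src + b for the three vertex pairs.
    src_tri, dst_tri: 3x2 arrays of triangle vertices."""
    S = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3 system
    M = np.linalg.solve(S, np.asarray(dst_tri, float))            # 3x2 solution
    return M[:2].T, M[2]          # 2x2 linear part A, translation b

def warp_point(p, src_tri, dst_tri):
    """Carry one (x, y) position through the triangle's affine map."""
    A, b = affine_from_triangles(src_tri, dst_tri)
    return A @ np.asarray(p, float) + b
```

Sampling the source image at every pixel of each destination triangle with this map, triangle by triangle, produces the texture mapped onto the mean shape.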
and (3) carrying out dimensionality reduction on the texture model by utilizing PCA to obtain texture parameters after dimensionality reduction, and changing the texture model by changing the parameters.
And thirdly, combining the models to obtain a shape model and a texture model, and then performing weighted combination on the two models to obtain a combined model, namely an AAM model. At the same time, the parameters controlling the shape and texture are obtained, by means of which the AAM model can be changed. And (4) carrying out dimensionality reduction on the parameters by utilizing PCA to obtain final dimensionality-reduced parameters.
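The PCA reduction applied to the shape, texture, and combined parameters can be sketched as an SVD-based projection and its inverse. This is a generic illustration under the stated assumptions, not the patent's code.

```python
import numpy as np

def pca_reduce(X, k):
    """Fit PCA on the rows of X and return (mean, basis, parameters)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                  # d x k basis of the top-k components
    return mu, W, Xc @ W          # low-dimensional parameters, one row per sample

def pca_reconstruct(mu, W, params):
    """Map the reduced parameters back to the original space."""
    return mu + params @ W.T
```

Changing a row of `params` before reconstructing changes the corresponding model instance, which is how the shape and texture parameters steer the AAM; the weighted concatenation of shape and texture parameters would then be reduced once more in the same way.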
MMDA decomposition and Synthesis:
in order to reduce the correlation within the initialized training samples, whitening preprocessing is carried out respectively on the samples under each attribute feature.
Let U and D be respectively the eigenvector matrix and the eigenvalue matrix of the covariance matrix of the training samples, and retain only the non-zero eigenvalues in D and the corresponding eigenvectors in U.
The transformation matrix P can then be derived:
P = U·D^(−1/2);
the matrix P is the whitening matrix; whitening the sample set X with it yields the whitened data X̃ = P^T·X.
Since the sample data of each attribute is from the same sample training set, only one whitening operation needs to be completed.
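A minimal sketch of this whitening step (generic PCA whitening, with NumPy as an assumed tool): the transform P = U·D^(−1/2) is built from the eigendecomposition of the sample covariance, and the whitened samples P^T·x have identity covariance.

```python
import numpy as np

def whitening_matrix(X):
    """P = U D^{-1/2} for the sample covariance of X (rows = samples),
    keeping only non-zero eigenvalues, as in the text."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(X) - 1)          # sample covariance
    evals, U = np.linalg.eigh(C)
    keep = evals > 1e-10                  # drop numerically zero eigenvalues
    U, evals = U[:, keep], evals[keep]
    return U @ np.diag(evals ** -0.5)

def whiten(X, P):
    """Whitened samples: each row becomes P^T x."""
    return (X - X.mean(axis=0)) @ P
```

Because the sample data of every attribute come from the same training set, this matrix is computed once and reused.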
in order to separate the different attribute features as far as possible while keeping samples of the same attribute feature close together, feature subspaces of the different attributes are established from the whitened data by the Fisher criterion, the subspaces being kept mutually orthogonal.
The between-class scatter matrix S_b^(i) and the within-class scatter matrix S_w^(i) of each attribute i are calculated respectively.
The feature subspace under each attribute is decomposed by the Fisher discriminant criterion, maximizing for each attribute i
J_F(V_i) = |V_i^T·S_b^(i)·V_i| / |V_i^T·S_w^(i)·V_i|
to obtain the most discriminative subspace V_i.
Repeating this operation for each attribute gives a feature subspace under every attribute.
Besides the subspaces of the known attributes, there is a residual space V_0 containing the unknown features. V_0 is obtained by Gram-Schmidt orthogonalization: from each feature vector, its projections onto the bases of all attribute subspaces are subtracted, and the result is orthonormalized to give the residual space V_0. The final feature space V can then be expressed as:
V = [V_1, V_2, … V_k, V_0].
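A toy sketch of these two steps follows: the per-attribute Fisher direction, then the residual space as the orthonormal complement. For simplicity it extracts a single most-discriminative direction per attribute (a two-class attribute gives a rank-one between-class scatter); all names are illustrative, not from the patent.

```python
import numpy as np

def fisher_direction(X, labels):
    """Direction v maximizing the Fisher quotient (v' Sb v)/(v' Sw v)."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    # top eigenvector of Sw^{-1} Sb (small ridge keeps Sw invertible)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(d), Sb))
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    return v / np.linalg.norm(v)

def residual_space(V_list, d):
    """Orthonormal basis V0 of the complement of the attribute subspaces."""
    V = np.column_stack(V_list)
    Q, _ = np.linalg.qr(np.hstack([V, np.eye(d)]))
    return Q[:, V.shape[1]:]      # remaining columns span the residual space
```

Stacking the attribute bases together with V0 then gives the orthonormal feature space V used in the decomposition.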
and after the independent feature subspaces are obtained, the whitened data are decomposed with the feature space, giving a parameter vector c = V^T·X̃.
the parameter vector contains the parameters corresponding to each attribute, whose values are thereby determined. These parameters represent the average features of the different attributes, such as male, African, or smiling; by varying these parameters, the corresponding facial features can be varied, and the degree of variation can also be controlled.
And 4, transformation and synthesis:
the parameter vector obtained by decomposition contains the average feature corresponding to each attribute, i.e., the average face of each attribute. The parameters representing gender and race are located in the parameter vector and changed, which changes the corresponding attribute features, for example turning the gender parameter from male to female and the race parameter from African to European; at the same time the expression parameters are kept untouched, so the expression features do not change;
the modified parameter vector c′ is then synthesized back into a face vector as x′ = (P^T)^(−1)·V·c′, where (P^T)^(−1) is the whitening matrix P transposed and then inverted.
The above operations are repeated for each sample in the sample set, so that the faces in the entire sample set are transformed and synthesized. Finally, the gender and ethnicity characteristics of all face images are changed, so that the face cannot be correctly identified, and the privacy information is protected. Meanwhile, the expression of the face image is not changed, so that the invention protects the privacy information of the face under the condition of not changing the expression and ensures the integrity of the face.
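The whole decompose / modify / synthesize loop can be miniaturized as follows. P and V here are random stand-ins for the whitening matrix and the orthonormal multi-modal feature space, and the coordinate indices for gender, race, and expression are made up for the illustration; the point is only that c = V^T·P^T·x, that editing non-expression coordinates of c leaves the expression coordinates intact, and that x′ = (P^T)^(−1)·V·c inverts the decomposition.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
P = rng.standard_normal((d, d)) + d * np.eye(d)    # stand-in whitening matrix
V, _ = np.linalg.qr(rng.standard_normal((d, d)))   # stand-in orthonormal feature space
GENDER, RACE, EXPR = [0], [1], [2]                 # illustrative attribute coordinates

def decompose(x):
    """Parameter vector c = V^T (P^T x)."""
    return V.T @ (P.T @ x)

def synthesize(c):
    """Face vector x' = (P^T)^{-1} V c."""
    return np.linalg.solve(P.T, V @ c)

x = rng.standard_normal(d)        # a normalized "face vector"
c = decompose(x)
c[GENDER] = -c[GENDER]            # change the gender parameter
c[RACE] += 2.0                    # change the race parameter
x_new = synthesize(c)             # protected face: expression untouched
```

Applied to each AAM-normalized face vector of the live stream, this loop realizes the privacy-protecting transform described in step 4.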
Through experimental simulation, Face++ detection was used to measure the face recognition rate and the expression recognition rate of the CK sample set under different change intensities σ in the scheme of the invention; the experimental results are shown in Figs. 3 and 4.
The experimental results show that as the change intensity σ increases, the face recognition rate gradually decreases, so the privacy information of the face becomes progressively better protected; when the intensity exceeds 4.5, the face recognition rate is lowest and the privacy information is fully protected. As σ changes, the expression recognition rate stays around 0.985 with only gentle fluctuation, showing that the expression essentially does not change however the other attribute features are varied. The experimental results therefore fully demonstrate that the scheme of the invention protects face privacy information while keeping the expression information unchanged.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (8)
1. A privacy protection method with unchangeable expression based on MMDA (Multi-Modal Discriminant Analysis), characterized in that the MMDA-based privacy protection method retains expression information while protecting face privacy information in live video: using MMDA multi-modal discriminant analysis, a face is decomposed into independent subspaces of different attributes to establish a multi-modal feature space, the attribute features other than expression are changed while the expression features are kept unchanged, and the original expression is thereby preserved;
the privacy protection method based on the MMDA with unchangeable expression comprises the following steps:
step one, preprocessing and normalization:
(1) selecting 20 images with the size of 640 x 490 from a CK face library as a training sample set, wherein each image in the training sample set has three face attributes of gender, race and expression, the gender attribute comprises male and female, the race attribute comprises Africa and Europe, and the expression attribute comprises five classes of smile, anger, sadness, fear and surprise;
(2) acquiring a continuous static image sequence through a camera or other equipment in video live broadcast;
(3) selecting 68 key feature points from the face image, recording the coordinate position of each feature point, storing the coordinate positions in sequence to form a group of vector sets, and acquiring the shape information of the image;
(4) AAM modeling and normalization;
step two, MMDA decomposition and synthesis:
(1) whitening preprocessing is carried out respectively on the samples under each attribute feature;
(2) feature subspaces of the different attributes are established from the whitened data by the Fisher criterion, the subspaces being kept mutually orthogonal;
(3) after the independent feature subspace is obtained, decomposing the whitened data by using the feature space;
(4) and synthesizing the transformed parameter vectors to obtain the face vectors.
2. The MMDA-based expression invariant privacy preserving method of claim 1, wherein the AAM modeling and normalization further comprises:
(1) after the shape vectors of all samples are obtained, the shape vectors are aligned; for the i-th sample, the recorded feature point coordinate vector is expressed as:
s_i = {x_i1, y_i1, x_i2, y_i2, …, x_i68, y_i68}^T;
for a pair of samples s_j and s_i, the two samples are rotated, scaled, and translated so that the sizes and angles of the faces in both samples are the same; to align the vectors s_1 and s_2:
s_1 = {x_11, y_11, x_12, y_12, …, x_168, y_168}^T;
s_2 = {x_21, y_21, x_22, y_22, …, x_268, y_268}^T;
s_2 is normalized to s_1 by the Procrustes transformation:
T(s_2) = s·R(θ)·s_2 + t;
where θ is the counterclockwise angle by which s_2 is rotated toward s_1, s is the scaling factor applied to s_2, R(θ) applies the 2×2 rotation matrix [cos θ, −sin θ; sin θ, cos θ] to each (x, y) coordinate pair, and t = {t_x, t_y, t_x, t_y, …, t_x, t_y}^T is the translation vector;
where n is the number of feature points;
s_2 is rotated so that the sum of squared distances between corresponding feature points is minimized, making the angles of s_1 and s_2 consistent;
(2) after the shape model is obtained, triangulation is carried out by utilizing Delaunay, and a subdivided shape area is obtained; dividing each sample, calculating the position of each point in each sample, which corresponds to the corresponding point in the shape model, according to the position of each point of the triangle, replacing the value of the corresponding point in the shape model with the pixel value of the point to obtain n mapped models, and averaging the n models to obtain an average texture model;
(3) and carrying out weighted combination on the shape model and the texture model to obtain a combined model.
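The similarity alignment of step (1) above has a closed-form solution. The sketch below (a hypothetical helper, assuming 2D landmarks stored as the interleaved vectors of the claim) uses the standard SVD/Umeyama solution in place of the claim's narrative description:

```python
import numpy as np

def procrustes_align(s1, s2):
    """Align landmark vector s2 to s1 (interleaved [x1,y1,...,xn,yn]
    vectors, as in the claim) by the similarity transform - scale s,
    rotation theta, translation t - that minimizes the sum of squared
    distances between corresponding feature points."""
    p1, p2 = s1.reshape(-1, 2), s2.reshape(-1, 2)
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)      # centroids give translation t
    q1, q2 = p1 - c1, p2 - c2
    U, S, Vt = np.linalg.svd(q2.T @ q1)            # 2x2 cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                                 # optimal rotation
    s = (S * np.diag(D)).sum() / (q2 ** 2).sum()   # optimal scale
    return (s * q2 @ R + c1).reshape(-1)           # rotated, scaled, shifted
```

Applying the recovered transform to s_2 makes the face size and angle match s_1 exactly when s_2 is a similarity-transformed copy of s_1, and minimizes the squared landmark distance otherwise.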
3. The MMDA-based expression invariant privacy preserving method of claim 1, wherein the whitening preprocessing of the samples under each attribute feature comprises:
wherein U and D are respectively the eigenvector matrix and the eigenvalue matrix of the sample covariance matrix, and the non-zero eigenvalues in D together with the corresponding eigenvectors in U are retained;
the transformation matrix P is obtained:
P = U D^(-1/2);
the matrix P is the whitening matrix; the sample set X is whitened with this matrix to obtain the whitened data.
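The whitening step of this claim can be sketched as follows, assuming the covariance matrix is estimated from a column-per-sample matrix (the function name and the sample layout are illustrative, not from the patent):

```python
import numpy as np

def whiten(X):
    """Sketch of the claimed whitening: eigendecompose the sample
    covariance, keep the eigenpairs with non-zero eigenvalues, and form
    the whitening matrix P = U D^(-1/2). X holds one sample per column."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center the sample set
    C = Xc @ Xc.T / Xc.shape[1]              # sample covariance matrix
    evals, U = np.linalg.eigh(C)             # D (eigenvalues), U (eigenvectors)
    keep = evals > 1e-10                     # retain non-zero eigenvalues only
    P = U[:, keep] * evals[keep] ** -0.5     # P = U D^{-1/2}
    return P, P.T @ Xc                       # whitening matrix, whitened data
```

By construction the whitened data has identity covariance on the retained dimensions, which is what makes the later per-attribute scatter decomposition well conditioned.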
4. The MMDA-based expression-invariant privacy protection method of claim 1, wherein the between-class scatter matrix S_b^(i) and the within-class scatter matrix S_w^(i) of each attribute i are calculated respectively;
the feature subspace under each attribute is decomposed using the Fisher discriminant criterion; for each attribute i, J_F(V_i) is maximized to obtain the most discriminative subspace V_i:
the feature subspace under each attribute is thereby obtained;
for each feature vector, its projection onto the bases of each subspace is subtracted, and orthogonal normalization is performed to obtain the residual space V_0; the feature space V is then expressed as:
V = [V_1, V_2, …, V_k, V_0].
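A minimal sketch of the per-attribute Fisher step, under the standard interpretation that maximizing J_F(V) amounts to taking the leading eigenvectors of pinv(S_w) S_b (the function name and the `labels`/`dim` parameters are illustrative assumptions, not from the claim):

```python
import numpy as np

def fisher_subspace(Xw, labels, dim):
    """Build the between-class scatter Sb and within-class scatter Sw of
    the data (one sample per column), then return the `dim` leading
    eigenvectors of pinv(Sw) @ Sb, which maximize the Fisher criterion."""
    mu = Xw.mean(axis=1, keepdims=True)
    d = Xw.shape[0]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Xc = Xw[:, labels == c]
        mc = Xc.mean(axis=1, keepdims=True)
        Sb += Xc.shape[1] * (mc - mu) @ (mc - mu).T   # between-class scatter
        Sw += (Xc - mc) @ (Xc - mc).T                 # within-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs[:, order[:dim]].real                 # most discriminative basis
```

Running this once per attribute (identity, expression, ...) and orthonormalizing the leftover directions into V_0 yields the stacked feature space V = [V_1, ..., V_k, V_0] of the claim.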
5. The MMDA-based expression-invariant privacy protection method of claim 1, wherein after the independent feature subspaces are obtained, the whitened data is decomposed using the feature space:
7. A camera using the MMDA-based expression-invariant privacy protection method of any one of claims 1-6.
8. A video monitoring system using the MMDA-based expression-invariant privacy protection method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711148720.2A CN108111868B (en) | 2017-11-17 | 2017-11-17 | MMDA-based privacy protection method with unchangeable expression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108111868A CN108111868A (en) | 2018-06-01 |
CN108111868B true CN108111868B (en) | 2020-06-09 |
Family
ID=62206556
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711148720.2A Active CN108111868B (en) | 2017-11-17 | 2017-11-17 | MMDA-based privacy protection method with unchangeable expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108111868B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344643B (en) * | 2018-09-03 | 2022-03-29 | 华中科技大学 | Privacy protection method and system for triangle data release in facing graph |
CN110932946A (en) * | 2019-11-25 | 2020-03-27 | 广州富港万嘉智能科技有限公司 | User meaning expression real-time judgment system with privacy protection and intelligent living room system |
CN113160348A (en) * | 2021-05-20 | 2021-07-23 | 深圳文达智通技术有限公司 | Recoverable face image privacy protection method, device, equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514442A (en) * | 2013-09-26 | 2014-01-15 | 华南理工大学 | Video sequence face identification method based on AAM model |
CN104573714A (en) * | 2014-12-31 | 2015-04-29 | 南京理工大学 | Self-adaptation parameter-free feature extraction method |
CN106303233A (en) * | 2016-08-08 | 2017-01-04 | 西安电子科技大学 | A kind of video method for secret protection merged based on expression |
US9547763B1 (en) * | 2015-03-31 | 2017-01-17 | EMC IP Holding Company LLC | Authentication using facial recognition |
CN106453385A (en) * | 2016-11-01 | 2017-02-22 | 西安电子科技大学 | Fine-granularity face privacy protection method in social network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5880182B2 (en) * | 2012-03-19 | 2016-03-08 | カシオ計算機株式会社 | Image generating apparatus, image generating method, and program |
2017-11-17: CN application CN201711148720.2A, patent CN108111868B/en, status Active
Non-Patent Citations (2)
Title |
---|
Facial expression preserving privacy protection using image melding; Yuta Nakashima et al.; 2015 IEEE International Conference on Multimedia and Expo (ICME); 2015-08-06; full text * |
Simultaneous and orthogonal decomposition of data using Multimodal Discriminant Analysis; Terence Sim et al.; 2009 IEEE 12th International Conference on Computer Vision; 2010-05-06; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN108111868A (en) | 2018-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107506717B (en) | Face recognition method based on depth transformation learning in unconstrained scene | |
Krull et al. | Learning analysis-by-synthesis for 6D pose estimation in RGB-D images | |
Jeni et al. | Dense 3D face alignment from 2D videos in real-time | |
CN108629336B (en) | Face characteristic point identification-based color value calculation method | |
Min et al. | Efficient detection of occlusion prior to robust face recognition | |
US9152847B2 (en) | Facial landmark localization by exemplar-based graph matching | |
CN107871098B (en) | Method and device for acquiring human face characteristic points | |
CN108111868B (en) | MMDA-based privacy protection method with unchangeable expression | |
EP1496466B1 (en) | Face shape recognition from stereo images | |
Shamai et al. | Synthesizing facial photometries and corresponding geometries using generative adversarial networks | |
Kiani Galoogahi et al. | Face photo retrieval by sketch example | |
Kittler et al. | Conformal mapping of a 3D face representation onto a 2D image for CNN based face recognition | |
Chen et al. | Unconstrained face verification using fisher vectors computed from frontalized faces | |
Song et al. | Robust 3D face landmark localization based on local coordinate coding | |
Kaur et al. | Photo-realistic facial texture transfer | |
Mao et al. | Classroom micro-expression recognition algorithms based on multi-feature fusion | |
Kakumanu et al. | A local-global graph approach for facial expression recognition | |
JP3729581B2 (en) | Pattern recognition / collation device | |
Bourbakis et al. | Skin-based face detection-extraction and recognition of facial expressions | |
CN108288034B (en) | A kind of method for evaluating quality and system of game design | |
Gottumukkal et al. | Real time face detection from color video stream based on PCA method | |
Jida et al. | Face segmentation and detection using Voronoi diagram and 2D histogram | |
Behera et al. | Rotation axis focused attention network (rafa-net) for estimating head pose | |
CN115131853A (en) | Face key point positioning method and device, electronic equipment and storage medium | |
Hamsici et al. | Active appearance models with rotation invariant kernels |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||