CN109035171A - A kind of reticulate pattern facial image restorative procedure - Google Patents
Reticulate pattern face image restoration method
- Publication number
- CN109035171A (application CN201810867459.XA)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/77
- G06T7/194 — Image analysis; segmentation; edge detection involving foreground–background segmentation
- G06T7/68 — Analysis of geometric attributes of symmetry
- G06V40/171 — Human faces: local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30201 — Subject of image: face
Abstract
The invention discloses a reticulate pattern (mesh-overlaid) face image restoration method whose object is to repair meshed face images and thereby improve face recognition performance. Image processing techniques are used to recover a clear face image from a meshed one. The key technical points are: (1) the 68 facial key points are extracted with the Dlib library; (2) the facial symmetry axis is fitted by least squares; (3) the image is segmented according to its pixel information; (4) from the distribution of the background reticulate pattern, the grey-level statistics of the mesh and its stripe width are estimated and used as prior knowledge to extract the reticulate pattern over the whole image; (5) the face is repaired using the fitted symmetry axis and the extracted reticulate pattern. The method takes a single meshed face image as input and outputs a clear, descreened face image. The invention improves both the visual quality and the recognition accuracy of meshed face images.
Description
Technical field
The invention belongs to the field of image processing, and in particular relates to a reticulate pattern face image restoration method.
Background art
With the arrival of the big-data era and the development of deep learning, face recognition has made important breakthroughs and is now widely used in public security, banking and finance, criminal investigation, social media and other fields. Because a face image carries personal identity information and is easy to obtain, criminals can use it to impersonate a user or steal identity information. Protecting face images from abuse has therefore become an increasingly important social privacy concern and research problem.
Reticulate pattern faces (MeshFace) offer a simple and cheap way to protect face information: random mesh stripes are superimposed on a clear face image, the position, waveform and darkness of the stripes all being random. The random mesh destroys part of the facial information and thus protects the face identity, and meshed face images are widely used in the financial field to protect user identity information. However, while the mesh plays an important role in identity protection, it also greatly degrades face recognition performance. Repairing meshed face images therefore has great practical value for face recognition.
A meshed face image can be regarded as an occluded face image, but unlike the general occlusion-inpainting problem, the occluded positions are unknown and completely random, so the algorithm must both detect and repair the occluded regions. Research on meshed face images has a short history. To address their recognition, researchers have proposed a series of methods. Zhang Shu, Tan Tieniu et al. proposed a meshed face recognition method based on a fully convolutional neural network, which combines a pixel-level reconstruction loss with a face-feature-level reconstruction loss and uses a spatial transformer module inside the network to align the face region precisely and extract its features accurately. Zhang Ning et al. proposed a restoration method for mesh-covered face pictures that first extracts the mesh edges, removes the mesh, then fills in the mesh region and smooths the whole image to restore the face. Li Pingli, Geng Yiting et al. proposed an image mesh-removal method and system that adapts the relative weights using a reliability measurement of the meshed image, achieving better removal. Yuan Meng et al. proposed a method and system for removing the mesh in an image that exploits the periodic structure of the mesh, so that the mesh is removed well while the details of the original image are preserved as far as possible.
Although many meshed-face restoration methods exist, two problems remain in practice:
(1) The position, waveform and darkness of the mesh are random, and the mesh differs from image to image.
(2) The pixel values of the mesh differ greatly over different face regions (e.g. hair versus skin), so segmenting with a single threshold performs poorly.
Summary of the invention
The object of the invention is to improve the face recognition accuracy of meshed face images; to this end a reticulate pattern face image restoration method is proposed. Image processing techniques are used to estimate the darkness and width of the mesh stripes, and this prior knowledge is used to repair the meshed face image, thereby improving face recognition accuracy.
The technical solution adopted by the invention is as follows.
A reticulate pattern face image restoration method comprises the following steps:
Step 1: face key point extraction. The 68 facial key points are extracted with the Dlib library, a free, open-source C++ toolkit of machine-learning algorithms. The Dlib library provides a face key point model trained with a regression tree algorithm; its input is a face image and its output is the positions of the 68 key points ((x1, y1), (x2, y2), …, (x68, y68)).
Step 2: fitting of the facial symmetry axis. Six of the 68 key points obtained in Step 1 (points 37, 46, 32, 36, 49 and 55) are used to fit the facial symmetry axis, which is represented by the straight line L: y = Ax + B.
The two parameters A and B of y = Ax + B are found by the least squares method:
A = (n·Σxkyk − Σxk·Σyk) / (n·Σxk² − (Σxk)²),  B = (Σyk − A·Σxk) / n,
where (xk, yk), k = 1, …, n (here n = 6), are the coordinates of the selected key points and the sums run over k.
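The least-squares fit of Step 2 can be sketched as follows; the landmark coordinates are made up for illustration, and only the closed-form least-squares formulas above are assumed:

```python
import numpy as np

def fit_symmetry_axis(points):
    """Least-squares fit of y = A*x + B through the given (x, y) points."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    n = len(pts)
    A = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
    B = (np.sum(y) - A * np.sum(x)) / n
    return A, B

# Hypothetical coordinates of Dlib key points 37, 46, 32, 36, 49, 55
landmarks = [(60, 80), (61, 120), (75, 90), (76, 110), (90, 85), (91, 115)]
A, B = fit_symmetry_axis(landmarks)
```

The result agrees with a degree-1 `np.polyfit`, which solves the same normal equations.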
Step 3: segmentation of the meshed face image. A deep neural network divides the meshed face image I into background I1, hair I2, skin I3 and clothes I4. The deep neural network is a fully convolutional network with 7 convolutional layers; the output of the 7th convolutional layer is up-sampled back to the size of the input image, so that a prediction is produced for every pixel while the spatial information of the original input is preserved, and each pixel of the original meshed face image is finally classified from the up-sampled feature map.
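Step 3 pins down only the network's shape: 7 convolutional layers whose final output is up-sampled back to the input resolution, giving one prediction per pixel over the 4 classes (background, hair, skin, clothes). A minimal sketch of such a fully convolutional segmenter follows; the channel widths, strides, kernel sizes and the use of PyTorch are assumptions, not specified by the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """7 conv layers total: 6 stride-2 feature convs + a 1x1 scoring conv,
    whose output is up-sampled back to the input size (one score per pixel)."""
    def __init__(self, num_classes=4):
        super().__init__()
        chans = [3, 16, 16, 32, 32, 64, 64]
        self.convs = nn.ModuleList(
            nn.Conv2d(cin, cout, 3, stride=2, padding=1)
            for cin, cout in zip(chans[:-1], chans[1:]))
        self.score = nn.Conv2d(64, num_classes, 1)  # 7th conv layer: class scores

    def forward(self, x):
        h, w = x.shape[-2:]
        for conv in self.convs:
            x = F.relu(conv(x))
        x = self.score(x)
        # up-sample back to the input resolution -> a prediction for every pixel
        return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

net = TinyFCN()
logits = net(torch.randn(1, 3, 128, 128))  # shape (1, 4, 128, 128)
labels = logits.argmax(dim=1)              # per-pixel class label in {0, 1, 2, 3}
```

The argmax over the class dimension yields the per-pixel label map used to split I into I1–I4.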
Step 4: extraction of the reticulate-pattern grey-level distribution. The quantities to extract are μ1, σ1 and W, where μ1 is the mean and σ1 the standard deviation of the reticulate-pattern pixel values in region I1′, and W is the width of the reticulate stripes in that region.
(4a) Computation of μ1 and σ1: convert I1 to the grey-scale image I1′ and select a threshold T with the maximum between-class distance method; μ1 and σ1 are the mean and standard deviation of the pixels whose grey level is below T.
The threshold T is selected as follows:
1. Given an initial threshold T = T0, divide I1′ into two classes C0 and C1;
2. Compute the grey-level mean of each class, μt = (1/Nt)·Σ(i,j)∈Ct I1′(i, j), t = 0, 1, where i and j are the abscissa and ordinate of a pixel and Nt is the number of pixels in class t;
3. Compute the between-class distance S = |μ0 − μ1|;
4. Select the optimal threshold T = T* such that, after the image is divided into C0 and C1 according to the threshold, S|T=T* = max{S}.
μ1 and σ1 are then computed over the N pixels whose grey level is below T*:
μ1 = (1/N)·Σ I1′(i, j),  σ1 = sqrt((1/N)·Σ (I1′(i, j) − μ1)²).
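The threshold selection of (4a) can be sketched as follows; the synthetic image, the exhaustive sweep over grey levels and the use of |μ0 − μ1| as the between-class distance are illustrative assumptions:

```python
import numpy as np

def between_class_threshold(gray):
    """Pick the threshold T maximising the between-class distance |mu0 - mu1|
    (a sketch of the maximum between-class distance selection in Step 4a)."""
    best_T, best_S = None, -1.0
    for T in range(int(gray.min()) + 1, int(gray.max()) + 1):
        c0, c1 = gray[gray < T], gray[gray >= T]
        if c0.size == 0 or c1.size == 0:
            continue
        S = abs(c0.mean() - c1.mean())
        if S > best_S:
            best_S, best_T = S, T
    return best_T

# Synthetic grey image: dark mesh stripes (~40) over a bright background (~200)
rng = np.random.default_rng(0)
gray = rng.normal(200, 10, (64, 64))
gray[::8, :] = rng.normal(40, 5, gray[::8, :].shape)  # dark horizontal stripes
T = between_class_threshold(gray.astype(int))
mesh = gray < T
mu1, sigma1 = gray[mesh].mean(), gray[mesh].std()     # mesh grey statistics
```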
(4b) Computation of W: binarise I1′ to obtain the mask Mesh:
Mesh(i, j) = 1 if I1′(i, j) ∈ [μ1 − σ1, μ1 + σ1], and 0 otherwise (7)
Read the pixel values of Mesh in turn. Whenever the value read is 1, count within the connected region containing that pixel the number of 1-valued pixels in the horizontal direction, denoted w1; in the vertical direction, denoted w2; along the 45° diagonal, denoted w3; and along the 135° diagonal, denoted w4. The minimum of the four is taken as the reticulate-pattern width at that pixel:
Wk = min(w1, w2, w3, w4) (9)
where k ∈ [1, m] and m is the number of 1-valued pixels in Mesh. The estimated reticulate-pattern width W is the mean of the Wk taken over all of Mesh.
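A sketch of the width estimate of (4b) on a synthetic mask; the directional run-length search implements the four counts w1–w4, while the stripe geometry is made up:

```python
import numpy as np

def stripe_width(mask):
    """Estimate the mesh stripe width W: for every 1-pixel, take the minimum
    run length of 1s through it in 4 directions, then average (Step 4b)."""
    H, Wc = mask.shape
    def run(i, j, di, dj):
        n = 1
        for s in (1, -1):                        # walk both ways along the direction
            y, x = i + s * di, j + s * dj
            while 0 <= y < H and 0 <= x < Wc and mask[y, x]:
                n += 1
                y, x = y + s * di, x + s * dj
        return n
    widths = [min(run(i, j, 0, 1), run(i, j, 1, 0), run(i, j, 1, 1), run(i, j, 1, -1))
              for i, j in zip(*np.nonzero(mask))]
    return float(np.mean(widths)) if widths else 0.0

mesh = np.zeros((20, 20), dtype=bool)
mesh[5:8, :] = True           # one horizontal stripe, 3 pixels thick
w = stripe_width(mesh)        # close to the true thickness 3
```

Pixels at the ends of the stripe see shortened diagonal runs, so the mean lands slightly below the true thickness.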
Step 5: reticulate pattern extraction. Using the threshold and mean computation described in Step 4(4a), the corresponding means μ2, μ3, μ4 of I2, I3, I4 are obtained, with σ4 = σ3 = σ2 = σ1. To improve face recognition accuracy, μ2 is optimised:
μ2′ = λμ1 + (1 − λ)μ2, where λ is a weight factor (11)
The reticulate pattern of the I2, I3 and I4 regions is finally extracted by:
Mesh2(i, j) = 1 if I2(i, j) ∈ [μ2′ − σ2, μ2′ + σ2] (12)
Mesh3(i, j) = 1 if I3(i, j) ∈ [μ3 − σ3, μ3 + σ3] (14)
Mesh4(i, j) = 1 if I4(i, j) ∈ [μ4 − σ4, μ4 + σ4] (16)
The pixel values of Mesh1, Mesh2, Mesh3 and Mesh4 are read in turn; 1-valued regions whose width is consistent with the estimated reticulate-pattern width W are left unchanged, while 1-valued regions whose width deviates from W have their pixel values set to 0.
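The per-region band test of Step 5 can be sketched as follows; the grey levels, λ and σ values are hypothetical, and only the rule "mark pixels whose value lies in [μ − σ, μ + σ]" is taken from the text:

```python
import numpy as np

def extract_mesh(region, mu, sigma):
    """Mark as mesh the pixels whose grey value lies in [mu - sigma, mu + sigma]
    (the per-region extraction rule of Step 5)."""
    return (region >= mu - sigma) & (region <= mu + sigma)

# Synthetic skin region (~180) crossed by darker mesh stripes (~60)
rng = np.random.default_rng(1)
skin = rng.normal(180, 5, (32, 32))
skin[:, ::6] = rng.normal(60, 3, skin[:, ::6].shape)  # mesh on every 6th column

lam, mu1, mu_t = 0.5, 55.0, 65.0           # hypothetical statistics
mu_opt = lam * mu1 + (1 - lam) * mu_t      # the mu' = lambda*mu1 + (1-lambda)*mu_t blend
mesh_t = extract_mesh(skin, mu_opt, sigma=10.0)
```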
Step 6: face image repair. The face image is repaired using the facial symmetry axis obtained in Step 2 and the reticulate pattern extracted in Step 5, by judging whether the two points at the same distance l on either side of the symmetry axis are covered by reticulate pattern.
(6a): When, at the same distance on either side of the axis, the left point is covered by reticulate pattern and the right point is not, the left pixel value is set equal to the right one:
I(i, j−l) = I(i, j+l) (18)
where I(i, j−l) is the pixel value of the covered left point and I(i, j+l) that of the uncovered right point.
(6b): When the right point is covered and the left point is not, the right pixel value is set equal to the left one:
I(i, j+l) = I(i, j−l) (19)
where I(i, j−l) is the pixel value of the uncovered left point and I(i, j+l) that of the covered right point.
(6c): When neither point is covered, no change is made.
(6d): When both points are covered, the 8 nearest non-reticulate pixel values around I(i, j−l) are found and denoted p1, p2, …, p8, their distances from I(i, j−l) being d1, d2, …, d8; likewise the 8 nearest non-reticulate pixel values around I(i, j+l) are denoted p9, p10, …, p16, at distances d9, d10, …, d16.
When min(d1, …, d8) ≤ min(d9, …, d16), both pixels are filled with the left-hand neighbour of smallest distance, I(i, j−l) = I(i, j+l) = pa with da = min(d1, …, d8); otherwise both are filled with the right-hand neighbour of smallest distance, I(i, j−l) = I(i, j+l) = pb with db = min(d9, …, d16).
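Cases (6a)–(6c) of Step 6 can be sketched for the simplified case of a perfectly vertical symmetry axis (the patent fits a general line y = Ax + B; the toy image and mask here are made up, and case (6d) is left out):

```python
import numpy as np

def symmetric_repair(image, mesh, axis_col):
    """Mirror-fill mesh pixels across a vertical symmetry axis at column axis_col."""
    out = image.copy()
    H, W = image.shape
    for i in range(H):
        for l in range(1, min(axis_col, W - axis_col - 1) + 1):
            jl, jr = axis_col - l, axis_col + l
            if mesh[i, jl] and not mesh[i, jr]:
                out[i, jl] = image[i, jr]   # (6a): copy the clean right pixel left
            elif mesh[i, jr] and not mesh[i, jl]:
                out[i, jr] = image[i, jl]   # (6b): copy the clean left pixel right
            # (6c): neither covered -> unchanged; both covered -> (6d), not sketched
    return out

# Toy image symmetric about column 5: value = |j - 5|
img = np.tile(np.abs(np.arange(11) - 5).astype(float), (4, 1))
mesh = np.zeros_like(img, dtype=bool)
mesh[:, 2] = True                 # mesh destroys column 2
corrupted = img.copy()
corrupted[mesh] = 0.0
repaired = symmetric_repair(corrupted, mesh, axis_col=5)
```

Because the toy image is exactly symmetric, the repair restores it perfectly.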
Brief description of the drawings
A specific embodiment of the invention is described in further detail below with reference to the drawings.
Fig. 1 is the fully convolutional network used for image segmentation in the reticulate pattern face image restoration method of the invention;
Fig. 2 is a schematic diagram of the facial symmetry axis fitting method;
Fig. 3 is a schematic diagram of the estimation method for the reticulate-pattern stripe width W;
Fig. 4 is a schematic diagram of the face repair.
Specific embodiment
The invention discloses a reticulate pattern face image restoration method; a specific embodiment is described below with reference to the drawings.
The embodiment carries out Steps 1 to 6 exactly as described above: the 68 face key points are extracted with the Dlib library; the facial symmetry axis is fitted by least squares (Fig. 2); the meshed face image is segmented into background, hair, skin and clothes by the fully convolutional network of Fig. 1; the grey-level statistics μ1, σ1 and the stripe width W of the reticulate pattern are estimated (Fig. 3); the reticulate pattern of each region is extracted; and the face is finally repaired by symmetry (Fig. 4).
Claims (2)
1. A reticulate pattern face image restoration method, characterised by comprising the following steps:
Step 1: face key point extraction: the 68 facial key points are extracted with the Dlib library, a free, open-source C++ toolkit of machine-learning algorithms; the Dlib library provides a face key point model trained with a regression tree algorithm, whose input is a face image and whose output is the positions of the 68 key points ((x1, y1), (x2, y2), …, (x68, y68));
Step 2: fitting of the facial symmetry axis: six of the 68 key points obtained in Step 1 (points 37, 46, 32, 36, 49 and 55) are used to fit the facial symmetry axis, represented by the straight line L: y = Ax + B;
the two parameters A and B of y = Ax + B are found by the least squares method:
A = (n·Σxkyk − Σxk·Σyk) / (n·Σxk² − (Σxk)²),  B = (Σyk − A·Σxk) / n,
where (xk, yk), k = 1, …, n (here n = 6), are the coordinates of the selected key points;
Step 3: segmentation of the meshed face image: a deep neural network divides the meshed face image I into background I1, hair I2, skin I3 and clothes I4; the deep neural network is a fully convolutional network with 7 convolutional layers, the output of the 7th convolutional layer being up-sampled back to the size of the input image so that a prediction is produced for every pixel while the spatial information of the original input is preserved, each pixel of the original meshed face image finally being classified from the up-sampled feature map;
Step 4: extraction of the reticulate-pattern grey-level distribution: the quantities to extract are μ1, σ1 and W, where μ1 is the mean and σ1 the standard deviation of the reticulate-pattern pixel values in region I1′, and W is the width of the reticulate stripes;
(4a) computation of μ1 and σ1: convert I1 to the grey-scale image I1′ and select a threshold T with the maximum between-class distance method; μ1 and σ1 are the mean and standard deviation of the pixels whose grey level is below T;
the threshold T is selected as follows:
1. given an initial threshold T = T0, divide I1′ into two classes C0 and C1;
2. compute the grey-level mean of each class, μt = (1/Nt)·Σ(i,j)∈Ct I1′(i, j), t = 0, 1, where i and j are the abscissa and ordinate of a pixel and Nt is the number of pixels in class t;
3. compute the between-class distance S = |μ0 − μ1|;
4. select the optimal threshold T = T* such that, after the image is divided into C0 and C1 according to the threshold, S|T=T* = max{S};
μ1 and σ1 are then computed over the N pixels whose grey level is below T*:
μ1 = (1/N)·Σ I1′(i, j),  σ1 = sqrt((1/N)·Σ (I1′(i, j) − μ1)²);
(4b) computation of W: binarise I1′ to obtain the mask Mesh:
Mesh(i, j) = 1 if I1′(i, j) ∈ [μ1 − σ1, μ1 + σ1], and 0 otherwise (7)
read the pixel values of Mesh in turn; whenever the value read is 1, count within the connected region containing that pixel the number of 1-valued pixels in the horizontal direction, denoted w1, in the vertical direction, denoted w2, along the 45° diagonal, denoted w3, and along the 135° diagonal, denoted w4, the minimum being the reticulate-pattern width at that pixel:
Wk = min(w1, w2, w3, w4) (9)
where k ∈ [1, m] and m is the number of 1-valued pixels in Mesh; the estimated reticulate-pattern width W is the mean of the Wk taken over all of Mesh;
Step 5: reticulate pattern extraction: using the threshold and mean computation of Step 4(4a), the corresponding means μ2, μ3, μ4 of I2, I3, I4 are obtained, with σ4 = σ3 = σ2 = σ1; to improve face recognition accuracy, μ2 is optimised:
μ2′ = λμ1 + (1 − λ)μ2, where λ is a weight factor (11)
the reticulate pattern of the I2, I3 and I4 regions is finally extracted by:
Mesh2(i, j) = 1 if I2(i, j) ∈ [μ2′ − σ2, μ2′ + σ2] (12)
Mesh3(i, j) = 1 if I3(i, j) ∈ [μ3 − σ3, μ3 + σ3] (14)
Mesh4(i, j) = 1 if I4(i, j) ∈ [μ4 − σ4, μ4 + σ4] (16)
the pixel values of Mesh1, Mesh2, Mesh3 and Mesh4 are read in turn; 1-valued regions whose width is consistent with the estimated reticulate-pattern width W are left unchanged, while 1-valued regions whose width deviates from W have their pixel values set to 0;
Step 6: face image repair: the face image is repaired using the facial symmetry axis obtained in Step 2 and the reticulate pattern extracted in Step 5, by judging whether the points at the same distance l on either side of the symmetry axis are covered by reticulate pattern.
2. The reticulate pattern face image restoration method of claim 1, characterised in that Step 6 comprises the following steps:
(6a): when, at the same distance on either side of the symmetry axis, the left point is covered by reticulate pattern and the right point is not, the left pixel value is set equal to the right one:
I(i, j−l) = I(i, j+l) (18)
where I(i, j−l) is the pixel value of the covered left point and I(i, j+l) that of the uncovered right point;
(6b): when the right point is covered and the left point is not, the right pixel value is set equal to the left one:
I(i, j+l) = I(i, j−l) (19)
where I(i, j−l) is the pixel value of the uncovered left point and I(i, j+l) that of the covered right point;
(6c): when neither point is covered, no change is made;
(6d): when both points are covered, the 8 nearest non-reticulate pixel values around I(i, j−l) are found, denoted p1, p2, …, p8, their distances from I(i, j−l) being d1, d2, …, d8, and the 8 nearest non-reticulate pixel values around I(i, j+l) are found, denoted p9, p10, …, p16, their distances being d9, d10, …, d16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810867459.XA CN109035171B (en) | 2018-08-01 | 2018-08-01 | Reticulate pattern face image restoration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109035171A true CN109035171A (en) | 2018-12-18 |
CN109035171B CN109035171B (en) | 2021-06-15 |
Family
ID=64648782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810867459.XA Active CN109035171B (en) | 2018-08-01 | 2018-08-01 | Reticulate pattern face image restoration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109035171B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111612798A (en) * | 2020-05-15 | 2020-09-01 | 中南大学 | Method, system and medium for repairing complete human face reticulate pattern facing human face data |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102572201A (en) * | 2010-12-31 | 2012-07-11 | 北京大学 | Method and system for removing overlapped curves from image |
CN102567957A (en) * | 2010-12-30 | 2012-07-11 | 北京大学 | Method and system for removing reticulate pattern from image |
CN104952111A (en) * | 2014-03-31 | 2015-09-30 | 特里库比奇有限公司 | Method and apparatus for obtaining 3D face model using portable camera |
CN107016657A (en) * | 2017-04-07 | 2017-08-04 | 河北工业大学 | The restorative procedure of the face picture covered by reticulate pattern |
CN107958444A (en) * | 2017-12-28 | 2018-04-24 | 江西高创保安服务技术有限公司 | A kind of face super-resolution reconstruction method based on deep learning |
CN108121978A (en) * | 2018-01-10 | 2018-06-05 | 马上消费金融股份有限公司 | A kind of face image processing process, system and equipment and storage medium |
CN108122197A (en) * | 2017-10-27 | 2018-06-05 | 江西高创保安服务技术有限公司 | A kind of image super-resolution rebuilding method based on deep learning |
US10002286B1 (en) * | 2015-04-28 | 2018-06-19 | Carnegie Mellon University | System and method for face recognition robust to multiple degradations |
2018-08-01: application CN201810867459.XA filed in China; granted as patent CN109035171B (status: Active)
Non-Patent Citations (2)
Title |
---|
罗彬 (Luo Bin), "Research on descreening methods for halftone images", China Master's Theses Full-Text Database (Electronic Journal) |
陈飞 (Chen Fei), "Research on descreening algorithms for scanned halftone images", China Master's Theses Full-Text Database (Electronic Journal) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||