CN114359113A - Detection method and application system of face image reconstruction and restoration method - Google Patents


Info

Publication number
CN114359113A
Authority
CN
China
Prior art keywords
face image
sample
definition
face
training
Prior art date
Legal status
Pending
Application number
CN202210250302.9A
Other languages
Chinese (zh)
Inventor
李刚
潘宁
张楠楠
付太
Current Assignee
Tianjin Electronic Computer Research Institute Co ltd
Original Assignee
Tianjin Electronic Computer Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Electronic Computer Research Institute Co ltd filed Critical Tianjin Electronic Computer Research Institute Co ltd
Priority to CN202210250302.9A
Publication of CN114359113A
Legal status: Pending

Classifications

    • G - PHYSICS; G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T 5/00 - Image enhancement or restoration
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06T 3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/30201 - Face (subject of image: human being; person)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a detection method for a face image reconstruction and restoration method, which comprises: preparing high-definition face images as a training library and taking the training library as a training model; carrying out blurring and reconstruction-restoration training on the training model; and calculating the reconstructed-portrait similarity capability or the reconstructed-portrait hit-rate capability of the trained model. The invention also provides an application system for this detection method. The invention breaks through the forward rule of the traditional face recognition principle: a face database with relatively high accuracy can be obtained through the method, so the accuracy of face recognition is improved. The method can meet practical requirements in different fields, such as public-security criminal investigation and intelligent security, and has the advantages of low cost, simple operation, high reliability and strong reusability.

Description

Detection method and application system of face image reconstruction and restoration method
Technical Field
The application relates to the technical field of face image restoration, and in particular to a detection method and an application system for a face image reconstruction and restoration method.
Background
At the present stage, when analyzing videos, people often repeatedly check information in surveillance footage and re-observe key regions, and the face image is often one of the key pieces of information in a video. Because faces in surveillance video are usually distant and occupy a small proportion of the frame, the resolution of the face image is often low when the camera is far away. In many images and videos, high-definition faces carry important information and value. With the large-scale deployment of road monitoring, dashboard cameras, security monitoring and the like, clear faces receive more and more attention in surveillance video and images, and face images have important applications in identity authentication, crowd analysis, human tracking and other fields, so the definition of the face image is particularly important. In the prior art, faces of insufficient video definition are generally analyzed by direct interpolation amplification. Interpolation is fast and widely applied, but because of its poor amplification quality it damages the high-frequency information of the image and causes blurring, which brings many difficulties to the identification and restoration of faces in video. The applicant searched the prior art as comprehensively as possible before filing this application, but found no method or system that preprocesses the face database to improve the accuracy of face detection.
Therefore, a new technical solution is needed to solve the above technical problems.
Disclosure of Invention
The application provides a detection method for a face image reconstruction and restoration method, which can restore one or more low-resolution face images into clear, high-resolution face images through an algorithm.
A detection method of a face image reconstruction restoration method comprises the following steps:
s1: preparing a high-definition face image as a training library, and taking the training library as a training model;
s2: randomly extracting N high-definition face images from a training library to serve as a sample library;
s3: randomly extracting a high-definition face image A from a sample library for fuzzification processing to obtain a blurred sample A1;
s4: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2;
s5: carrying out face image similarity calculation between the high-resolution face image sample A2 and the high-definition face image A, and obtaining the similarity percentage between sample A2 and image A;
s6: repeating steps S3-S5, calculating the similarity of the N high-definition face images in the sample library one by one, and accumulating and averaging to obtain the similarity capability of the reconstructed and restored face images of the training model.
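The loop above can be sketched in a few lines; this is a minimal illustration, assuming the blur, restore and similarity operations are supplied externally (the patent leaves them to prior-art methods such as interpolation, PULSE and a face-comparison service):

```python
def similarity_capability(sample_library, blur, restore, similarity):
    """S3-S6: blur each sample A, reconstruct it into A2, compare A2
    with the original A, and average the similarity over the N samples."""
    total = 0.0
    for image_a in sample_library:               # one high-definition sample A
        blurred_a1 = blur(image_a)               # S3: blurred sample A1
        restored_a2 = restore(blurred_a1)        # S4: reconstructed sample A2
        total += similarity(restored_a2, image_a)  # S5: percent similarity
    return total / len(sample_library)           # S6: accumulate and average
```

With a sample library of N = 300 images, this returns the averaged percentage of the kind reported in embodiment three.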
Preferably, the following steps are further included between S1 and S2:
s11: purifying high-definition face images in a training library;
s12: and randomly extracting M high-definition face images from the purified training library to serve as a high-quality face database.
As a preferable scheme, in S2 the N high-definition face images are randomly extracted from the high-quality face database, rather than directly from the training library, to serve as the sample library.
As a preferred solution, the similarity capability of the reconstructed and restored face images of the training model in S6 is calculated as:

P = (S1 + S2 + ... + SN) / N

wherein P represents the similarity capability of the reconstructed and restored face images, N represents the total number of high-definition face images, and Si represents the similarity of the i-th high-definition face image, so the numerator is the sum of the similarities of the N high-definition face images.
The application provides another detection method of a face image reconstruction and restoration method, which comprises the following steps:
d1: preparing a high-definition face image as a training library, and taking the training library as a training model;
d2: randomly extracting M high-definition face images from a training library to serve as a high-quality face database;
d3: randomly extracting N high-definition face images from a high-quality face database to serve as a sample library;
d4: randomly extracting a high-definition face image A from a sample library for fuzzification processing to obtain a blurred sample A1;
d5: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2;
d6: retrieving the reconstructed high-resolution face image sample A2 by similarity against the M high-definition face images of the high-quality face database, and arranging the results by similarity to form a face image set;
d7: and calculating the hit rate capability of the reconstructed and restored human images according to the human face image set.
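The D4-D7 loop can be sketched as follows; a minimal sketch, again assuming externally supplied blur/restore/similarity functions (hypothetical placeholders, not the patent's actual implementations) and a plain Python list standing in for the high-quality face database:

```python
def hit_rate_capability(sample_library, database, blur, restore,
                        similarity, k=10):
    """D4-D7: blur and reconstruct each sample A, retrieve the k images
    of the high-quality database most similar to the reconstruction A2
    (the face image set of D6), and count a hit when the original A
    itself appears in that set; return hits / N."""
    hits = 0
    for image_a in sample_library:
        restored_a2 = restore(blur(image_a))            # D4-D5
        face_image_set = sorted(
            database, key=lambda img: similarity(restored_a2, img),
            reverse=True)[:k]                           # D6: Top-k set
        if image_a in face_image_set:                   # D7: hit or miss
            hits += 1
    return hits / len(sample_library)
```

The parameter `k` selects one of the Top-k gradients described in embodiment two.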
Preferably, the method further comprises the following steps between D1 and D2:
d11: purifying the high-definition face images of the training library.
Preferably, D7 is: whether the high-definition face image sample A extracted in D4 exists in the face image set is taken as the judgment standard; if sample A exists in the face image set it is a hit, otherwise it is a miss. The number of hit samples A is accumulated and divided by the N high-definition face images of the sample library to obtain the hit-rate capability of the reconstructed and restored portrait.
As a preferred solution, the hit-rate capability of the reconstructed and restored portrait is calculated as:

H = (h1 + h2 + ... + hN) / N

wherein H represents the hit-rate capability of the reconstructed and restored portrait, hi is 1 if the i-th sample is a hit and 0 otherwise (so the numerator is the total number of hit high-definition face image samples A), and N represents the total number of high-definition face images.
The application also provides an application system for the detection method of the face image reconstruction and restoration method, comprising:
a training module: used for training the training model, blurring samples to obtain blurred samples A1, and reconstructing and restoring the blurred samples A1 to generate high-resolution face image samples A2;
a processing module: used for calculating the reconstructed-portrait similarity capability or the reconstructed-portrait hit-rate capability of the reconstructed high-resolution face image samples A2;
a server: used for storing and organizing the high-definition face images serving as the training model, and for data transmission;
a client: used for displaying processing results and sending requests to the server;
the server is respectively connected with the client and the processing module, and the processing module is connected with the training module.
As a preferred scheme, the application system further comprises a storage module for storing and backing up the data in the server; the storage module is connected with the server.
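The module wiring above can be sketched as plain classes; a minimal sketch with hypothetical names (the patent does not prescribe an implementation), in which the server hands images to the processing module and the processing module drives the training module:

```python
class TrainingModule:
    """Blurs a sample and reconstructs it (A -> A1 -> A2); the real
    system would call interpolation and a super-resolution model here."""
    def reconstruct(self, image):
        return image  # placeholder for the blur + reconstruction round trip

class ProcessingModule:
    """Computes the similarity capability of reconstructed samples."""
    def __init__(self, trainer):
        self.trainer = trainer

    def evaluate(self, samples, similarity):
        scores = [similarity(self.trainer.reconstruct(s), s) for s in samples]
        return sum(scores) / len(scores)

class Server:
    """Stores the high-definition face images and relays client requests."""
    def __init__(self, processor, images):
        self.processor, self.images = processor, images

    def handle_request(self, similarity):
        return self.processor.evaluate(self.images, similarity)
```

A storage module, as in the preferred scheme, would back up `Server.images` and the computed results.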
The invention aims to overcome the defects of existing face image recognition technology and to solve the large biometric-recognition error caused by the unclear imaging of low-resolution face images in biometric systems. In this application a training model is trained: its samples are first blurred, then subjected to super-resolution reconstruction and restoration, and after the face reconstruction work is finished, the restored high-resolution face images are evaluated by calculating the reconstructed-portrait similarity capability or the reconstructed-portrait hit-rate capability. The core principle of the invention is: reconstruct a high-definition face image such that, after being blurred, it resembles the provided blurred image, thereby simulating an approximate high-definition face for the provided blurred image. This algorithm differs from traditional algorithms such as face recognition and breaks through the forward rule of the traditional face recognition principle, providing a brand-new approach to the solution: the image reconstructed by the algorithm is an optimal solution close to the answer. The method has low cost, simple operation, high reliability and strong reusability, and can meet practical requirements of different occasions. The stronger the calculated similarity capability or hit-rate capability of the reconstructed portrait, the higher the accuracy and feasibility of the face super-resolution reconstruction and restoration method, and the method is applicable to different scenes such as public-security criminal investigation and intelligent security.
Drawings
FIG. 1 is a calculation process of the similarity capability of reconstructed faces;
FIG. 2 is a calculation process of the hit rate capability of the reconstructed and restored portrait;
FIG. 3 is a hit rate trend graph according to the fourth embodiment;
FIG. 4 is a hit rate trend graph for the sixth example;
FIG. 5 is a schematic structural diagram of embodiment seven of the present application;
1. training module; 2. processing module; 3. server; 4. client; 5. storage module.
Detailed Description
The following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings; it should be noted that the specific embodiments described herein are only for illustrating and explaining the present invention and are not to be construed as limiting the present invention.
The first embodiment is as follows:
the embodiment provides a detection method of a face image reconstruction and restoration method, which comprises the following steps:
s1: preparing high-quality high-definition face images as a training library, and taking the whole training library as the training model. A high-quality high-definition face image means an image of high pixel count that must be real, excluding non-real faces such as animals and cartoons. For example: 120,000 high-quality face pictures of 1024 × 1024 pixels are adopted as the training library;
s11: purifying the high-definition face images in the training library. Because an excellent data set needs a good training source, the high-definition face images in the training library are processed for balanced distribution to ensure an even distribution of the training model, e.g. male/female ratio, age distribution and accessory ratio. In this step, face liveness detection is used to directly remove non-live sources from the face library, face attribute analysis is adopted to ensure homogeneous training ratios, and face similarity comparison is then carried out: for a group of faces with high mutual similarity only one face image is kept, avoiding duplicated face image data and improving the quality of the high-definition face images in the training library;
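The purification pass can be sketched as follows; a minimal sketch in which the hypothetical `is_live` and `similarity` callbacks stand in for the liveness-detection and face-comparison services (attribute balancing of sex/age/accessory ratios would be a further pass and is omitted here):

```python
def purify(library, is_live, similarity, dup_threshold=0.95):
    """S11 sketch: remove non-live sources, then keep only one image
    from each group of near-duplicates whose mutual similarity exceeds
    the threshold."""
    live_images = [img for img in library if is_live(img)]
    kept = []
    for img in live_images:
        if all(similarity(img, existing) < dup_threshold
               for existing in kept):
            kept.append(img)        # first representative of its group
    return kept
```

M images would then be drawn at random from the returned list to form the high-quality face database of S12.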
s12: randomly extracting M high-definition face images from the purified training library to serve as a high-quality face database, which is used as the basic experimental library. For example: M is 60000, i.e. 60000 high-definition face images are randomly extracted from the purified data source as the high-quality face database;
s2: randomly extracting N high-definition face images from the high-quality face database to serve as a sample library. For example: N is 300, i.e. 300 high-definition face images are randomly extracted from the high-quality face database as the sample library;
s3: randomly extracting a high-definition face image A from the sample library and blurring it to obtain a blurred sample A1. The blurring may adopt prior-art methods such as direct interpolation amplification; the specific method is not limited, and technicians can choose according to circumstances. For example: the blurred sample A1 has a resolution of 32 × 32 pixels;
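The interpolation blur of S3 can be sketched as a downscale-then-upscale round trip; in practice a library call such as `cv2.resize` with bilinear or bicubic interpolation would be used, but a pure-Python nearest-neighbour version (illustrative only) shows where the information loss happens:

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list (one grey channel)."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

def make_blurred_sample(image_a, low=32):
    """S3 sketch: shrink the high-definition face image A to a low
    resolution (e.g. 32 x 32) and enlarge it back, giving sample A1;
    the detail discarded at the low resolution is what makes A1 blurred."""
    h, w = len(image_a), len(image_a[0])
    small = resize_nearest(image_a, low, low)   # information discarded here
    return resize_nearest(small, h, w)          # re-enlarged but blurred
```

The returned A1 has the original dimensions but only the detail of a `low` × `low` image, matching the 32 × 32 blurred sample of the example.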
s4: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2. For example: the reconstructed high-definition face image sample A2 has a resolution of 1024 × 1024 pixels. The reconstruction of the blurred sample A1 adopts a prior-art face super-resolution reconstruction and restoration method. One realization of the computation: face alignment is achieved through dlib and OpenCV to detect and align the picture resources; if detection succeeds, the next step proceeds; restoration is carried out through PULSE; and result similarity is compared, and the hit rate calculated, through the Baidu face-comparison interface. Other prior-art calculation methods can also be used for the face super-resolution reconstruction work, and technicians can choose accordingly. In S1, the adversarial training of two neural networks is used as the training criterion and can be trained with back-propagation; the training process needs neither an inefficient Markov-chain method nor various approximate inference, which greatly reduces the training difficulty and improves the training efficiency of the generative model. During face construction, to ensure the "correspondence" between the output image and the input image, a "downscaling loss" method is applied in the model, and the construction result also carries multiple adjustable parameters, such as skin colour, age, gender, hairstyle, posture and expression; the constructed restored picture can be fine-tuned by adjusting these parameter values, so the customers' requirements can be met to the greatest extent.
When the generator network of the model proposes a clear image as output, the discriminator network reduces the resolution of that clear image to the level of the input image; the discriminator then compares the similarity between the downscaled image and the input image, and only when that similarity is sufficiently high does it decide that the clear picture proposed by the network can be output;
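The discriminator check just described can be sketched as follows; this is a hedged illustration that models downscaling as average pooling and acceptance as a mean-absolute-error threshold (PULSE's actual downscaling operator and criterion differ):

```python
def downscale(img, factor):
    """Average-pool a 2-D list by `factor` (a simple downscaling model)."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [[sum(img[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor)) / factor ** 2
             for c in range(w)] for r in range(h)]

def passes_downscaling_loss(proposed_hr, input_lr, factor, tol=1.0):
    """Shrink the proposed high-resolution image back to the input
    resolution and accept it only when it stays close (mean absolute
    error within `tol`) to the low-resolution input."""
    shrunk = downscale(proposed_hr, factor)
    n = len(input_lr) * len(input_lr[0])
    err = sum(abs(a - b) for row_s, row_i in zip(shrunk, input_lr)
              for a, b in zip(row_s, row_i)) / n
    return err <= tol
```

Only high-resolution candidates that pass this test are "correspondent" with the blurred input and eligible for output.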
s5: carrying out face image similarity calculation between the high-resolution face image sample A2 and the high-definition face image A, and obtaining the similarity percentage between sample A2 and image A;
s6: repeating steps S3-S5, calculating the similarity of the N high-definition face images in the sample library one by one, and accumulating and averaging to obtain the similarity capability of the reconstructed and restored face images of the training model.
In the above steps, the higher the calculated similarity capability of the reconstructed face images, the stronger the accuracy and feasibility of the face super-resolution reconstruction and restoration method, i.e. the higher the accuracy of the obtained face database.
As a preferred solution, the similarity capability of the reconstructed and restored face images of the training model in S6 is calculated as:

P = (S1 + S2 + ... + SN) / N

wherein P represents the similarity capability of the reconstructed and restored face images, N represents the total number of high-definition face images, and the numerator is the sum of the similarities of the N high-definition face images.
In this embodiment, one or more low-resolution face images can be restored to clear, high-resolution face images. The application can perform feature engineering automatically, and the same model, trained on data from different scenes, can adapt to those scenes, such as public-security criminal investigation and intelligent security.
Example two:
the embodiment provides another detection method for a face image reconstruction and restoration method, which comprises the following steps:
d1: preparing high-quality high-definition face images as a training library, and taking the training library as the training model. A high-quality high-definition face image means an image of high pixel count that must be real, excluding non-real faces such as animals and cartoons. For example: 120,000 high-quality face pictures of 1024 × 1024 pixels are adopted as the training library;
d11: purifying the high-definition face images in the training library. Because an excellent data set needs a good training source, the high-definition face images in the training library are processed for balanced distribution to ensure an even distribution of the training model, e.g. male/female ratio, age distribution and accessory ratio. In this step, face liveness detection is used to directly remove non-live sources from the face library, face attribute analysis is adopted to ensure homogeneous training ratios, and face similarity comparison is then carried out: for a group of faces with high mutual similarity only one face image is kept, avoiding duplicated face image data and improving the quality of the high-definition face images in the training library;
d2: randomly extracting M high-definition face images from the purified training library to serve as a high-quality face database, which is used as the basic experimental library. For example: M is 60000, i.e. 60000 high-definition face images are randomly extracted from the purified data source as the high-quality face database;
d3: randomly extracting N high-definition face images from the high-quality face database to serve as a sample library. For example: N is 300, i.e. 300 high-definition face images are randomly extracted from the high-quality face database as the sample library;
d4: randomly extracting a high-definition face image A from the sample library and blurring it to obtain a blurred sample A1. The blurring may adopt prior-art methods such as direct interpolation amplification; the specific method is not limited, and technicians can choose according to circumstances. For example: the blurred sample A1 has a resolution of 32 × 32 pixels;
d5: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2. For example: the reconstructed high-definition face image sample A2 has a resolution of 1024 × 1024 pixels. The reconstruction of the blurred sample A1 adopts a prior-art face super-resolution reconstruction and restoration method. One realization: face alignment is achieved through dlib and OpenCV to detect and align the picture resources; if detection succeeds, the next step proceeds; restoration is carried out through PULSE; and result similarity is compared, and the hit rate calculated, through the Baidu face-comparison interface. Other prior-art calculation methods can also be used; the main inventive point of this application is calculating the similarity and hit rate of the high-resolution face image set, so the specific details of face super-resolution reconstruction are not repeated here. In D1, the adversarial training of two neural networks is used as the training criterion and can be trained with back-propagation; the training process needs neither an inefficient Markov-chain method nor various approximate inference, which greatly reduces the training difficulty and improves the training efficiency of the generative model. During face construction, to ensure the "correspondence" between the output image and the input image, a "downscaling loss" method is applied in the model, and the construction result also carries multiple adjustable parameters, such as skin colour, age, gender, hairstyle, posture and expression; the constructed restored picture can be fine-tuned by adjusting these parameter values, so the customers' requirements can be met to the greatest extent;
when the generator network of the model proposes a clear image as output, the discriminator network reduces the resolution of that clear image to the level of the input image; the discriminator then compares the similarity between the downscaled image and the input image, and only when that similarity is sufficiently high does it decide that the clear picture proposed by the network can be output;
d6: retrieving the reconstructed high-resolution face image sample A2 by similarity against the M high-definition face images of the high-quality face database, and arranging the results by similarity to form a face image set;
the set of facial images may include 5 gradients, such as: top1, Top5, Top10, Top20, Top50 and Top1 indicate that the M high-definition face images in the high-quality face database include 1 sample with high similarity to the high-resolution face image sample a2, and similarly, Top5 indicates that the M high-definition face images in the high-quality face database include 5 samples with high similarity to the high-resolution face image sample a2, and the following Top10, Top20 and Top50 refer to Top1 and Top5, which are not specifically explained.
The face image set formed by arranging the face image sets according to the similarity can adopt high-definition face image sets of 1,5,10,20 and 50 before arrangement, and technicians can set the high-definition face image sets according to specific requirements.
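The Top1/Top5/Top10/Top20/Top50 gradients reduce to a membership test on the ranked retrieval list; a minimal sketch, where `ranked_ids` is assumed to be the D6 face image set already sorted by descending similarity:

```python
def topk_hits(ranked_ids, target_id, ks=(1, 5, 10, 20, 50)):
    """For each gradient k, report whether the original sample A
    (target_id) appears among the k most similar retrieved images."""
    return {k: target_id in ranked_ids[:k] for k in ks}
```

A sample counted as a hit under Top5 is automatically a hit under Top10 and above, which is why the gradients yield a non-decreasing trend of the kind plotted in FIG. 3 and FIG. 4.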
D7: calculating the hit-rate capability of the reconstructed and restored portrait according to the face image set. Specifically, whether the high-definition face image sample A extracted in D4 exists in the face image set is the judgment standard: if sample A exists in the set it is a hit, otherwise a miss. The number of hit samples A is accumulated and divided by the N high-definition face images of the sample library to obtain the hit-rate capability: a hit sample contributes 1 to the numerator and a miss contributes 0, i.e. missed samples are not counted in the numerator sum.
In the above steps, the higher the calculated hit-rate capability of the reconstructed and restored portrait, the stronger the accuracy and feasibility of the face super-resolution reconstruction and restoration method, i.e. the higher the accuracy of the obtained face database.
As a preferred solution, the hit-rate capability of the reconstructed and restored portrait is calculated as:

H = (h1 + h2 + ... + hN) / N

wherein H represents the hit-rate capability of the reconstructed and restored portrait, hi is 1 if the i-th sample is a hit and 0 otherwise (so the numerator is the total number of hit high-definition face image samples A), and N represents the total number of high-definition face images.
Example three:
in this embodiment, a purified face database with a data volume of 60000 is selected, and 300 face images are extracted from it as a sample library to calculate the similarity of the reconstructed and restored face images, specifically:
a detection method of a face image reconstruction restoration method comprises the following steps:
step 1, randomly extracting a high-definition face image (sample A) from the sample library of 300 high-definition face images, and blurring it with a face image processing technique to obtain a blurred sample (sample A1) of 32 × 32 pixels; the face image processing technique uses a prior-art method such as interpolation;
step 2, reconstructing and restoring the blurred face image (sample A1) with a face image construction technique to generate a high-resolution face image (A2) of 1024 × 1024 pixels; the face image construction technique uses a prior-art face image super-resolution reconstruction and restoration method;
step 3, calculating the face image similarity between the high-definition face image (A2) and the initially extracted high-definition face image (sample A) to obtain the similarity percentage between them;
step 4, repeating steps 1 to 3 cyclically to obtain the similarity for each of the 300 samples in the sample library, then accumulating and averaging to obtain the similarity capability of the training model's reconstructed and restored portrait;
step 5, by calculation, with a face database of 60000 and a sample count of 300, the similarity capability of the reconstructed and restored portrait is 0.810777078244766, approximately 81.08%.
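Steps 1 to 4 of this embodiment can be sketched as a single evaluation loop; `blur`, `reconstruct` and `similarity` are assumed callables standing in for the prior-art techniques the patent cites:

```python
def similarity_capability(samples, blur, reconstruct, similarity):
    """Average, over all N samples, of the similarity between each
    reconstructed image A2 and its original high-definition sample A."""
    total = 0.0
    for a in samples:
        a1 = blur(a)                 # step 1: blurred 32 x 32 sample A1
        a2 = reconstruct(a1)         # step 2: reconstructed 1024 x 1024 image A2
        total += similarity(a2, a)   # step 3: per-sample similarity percentage
    return total / len(samples)     # step 4: accumulate and average
```

With 300 samples, this loop yields exactly the averaged similarity reported in step 5.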
Example four:
in this embodiment, a purified face database of 60000 images is selected, and 300 face images are extracted from it as a sample library to calculate the hit rate, specifically:
step 1, randomly extracting a high-definition face image (sample A) from the sample library of 300 high-definition face images, and blurring it with a face image processing technique to obtain a blurred sample (sample A1) of 32 × 32 pixels; the face image processing technique uses a prior-art method such as interpolation;
step 2, reconstructing and restoring the blurred face image (sample A1) with a face image construction technique to generate a high-resolution face image (A2) of 1024 × 1024 pixels; the face image construction technique uses a prior-art face image super-resolution reconstruction and restoration method;
step 3, placing the reconstructed and restored high-definition face image (A2) into the 60000-image face database for similarity retrieval, forming a face image array sorted by similarity; the array length covers 5 gradients, namely Top1, Top5, Top10, Top20 and Top50, i.e., the retrieved images ranked by similarity form a face image set, and the high-definition face image sets of the top 1, 5, 10, 20 and 50 entries are taken;
step 4, calculating the hit rate capability of the reconstructed and restored portrait according to the face image set, taking whether the high-definition face image (sample A) extracted in step 1 appears in the returned face image array as the judgment criterion; if sample A appears in the set, it is a hit, otherwise a miss; the number of hit high-definition face image samples A is accumulated and divided by the 300 high-definition face images in the sample library to obtain the hit rate capability of the reconstructed and restored portrait;
By calculation, with a face database of 60000 and a sample count of 300, the face restoration hit rates are shown in Table 1 below, and the hit rate trend graph is shown in Fig. 3:
Gear position    Top1    Top5    Top10    Top20    Top50
Hit rate         59%     76%     80%      85%      91%
TABLE 1
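The similarity retrieval in step 3 above, which returns the top-ranked entries for each gradient, can be sketched as follows; the `similarity` callable is an illustrative assumption, since the patent does not fix a particular metric:

```python
def retrieve_top_k(query, database, similarity, k):
    """Rank the whole face database by similarity to the reconstructed image
    (A2) and return the Top-k entries (k in {1, 5, 10, 20, 50})."""
    ranked = sorted(database, key=lambda item: similarity(query, item), reverse=True)
    return ranked[:k]
```

A larger k admits more candidates into the returned set, which is why the hit rate in Table 1 rises monotonically from Top1 to Top50.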
Example five:
in this embodiment, a purified face database of 10000 images is selected, and 50 face images are extracted from it as a sample library to calculate the similarity capability of the reconstructed and restored portrait.
A detection method of a face image reconstruction restoration method comprises the following steps:
step 1: randomly extracting a high-definition face image (sample A) from the sample library of 50 high-definition face images, and blurring it with a face image processing technique to obtain a blurred sample (sample A1) of 32 × 32 pixels; the face image processing technique uses a prior-art method such as interpolation;
step 2: reconstructing and restoring the blurred face image (sample A1) with a face image construction technique to generate a high-resolution face image (A2) of 1024 × 1024 pixels; the face image construction technique uses a prior-art face image super-resolution reconstruction and restoration method;
step 3: calculating the face image similarity between the high-definition face image (A2) and the initially extracted high-definition face image (sample A) to obtain the similarity percentage between them;
step 4: repeating steps 1 to 3 cyclically to obtain the similarity for each of the 50 samples in the sample library, then accumulating and averaging to obtain the similarity capability of the training model's reconstructed and restored portrait;
step 5: by calculation, with a face database of 10000 and a sample count of 50, the similarity capability of the reconstructed and restored portrait is 0.672143508139999, approximately 67.21%.
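The blurring in step 1, which reduces a high-definition image to 32 × 32 pixels, can be approximated by block averaging; this is a simplified stand-in for the interpolation method the patent cites, and the list-of-rows image representation is an illustrative assumption:

```python
def blur_by_block_average(img, size=32):
    """Average-pool a square image (a list of rows of pixel values) down to
    size x size pixels, discarding high-frequency facial detail."""
    b = len(img) // size  # edge length of each averaged block
    out = []
    for i in range(size):
        row = []
        for j in range(size):
            block = [img[i * b + di][j * b + dj]
                     for di in range(b) for dj in range(b)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

In practice a library resampler (e.g. bicubic interpolation) would be used instead, but the effect is the same: the 1024 × 1024 sample A becomes a low-detail 32 × 32 sample A1.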
Example six:
in this embodiment, a purified face database of 10000 images is selected, and 50 face images are extracted from it as a sample library to calculate the hit rate capability of the reconstructed and restored portrait.
A detection method of a face image reconstruction restoration method comprises the following steps:
step 1: randomly extracting a high-definition face image (sample A) from the sample library of 50 high-definition face images, and blurring it with a face image processing technique to obtain a blurred sample (sample A1) of 32 × 32 pixels; the face image processing technique uses a prior-art method such as interpolation;
step 2: reconstructing and restoring the blurred face image (sample A1) with a face image construction technique to generate a high-resolution face image (A2) of 1024 × 1024 pixels; the face image construction technique uses a prior-art face image super-resolution reconstruction and restoration method;
step 3: placing the reconstructed high-definition face image (A2) into the 10000-image face database for similarity retrieval, forming a face image array sorted by similarity; the returned array length covers 5 gradients, namely Top1, Top5, Top10, Top20 and Top50, returning the high-definition face image sets of the top 1, 5, 10, 20 and 50 entries respectively;
step 4: calculating the hit rate capability of the reconstructed portrait according to the face image array, taking whether the high-definition face image (sample A) extracted in step 1 appears in the returned face image set as the judgment criterion; if sample A appears in the set, it is a hit, otherwise a miss; the number of hit high-definition face image samples A is accumulated and divided by the 50 high-definition face images in the sample library to obtain the hit rate capability of the reconstructed and restored portrait;
By calculation, with a face database of 10000 and a sample count of 50, the face restoration hit rates are shown in Table 2 below, and the hit rate trend graph is shown in Fig. 4:
Gear position    Top1    Top5    Top10    Top20    Top50
Hit rate         40%     60%     66%      72%      84%
TABLE 2
Example seven:
this embodiment provides an application system of the face image reconstruction and restoration detection method, comprising:
Training module 1: used for training the training model, blurring it to obtain a blurred sample A1, and reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2. More specifically: the training model is a training library; N high-definition face images are randomly extracted from the training library as a sample library; one high-definition face image A is randomly extracted from the sample library and blurred to obtain a blurred sample A1, which is then reconstructed and restored to generate a high-resolution face image sample A2;
Processing module 2: used for calculating the similarity capability or the hit rate capability of the reconstructed and restored high-resolution face image sample A2;
Server 3: used for storing and organizing the high-definition face images serving as the training model, and for transmitting data;
Client 4: used for displaying the processing result and sending requests to the server;
The server is connected to the client and to the processing module, and the processing module is connected to the training module; these connections may be signal connections, electrical connections, or the like.
Preferably, the application system of the face image reconstruction and restoration detection method further comprises a storage module 5 for storing and backing up the data in the server 3; the storage module is connected to the server by a signal connection, an electrical connection, or the like.
In this embodiment, the training module 1 trains the training model, completes the blurring and restoration processing, and transmits the training result to the processing module 2; the processing module 2 calculates the similarity or hit rate of the high-resolution face image sample A2 reconstructed and restored in the training module 1, and then transmits the calculation result to the server for storage; the server 3 transmits the calculation result to the client 4 for display, so that staff can review it.
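The data flow among the modules of this embodiment can be sketched as follows; the class and method names are illustrative, not from the patent, and the callables stand in for the prior-art blurring, reconstruction and similarity techniques:

```python
class TrainingModule:
    """Module 1: blurs a sample A and reconstructs it into sample A2."""
    def __init__(self, blur, reconstruct):
        self.blur, self.reconstruct = blur, reconstruct

    def run(self, sample_a):
        a1 = self.blur(sample_a)       # blurred sample A1
        return self.reconstruct(a1)    # high-resolution sample A2


class ProcessingModule:
    """Module 2: scores the reconstructed sample against the original."""
    def __init__(self, similarity):
        self.similarity = similarity

    def score(self, a2, sample_a):
        return self.similarity(a2, sample_a)


class Server:
    """Module 3: stores calculation results for the client to display."""
    def __init__(self):
        self.results = []

    def store(self, result):
        self.results.append(result)

    def latest(self):
        return self.results[-1]
```

A client would query `Server.latest()` (or a similar request) to display the stored similarity or hit-rate result.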
In summary, by adopting the above technical scheme, the invention aims to overcome the shortcomings of existing face image recognition technology, in particular the large biometric recognition error caused by unclear imaging of low-resolution face images in biometric systems. The reconstruction and restoration in this application adopts a prior-art face image super-resolution reconstruction and restoration method: a neural-network adversarial model is used, Gaussian noise is added randomly, and noise is introduced so that the generator produces random details; an autonomous learning model modifies the input of each level separately, controlling the visual features represented by that level without affecting the other levels, and face super-resolution reconstruction and restoration is finally achieved through multi-level calculation. This super-resolution reconstruction and restoration method is prior art and is described only briefly here. After the face reconstruction and restoration work is finished, the restored high-resolution face image is placed in the face image library for one-by-one similarity comparison, and the similarity capability of the reconstructed and restored portrait is calculated; the hit rate capability of the reconstructed portrait can also be obtained.
The core principle of the invention is to reconstruct a high-definition face image such that, after being blurred, it resembles the provided blurred image, thereby simulating an approximate high-definition face image for the given blurred input. This algorithm differs from traditional algorithms such as face recognition: it breaks the forward-inference rule of the traditional face recognition operating principle and offers a new approach to the problem, the image reconstructed by the algorithm being an optimal solution close to the answer. The method has low cost, simple operation, high reliability and strong reusability, and can meet the actual requirements of different occasions. The higher the calculated similarity capability or hit rate capability of the reconstructed portrait, the stronger the accuracy and feasibility of the face super-resolution reconstruction and restoration method, and the higher the accuracy of the final face database; the method is applicable to fields such as public security criminal investigation and intelligent security across different scenes.
The above-described preferred embodiments according to the present invention are intended to teach those skilled in the art that various changes and modifications can be made without departing from the scope of the invention.
The preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the details of the above embodiments, and various simple modifications can be made to the technical solution of the present invention within the technical idea of the present invention, and these simple modifications are included in the scope of protection of the present invention.
It should be noted that, in the foregoing embodiments, various features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various combinations that are possible in the present application will not be described separately.
In addition, any combination of the various embodiments of the present application can be made, and the present application should be considered as disclosed in the present application as long as the combination does not depart from the spirit of the present application.

Claims (10)

1. A detection method for a face image reconstruction restoration method is characterized by comprising the following steps:
s1: preparing a high-definition face image as a training library, and taking the training library as a training model;
s2: randomly extracting N high-definition face images from a training library to serve as a sample library;
s3: randomly extracting a high-definition face image A from a sample library for fuzzification processing to obtain a blurred sample A1;
s4: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2;
s5: carrying out face image similarity calculation on the high-resolution face image sample A2 and the high-definition face image A, and calculating to obtain the similarity percentage between the high-resolution face image sample A2 and the high-definition face image sample A;
s6: and repeating the steps S1-S5, calculating the similarity of the N high-definition face images in the sample library one by one, and accumulating and averaging to obtain the similarity capability of the reconstructed and restored face image of the training model.
2. The method for detecting the reconstruction restoration method of the human face image as claimed in claim 1, further comprising the following steps between the steps of S1 and S2:
s11: purifying high-definition face images in a training library;
s12: and randomly extracting M high-definition face images from the purified training library to serve as a high-quality face database.
3. The method for detecting the reconstruction and restoration method of the facial image as claimed in claim 2, wherein in step S2, the randomly extracting N high definition facial images from the training library as the sample library is performed by randomly extracting N high definition facial images from the high quality facial database as the sample library.
4. The method for detecting the reconstruction and restoration method of the human face image as claimed in claim 1, wherein the calculation formula of the similarity capability of the reconstructed and restored portrait of the training model in S6 is:

$S = \dfrac{\sum_{i=1}^{N} s_i}{N}$

wherein $S$ represents the similarity capability of the reconstructed and restored portrait; $N$ represents the total of N high-definition face images; and $\sum_{i=1}^{N} s_i$ represents the sum of the similarities of the N high-definition face images.
5. A detection method for a face image reconstruction restoration method is characterized by comprising the following steps:
d1: preparing a high-definition face image as a training library, and taking the training library as a training model;
d2: randomly extracting M high-definition face images from a training library to serve as a high-quality face database;
d3: randomly extracting N high-definition face images from a high-quality face database to serve as a sample library;
d4: randomly extracting a high-definition face image A from a sample library for fuzzification processing to obtain a blurred sample A1;
d5: reconstructing and restoring the blurred sample A1 to generate a high-resolution face image sample A2;
d6: the reconstructed and restored high-resolution face image sample A2 is placed in M high-quality face databases for similarity retrieval, and is arranged according to the similarity to form a face image set;
d7: and calculating the hit rate capability of the reconstructed and restored human images according to the human face image set.
6. The detection method of human face image reconstruction restoration method according to claim 5,
the method also comprises the following steps between D1 and D2:
d11: and purifying the high-definition face image of the training library.
7. The detection method of human face image reconstruction restoration method according to claim 5,
the D7 is: taking whether the high-definition face image sample A extracted in the step D4 exists in the face image set as a judgment standard, if the high-definition face image sample A exists in the face image set, the high-definition face image sample A is hit, otherwise, the high-definition face image sample A is not hit; and accumulating the number of the hit high-definition face image samples A and carrying out quotient calculation on N high-definition face images in the sample library to obtain the hit rate capability of the reconstructed and restored face image.
8. The method according to claim 7, wherein the hit rate capability of the reconstructed human image is calculated as:

$H = \dfrac{\sum_{i=1}^{N} h_i}{N}$

wherein $H$ represents the hit rate capability of the reconstructed and restored portrait; $\sum_{i=1}^{N} h_i$ represents the sum of the number of all hit high-definition face image samples A; and $N$ represents the total of N high-definition face images.
9. An application system of a face image reconstruction restoration detection method is characterized by comprising the following steps:
a training module: the human face image processing method comprises the steps of training a training model, fuzzifying the training model to obtain a blurred sample A1, reconstructing and restoring the blurred sample A1 to generate a high-resolution human face image sample A2;
a processing module: calculating the similarity capability of the reconstructed and restored human images or the hit rate capability of the reconstructed and restored human images of the reconstructed and restored high-resolution human face image sample A2;
a server: storing and sorting the high-definition face images as training models and transmitting data;
a client: used for displaying the processing result and sending a request to the server;
the server is respectively connected with the client and the processing module, and the processing module is connected with the training module.
10. The application system of the facial image reconstruction restoration detection method according to claim 9, further comprising a storage module, wherein the storage module is used for storing and backing up data in the server; the storage module is connected with the server.
CN202210250302.9A 2022-03-15 2022-03-15 Detection method and application system of face image reconstruction and restoration method Pending CN114359113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210250302.9A CN114359113A (en) 2022-03-15 2022-03-15 Detection method and application system of face image reconstruction and restoration method


Publications (1)

Publication Number Publication Date
CN114359113A true CN114359113A (en) 2022-04-15

Family

ID=81094831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210250302.9A Pending CN114359113A (en) 2022-03-15 2022-03-15 Detection method and application system of face image reconstruction and restoration method

Country Status (1)

Country Link
CN (1) CN114359113A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334847A (en) * 2018-02-06 2018-07-27 哈尔滨工业大学 A kind of face identification method based on deep learning under real scene
CN111489290A (en) * 2019-04-02 2020-08-04 同观科技(深圳)有限公司 Face image super-resolution reconstruction method and device and terminal equipment
CN111967408A (en) * 2020-08-20 2020-11-20 中科人工智能创新技术研究院(青岛)有限公司 Low-resolution pedestrian re-identification method and system based on prediction-recovery-identification
CN112507617A (en) * 2020-12-03 2021-03-16 青岛海纳云科技控股有限公司 Training method of SRFlow super-resolution model and face recognition method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sachit Menon et al.: "PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models", CVPR

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612211A (en) * 2023-05-08 2023-08-18 山东省人工智能研究院 Face image identity synthesis method based on GAN and 3D coefficient reconstruction
CN116612211B (en) * 2023-05-08 2024-02-02 山东省人工智能研究院 Face image identity synthesis method based on GAN and 3D coefficient reconstruction

Similar Documents

Publication Publication Date Title
CN109615582B (en) Face image super-resolution reconstruction method for generating countermeasure network based on attribute description
CN109919031B (en) Human behavior recognition method based on deep neural network
CN110287805B (en) Micro-expression identification method and system based on three-stream convolutional neural network
CN111079655B (en) Method for recognizing human body behaviors in video based on fusion neural network
CN108960059A (en) A kind of video actions recognition methods and device
CN112699786B (en) Video behavior identification method and system based on space enhancement module
CN112560810B (en) Micro-expression recognition method based on multi-scale space-time characteristic neural network
CN110728209A (en) Gesture recognition method and device, electronic equipment and storage medium
CN111814661A (en) Human behavior identification method based on residual error-recurrent neural network
CN109801232A (en) A kind of single image to the fog method based on deep learning
CN110751018A (en) Group pedestrian re-identification method based on mixed attention mechanism
CN106295501A (en) The degree of depth based on lip movement study personal identification method
CN112906493A (en) Cross-modal pedestrian re-identification method based on cross-correlation attention mechanism
CN111368734B (en) Micro expression recognition method based on normal expression assistance
CN109961407A (en) Facial image restorative procedure based on face similitude
CN111160356A (en) Image segmentation and classification method and device
CN113379597A (en) Face super-resolution reconstruction method
CN114359113A (en) Detection method and application system of face image reconstruction and restoration method
Liu et al. A multi-stream convolutional neural network for micro-expression recognition using optical flow and evm
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
Wang et al. Meta-Auxiliary Learning for Micro-Expression Recognition
CN113420776B (en) Multi-side joint detection article classification method based on model fusion
Zhang et al. Seal: A framework for systematic evaluation of real-world super-resolution
CN114036553A (en) K-anonymity-combined pedestrian identity privacy protection method
CN117392729A (en) End-to-end micro expression recognition method based on pre-training action extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220415