CN111144352A - Secure transmission and recognition method for intelligent sensing of face images - Google Patents

Secure transmission and recognition method for intelligent sensing of face images Download PDF

Info

Publication number
CN111144352A
CN111144352A
Authority
CN
China
Prior art keywords
face
image
face image
user
end server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911400109.3A
Other languages
Chinese (zh)
Other versions
CN111144352B (en)
Inventor
李运发
涂逸飞
王云超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201911400109.3A priority Critical patent/CN111144352B/en
Publication of CN111144352A publication Critical patent/CN111144352A/en
Application granted granted Critical
Publication of CN111144352B publication Critical patent/CN111144352B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44 Secrecy systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50 Maintenance of biometric data or enrolment thereof
    • G06V40/53 Measures to keep reference information secret, e.g. cancellable biometrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a secure transmission and recognition method for intelligent sensing of face images, which comprises the following steps: first, an acquisition and feature construction algorithm for the face images of legal users is built at the server side; second, a face image sensing and encryption algorithm for users with unknown identity is built at the image acquisition front end of the intelligent sensor; third, a face image receiving and decryption algorithm for users with unknown identity is built at the server side; and finally, a secure recognition algorithm for the face images of users with unknown identity is built at the server side. The invention uses a chaotic sequence derived from the face image to generate an image position mapping matrix, and then scrambles the pixel positions of the face image with a dedicated algorithm so as to hide its true values. The new encryption algorithm can effectively resist image statistical attacks and exhaustive (brute-force) attacks, and offers good security protection.

Description

Secure transmission and recognition method for intelligent sensing of face images
Technical Field
The invention belongs to the field of secure transmission and recognition for intelligent sensing of face images in the Internet of Things, and aims to provide a secure recognition method for intelligently sensed face images in the Internet of Things. The method involves an algorithm for acquiring the face images of legal users and constructing their features, an algorithm for sensing and encrypting the face images of users with unknown identity, an algorithm for receiving and decrypting the face images of users with unknown identity, and a secure recognition algorithm for the face images of users with unknown identity.
Background
With the rapid development of the Internet of Things, various intelligent sensor devices are widely used in daily life. Intelligent face image collectors can be deployed in banks, public security bureaus, courts, the military, government departments, industrial and mining enterprises and other organizations, so such collectors have developed rapidly. In the Internet of Things, the image information gathered by an intelligent face image collector is transmitted wirelessly and therefore faces many security problems during transmission, such as being tampered with, stolen or disturbed. To avoid these security threats to intelligently acquired face images, a secure and effective recognition method is needed.
In recent years, face image recognition has shifted from traditional comparison techniques to an intelligent level. Intelligent recognition of a face image identifies and judges the face by combining intelligent acquisition technology, computer graphics, neural network science, digital image processing, pattern recognition and wireless sensing technology with the unique facial features, contours, physiology and behavioral characteristics of the face. Secure recognition of a face image encrypts the face image with modern cryptographic technology, computer graphics, digital image processing, pattern recognition and wireless sensing technology, transmits it through network communication technology, and finally decrypts and authenticates the received image at the destination.
At present, the secure recognition of face images is basically performed as a form of identity authentication. The main approaches are as follows. (1) Identity authentication with a static password. In this scheme, the user keeps a fixed password throughout the authentication process and never changes it midway. Its advantages are that the authentication process is simple, involves no complex key calculation or communication, and is easy to implement. Its disadvantage is that the security of the static password system is easily lost: lawless persons can obtain the password by guessing, stealing, eavesdropping and other means. (2) Identity authentication with a dynamic password. In this scheme, the user encrypts, transmits, decrypts and authenticates information with a constantly changing password driven by time and a dynamically used token. Its advantage is high authentication security: it is hard for lawless persons to obtain the dynamic password by guessing, stealing or eavesdropping. Its disadvantages are that the calculation of the constantly changing password is complex, the encryption process is complex, and key negotiation and transmission are frequent. In addition, if the sending end and the receiving end cannot keep the time and token synchronized, the receiving end may be unable to receive or decrypt the ciphertext.
The above analysis shows that, in the Internet of Things, both the static-password and the dynamic-password identity authentication schemes can improve system security to a certain extent and each has its advantages. However, since both schemes also have clear disadvantages, they still face security problems in the Internet of Things, and in wireless transmission these problems become even greater. Under these conditions, a secure transmission and recognition method for intelligent sensing of face images is designed for the Internet of Things. The method first constructs an acquisition and feature construction algorithm for the face images of legal users at the server side. It then constructs a face image sensing and encryption algorithm for users with unknown identity at the image acquisition front end of the intelligent sensor. On this basis, a face image receiving and decryption algorithm for users with unknown identity is constructed at the server side. Finally, a secure recognition algorithm for the face images of users with unknown identity is constructed at the server side.
Disclosure of Invention
In view of the above technical problems in the prior art, the present invention constructs, in the Internet of Things: (1) an acquisition and feature construction algorithm for the face images of legal users; (2) a face image sensing and encryption algorithm for users with unknown identity; (3) a face image receiving and decryption algorithm for users with unknown identity; and (4) a secure recognition algorithm for the face images of users with unknown identity. Through these four algorithms, secure recognition of intelligently acquired face images is achieved in the Internet of Things.
To solve the above problems, the acquisition and feature construction algorithm for the face images of legal users starts from the requirements of intelligent face image acquisition in the Internet of Things and weighs the advantages and disadvantages of the static-password and dynamic-password identity authentication schemes. On the one hand, the acquired face image is converted to grayscale, and the grayed face image information is associated with the identity information to build an information base of legal users' face images, which makes it convenient to retrieve the face image of a user with unknown identity. On the other hand, a sample set and a feature face space are built from the face grayscale images, and the difference vector between each legal user's acquired face grayscale image and the average face is projected onto the feature face space, which makes it convenient to recognize the face image of a user with unknown identity.
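The feature construction just described follows an eigenface-style approach (detailed as Algorithm 1 below, whose equations (1)-(7) are reproduced only as images in this text). The following Python sketch is therefore only an illustration of a standard eigenface construction under that assumption; the function build_feature_face_space, the treatment of each row of blocks as one sample, and the handling of the 99% contribution threshold are illustrative choices, not the claimed formulas.

import numpy as np

def build_feature_face_space(blocks: np.ndarray, contribution: float = 0.99):
    """Build an eigenface-style feature space from an M x N matrix of block gray values.

    blocks[i, j] is the gray value of block (i, j); each row X_i is treated as one sample.
    Returns (w, psi): w spans the feature face space, psi is the average face.
    """
    X = blocks.astype(float)                      # sample matrix X = [X_1, ..., X_M]^T
    psi = X.mean(axis=0)                          # average face (one plausible reading of eq. (1))
    D = X - psi                                   # difference faces d_i = X_i - psi
    A = D.T                                       # columns of A are the difference faces
    # Eigen-decompose the small matrix A^T A, then map back to eigenvectors of A A^T.
    eigvals, V = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1]
    eigvals, V = eigvals[order], V[:, order]
    # Keep the first K eigenvalues whose cumulative contribution rate reaches the threshold.
    ratio = np.cumsum(eigvals) / eigvals.sum()
    K = int(np.searchsorted(ratio, contribution)) + 1
    w = A @ V[:, :K]                              # eigenvectors of A A^T (the "eigenfaces")
    w /= np.linalg.norm(w, axis=0)                # orthonormalize the columns
    return w, psi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_blocks = rng.integers(0, 256, size=(8, 16))     # toy 8 x 16 block-gray matrix
    w, psi = build_feature_face_space(demo_blocks)
    feature_vector = w.T @ (demo_blocks[0] - psi)        # projection of one sample
    print(w.shape, feature_vector.shape)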
To solve the above problems, the algorithm for sensing and encrypting the face image of a user with unknown identity, on the one hand, uses a symmetric encryption key in the fusion calculation, which keeps the network transmission of the face image simple, avoids complex key calculation, and facilitates subsequent image recognition and authentication. On the other hand, the face image of the user with unknown identity is fused with a random image, so that the face image is not exposed during network transmission and its security is maintained.
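The fusion-based encryption described above is specified block by block in Algorithm 2 below, but the exact fusion formula (equation (9)) is reproduced only as an image in this text. The sketch below therefore illustrates only one possible invertible realization, under explicit assumptions: a logistic-map sequence seeded by the shared (a_0, β) is quantized to byte keys and XOR-fused with the image blocks and the random image. The names chaotic_keys, encrypt_blocks and decrypt_blocks are illustrative, not the patent's.

import numpy as np

def chaotic_keys(a0: float, beta: float, m: int, warmup: int = 15) -> np.ndarray:
    """Logistic map a_{i+1} = beta * a_i * (1 - a_i), quantized to one byte key per block row."""
    a, keys = a0, np.empty(m, dtype=np.uint8)
    for _ in range(warmup):                 # discard transient iterations (cf. the t <= 15 loop)
        a = beta * a * (1.0 - a)
    for i in range(m):
        a = beta * a * (1.0 - a)
        keys[i] = int(a * 256) % 256
    return keys

def encrypt_blocks(p1: np.ndarray, pr: np.ndarray, a0: float, beta: float) -> np.ndarray:
    """Fuse the face-block matrix p1 (M x N, uint8) with a random image pr using chaotic row keys."""
    keys = chaotic_keys(a0, beta, p1.shape[0])
    return p1 ^ pr ^ keys[:, None]          # XOR fusion is invertible by repeating the operation

def decrypt_blocks(e: np.ndarray, pr: np.ndarray, a0: float, beta: float) -> np.ndarray:
    """The receiver regenerates the same chaotic keys from the shared (a0, beta) and undoes the fusion."""
    return encrypt_blocks(e, pr, a0, beta)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    p1 = rng.integers(0, 256, size=(6, 8), dtype=np.uint8)   # toy face-block gray values
    pr = rng.integers(0, 256, size=(6, 8), dtype=np.uint8)   # toy random image P_r
    e = encrypt_blocks(p1, pr, a0=0.42, beta=3.77)
    assert np.array_equal(decrypt_blocks(e, pr, a0=0.42, beta=3.77), p1)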
To solve the above problems, the algorithm for receiving and decrypting the face image of a user with unknown identity, on the one hand, receives the encrypted image and, on the other hand, uses the symmetric key to decrypt the received encrypted image iteratively, thereby recovering the face image of the unknown-identity user and the random image, and preparing for the subsequent secure recognition service.
To solve the above problems, the secure recognition algorithm for the face image of a user with unknown identity fully considers the influence of image fusion, encryption and decryption on the original image, and uses the Euclidean norm to calculate the Euclidean distance between the feature space projection of the decrypted face image and that of the legal users' face images. By classifying according to rules on this Euclidean distance, the face image and identity of the unknown user are securely recognized, providing security support for the other application services of the system.
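Because the threshold formula (equation (15)) and the distance formula (equation (16)) of Algorithm 4 below are reproduced only as images in this text, the following sketch illustrates only the rule-based classification described above, under the assumption that per-component distances between the two projection vectors are compared with the threshold θ_2; the names classify, omega_unknown and omega_legal are illustrative.

import numpy as np

def classify(omega_unknown: np.ndarray, omega_legal: np.ndarray, theta2: float) -> str:
    """Apply the three classification rules of Algorithm 4 to per-component distances.

    omega_unknown, omega_legal: feature-space projections of the decrypted image and the legal user.
    theta2: the legal user's image threshold.
    """
    eps = np.abs(omega_unknown - omega_legal)   # assumed per-component distances epsilon_i
    if np.all(eps >= theta2):
        return "not a face image"
    if np.all(eps < theta2):
        return "legal user's grayscale image"
    return "not a legal user's grayscale image"

if __name__ == "__main__":
    legal = np.array([10.0, -3.0, 7.5])
    print(classify(np.array([10.2, -2.9, 7.4]), legal, theta2=1.0))    # close to the legal user
    print(classify(np.array([50.0, 40.0, -20.0]), legal, theta2=1.0))  # far from the legal user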
In summary, the secure transmission and recognition method for intelligent face image sensing in the Internet of Things has the following advantages and effects:
1. Adopts a new acquisition and feature construction algorithm for legal users' face images
This construction algorithm fully considers the security of image information during transmission in the Internet of Things. It first judges, according to the advantages and disadvantages of the static-password or dynamic-password identity authentication schemes, whether the face image of a legal user needs to be acquired. Each pixel block of the acquired legal user's face image is then converted to grayscale so that its true values are hidden, and at the same time the legal user's identity information and database are built. On this basis, a sample set and a feature face space are constructed from the face grayscale images, and the difference vector between each legal user's acquired face grayscale image and the average face is projected onto the feature face space. With this construction algorithm, the authenticity of the legal user's image can be protected and hidden, and a feature face space of legal users is built, which facilitates the secure recognition of the face images of users with unknown identity.
2. Adopts a new face image sensing and encryption algorithm for users with unknown identity
In this algorithm, the face image is first sensed by the intelligent face image sensor, and the sensed face image is then fused with a random image to obtain the encrypted image. On the one hand, the algorithm adopts a static encryption scheme, so the encryption process is simple, there is no complex key calculation or communication, and it is easy to implement. On the other hand, the security of the face image during transmission is guaranteed. In this algorithm, the face image is mapped through the encryption calculation to a chaotic sequence of the same size. This chaotic sequence provides a sufficiently large key space for the encryption, changes traditional encryption modes and methods, and gives the scheme leading-edge novelty and creativity.
3. Adopts a new face image receiving and decryption algorithm for users with unknown identity
In this algorithm, the encrypted image is iteratively computed according to the chaotic sequence's sensitivity to the initial value and its synchronization property, so that the pixel positions of the encrypted image are permuted back and restored and the encrypted image is decrypted. This new receiving and decryption algorithm for the face images of users with unknown identity improves the robustness of the system, so that the whole encryption system offers a good user experience.
4. Adopts a new secure recognition algorithm for the face images of users with unknown identity
In this algorithm, the received and decrypted face image of the unknown-identity user is first converted to grayscale, and the difference between the face image P*(U_n) of the unknown-identity user to be recognized and the average face is projected onto the feature space to construct the feature vector of the image. On this basis, a threshold is defined, and the Euclidean norm is used to calculate the Euclidean distance between the feature space projection of the decrypted face image and that of the legal users' face images. A classification rule for face recognition is built from this distance, so that the face image and identity of the unknown user are securely recognized and security support is provided for the other application services of the system.
5. Good robustness
The secure transmission and recognition method for intelligent face image sensing combines the advantages of the static-password identity authentication scheme on the one hand, and on the other hand uses grayscale extraction of image pixel blocks and image fusion in the multi-layer encryption of the face image, so that the face image is mapped to a chaotic sequence of the same size. This chaotic sequence provides a sufficiently large key space for the encryption, changes traditional encryption modes and methods, and gives the scheme leading-edge novelty, creativity and good robustness.
6. Good security protection
The secure transmission and recognition method for intelligent face image sensing in fact uses a new face image encryption algorithm. Its main idea is to generate an image position mapping matrix from a chaotic sequence derived from the face image, and then to scramble the pixel positions of the face image with a dedicated algorithm so as to hide its true values. The new encryption algorithm can effectively resist image statistical attacks and exhaustive (brute-force) attacks, and offers good security protection.
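The position-scrambling idea summarized in this paragraph can be illustrated with a short sketch. The patent does not give the mapping-matrix construction in text form here, so the snippet below assumes one common choice: sorting the logistic-map chaotic sequence to obtain a block permutation. The names logistic_sequence, scramble_blocks and unscramble_blocks, and the parameters a0 and beta, are illustrative.

import numpy as np

def logistic_sequence(a0: float, beta: float, length: int, warmup: int = 15) -> np.ndarray:
    """Iterate the logistic map a_{i+1} = beta * a_i * (1 - a_i) and return `length` values."""
    a = a0
    seq = np.empty(length)
    for _ in range(warmup):                  # discard transient iterations
        a = beta * a * (1.0 - a)
    for k in range(length):
        a = beta * a * (1.0 - a)
        seq[k] = a
    return seq

def scramble_blocks(img: np.ndarray, a0: float, beta: float):
    """Scramble the rows (blocks) of a flattened block matrix using a chaotic permutation."""
    chaos = logistic_sequence(a0, beta, img.shape[0])
    perm = np.argsort(chaos)                 # position mapping derived from the chaotic sequence
    return img[perm], perm

def unscramble_blocks(scrambled: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """Invert the permutation; the receiver regenerates `perm` from the shared (a0, beta)."""
    restored = np.empty_like(scrambled)
    restored[perm] = scrambled
    return restored

if __name__ == "__main__":
    blocks = np.arange(12, dtype=float).reshape(12, 1)    # toy stand-in for M*N image blocks
    enc, perm = scramble_blocks(blocks, a0=0.37, beta=3.91)
    assert np.allclose(unscramble_blocks(enc, perm), blocks)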
Drawings
Fig. 1 is an architecture diagram of the secure transmission and recognition method for intelligent sensing of face images.
Detailed Description
The invention will be further explained with reference to the drawings.
From the perspective of the secure transmission and recognition method for intelligent sensing of face images, the method comprises: (1) an acquisition and feature construction algorithm for the face images of legal users; (2) a face image sensing and encryption algorithm for users with unknown identity; (3) a face image receiving and decryption algorithm for users with unknown identity; and (4) a secure recognition algorithm for the face images of users with unknown identity.
Algorithm 1: acquisition and feature construction algorithm for the face images of legal users
The first step: the back-end face image collector judges, according to the command of the back-end server, whether the face image of a legal user needs to be acquired. If yes, go to the second step; if not, go to the twenty-second step;
The second step: the back-end face image collector acquires the face image P(U) of a legal user U and randomly selects two relatively large integers M and N;
The third step: the back-end face image collector divides the face image P(U) into M×N blocks of equal size and stores the pixel RGB values, i.e. the (R, G, B) values, of each block of P(U);
The fourth step: the back-end face image collector transmits the acquired face image P(U) and the pixel RGB values of each of its blocks to the back-end server;
The fifth step: the back-end server receives the face image P(U) and the pixel RGB values of each block of P(U) sent by the back-end face image collector, and then judges whether the pixel RGB values of each block of the legal user's face image P(U) need to be converted to grayscale. If yes, go to the sixth step; otherwise, go to the eighth step;
The sixth step: the back-end server computes the gray value of the block with the grayscale formula Gray = R * 0.3 + G * 0.59 + B * 0.11;
The seventh step: the back-end server judges whether the pixel RGB values of every block of the legal user U's face image P(U) have been converted. If so, go to the eighth step; otherwise, move to the next block whose RGB values have not yet been converted and go to the sixth step;
The eighth step: after every pixel block of the legal user U's face image P(U) has been converted to grayscale, the back-end server records the grayscale image of each block of P(U) as P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, and inputs the identity information ID(U) of the legal user U;
The ninth step: the back-end server saves the grayscale images P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, of the blocks of the legal user U's face image P(U), together with the identity information ID(U), to a database;
The tenth step: according to the grayscale images P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, of the blocks of the legal user U's face image P(U) and the identity information ID(U) stored in the database, the back-end server constructs a face grayscale image sample set of the legal user U; the sample set consists of the grayscale images of all blocks of the legal user U;
The eleventh step: according to the face grayscale image sample set of the legal user U, the back-end server constructs the sample matrix of the legal user U's face grayscale image: X = [X_1(U), X_2(U), X_3(U), …, X_i(U), …, X_M(U)]^T, where the vector X_i(U) is the grayscale image vector of all blocks in the i-th row after the face image of the legal user U is divided into M rows and N columns of blocks, i.e. X_i(U) = [P_{i,1}(U), P_{i,2}(U), P_{i,3}(U), …, P_{i,N}(U)];
The twelfth step: the back-end server calculates the average face Ψ_i of the face image blocks corresponding to X_i(U) of the legal user U by the following formula:
[Equation (1) is reproduced only as an image in the original and gives the formula for the average face Ψ_i.]
The thirteenth step: the back-end server calculates the grayscale difference between the face X_i(U) of the legal user U and the average face by the following formula:
d_i = X_i(U) - Ψ_i, i = 1, 2, …, M  (2)
The fourteenth step: the back-end server constructs the covariance matrix of the face grayscale image:
A = (d_1, d_2, …, d_M)  (3)
[Equation (4) is reproduced only as an image in the original and gives the covariance matrix constructed from A.]
The fifteenth step: the back-end server calculates the eigenvalues and eigenvectors of A^T·A by singular value decomposition;
The sixteenth step: from the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server determines the eigenvalues and eigenvectors of A·A^T;
The seventeenth step: from the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server orthonormalizes the eigenvector corresponding to each eigenvalue λ_i of A^T·A to obtain the orthonormal eigenvector V_i;
The eighteenth step: the first K largest eigenvalues and their corresponding eigenvectors are selected according to the contribution rate of the eigenvalues, where the contribution rate is the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues. Assume the contribution rate of the eigenvalues is
η = (λ_1 + λ_2 + … + λ_K) / (λ_1 + λ_2 + … + λ_M);
then K is chosen such that
η ≥ b,  (5)
where b is a constant determined by the system;
The nineteenth step: the back-end server sets b = 99%, that is, it ensures that the orthogonal projection onto the eigenvectors corresponding to the first K largest eigenvalues of the grayscale image samples accounts for 99% of the whole set of orthonormal eigenvectors of A^T·A, and obtains the eigenvectors of the original covariance matrix that satisfy this condition; they are calculated as follows:
[Equation (6) is reproduced only as an image in the original and gives the eigenvectors of the original covariance matrix computed from A and V_i.]
The twentieth step: the feature face space of the covariance matrix A·A^T of the face grayscale image satisfying a contribution rate greater than 99% is constructed; the result is:
[Equation (7) is reproduced only as an image in the original and gives the resulting feature face space w.]
The twenty-first step: the back-end server stores w and secretly transmits the corresponding integers M and N to the front-end intelligent face sensor;
The twenty-second step: end.
Algorithm 2: face image sensing and encryption algorithm for users with unknown identity
The first step: the front-end intelligent face sensor randomly and intelligently senses the face image P_1 of a user u_Θ with unknown identity and another non-face image P_2;
The second step: the front-end intelligent face sensor randomly selects a number in (0, 1) and records it as a_0, randomly selects a number in [3.57, 4] and records it as β, and sets the initial iteration count t = 0;
The third step: the front-end intelligent face sensor randomly selects an image according to the encryption requirement and records it as P_r;
The fourth step: the front-end intelligent face sensor receives the integers M and N transmitted by the back-end server and divides each of P_1, P_2 and P_r into M×N blocks of equal size;
The fifth step: starting from a_0, β and t = 0, the front-end intelligent face sensor calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, …, M, by the following formula:
a_{i+1} = β * a_i * (1 - a_i)  (8)
The sixth step: the front-end intelligent face sensor judges whether t is greater than 15; if yes, go to the fifteenth step, and if not, go to the seventh step;
The seventh step: i = 1;
The eighth step: the front-end intelligent face sensor judges whether i is greater than M; if yes, go to the fourteenth step, and if not, go to the ninth step;
The ninth step: j = 1;
The tenth step: the front-end intelligent face sensor judges whether j is greater than N; if yes, go to the thirteenth step, and if not, go to the eleventh step;
The eleventh step: the front-end intelligent face sensor performs the calculation by the following formula:
[Equation (9) is reproduced only as an image in the original and gives the block-wise encryption calculation at coordinates (i, j).]
The twelfth step: j = j + 1, go to the tenth step;
The thirteenth step: i = i + 1, go to the eighth step;
The fourteenth step: t = t + 1, go to the sixth step;
The fifteenth step: define the encrypted face image of the unknown-identity user u_Θ as E, where E(i, j) is the gray value of the encrypted face image at image-coordinate block (i, j), P_1(i, j) is the gray value of the user image P_1 at block (i, j), and P_2(i, j) is the gray value of the other non-face image P_2 at block (i, j);
The sixteenth step: the front-end intelligent face sensor secretly transmits the information ξ = {a_0 || β || P_r} to the back-end server;
The seventeenth step: the front-end intelligent face sensor transmits the encrypted face image E of the unknown-identity user u_Θ to the back-end server;
The eighteenth step: end.
Algorithm 3: face image receiving and decryption algorithm for users with unknown identity
The first step: the back-end server receives the information ξ = {a_0 || β || P_r} sent by the front-end intelligent face sensor;
The second step: the back-end server receives the encrypted face image E of the unknown-identity user u_Θ sent by the front-end intelligent face sensor;
The third step: according to ξ = {a_0 || β || P_r} sent by the front-end intelligent face sensor and the encrypted face image E, the back-end server calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, …, M, by the following formula:
a_{i+1} = β * a_i * (1 - a_i)  (10)
The fourth step: the back-end server judges whether t is greater than 15; if yes, go to the thirteenth step, and if not, go to the fifth step;
The fifth step: i = 1;
The sixth step: the back-end server judges whether i is greater than M; if yes, go to the twelfth step, and if not, go to the seventh step;
The seventh step: j = 1;
The eighth step: the back-end server judges whether j is greater than N; if yes, go to the eleventh step, and if not, go to the ninth step;
The ninth step: the back-end server performs the calculation by the following formula:
[Equation (11) is reproduced only as an image in the original and gives the first block-wise decryption calculation at coordinates (i, j).]
The tenth step: j = j + 1, go to the eighth step;
The eleventh step: i = i + 1, go to the sixth step;
The twelfth step: t = t + 1, go to the fourth step;
The thirteenth step: the back-end server sets i = 1;
The fourteenth step: the back-end server judges whether i is greater than M; if yes, go to the twentieth step, and if not, go to the fifteenth step;
The fifteenth step: j = 1;
The sixteenth step: the back-end server judges whether j is greater than N; if yes, go to the nineteenth step, and if not, go to the seventeenth step;
The seventeenth step: the back-end server performs the calculation by the following formula:
[Equation (12) is reproduced only as an image in the original and gives the second block-wise decryption calculation at coordinates (i, j).]
The eighteenth step: j = j + 1, go to the sixteenth step;
The nineteenth step: i = i + 1, go to the fourteenth step;
The twentieth step: the back-end server stores the decrypted face images as CP_1 and CP_2, where CP_1(i, j) is the gray value of the decrypted face image CP_1 at image-coordinate block (i, j) and CP_2(i, j) is the gray value of the other decrypted non-face image CP_2 at block (i, j);
The twenty-first step: end.
Algorithm 4: secure recognition algorithm for the face images of users with unknown identity
The first step: from the decrypted images, the back-end server selects, as required, the decrypted face image CP_1 of the unknown-identity user to be recognized (or the other non-face image CP_2);
The second step: the back-end server takes the selected face image CP_1 of the unknown-identity user to be recognized (or the other non-face image CP_2), divides the image to be recognized into M×N blocks of equal size and stores the pixel RGB values, i.e. the (R, G, B) values, of each block of CP_1;
The third step: following the seventh to the twentieth step of Algorithm 1, the back-end server computes, for the face image CP_1 of the unknown-identity user (or the other non-face image CP_2), the feature face space of the grayscale image under the condition that the contribution rate exceeds 99% (denote the result by w_Θ), the eigenvectors A_Θ·V_Θ corresponding to the eigenvalues (here V_Θ = [V_1^Θ, V_2^Θ, …, V_M^Θ]), and the difference A_Θ·V_Θ - Ψ_Θ between those eigenvectors and the corresponding average vector (here Ψ_Θ = [Ψ_1^Θ, Ψ_2^Θ, …, Ψ_M^Θ]);
The fourth step: the back-end server calculates the projection of the face image CP_1 of the unknown-identity user (or the other non-face image CP_2) onto the feature face space satisfying a contribution rate greater than 99%, by the following formula:
Ω_Θ = w_Θ^T (A_Θ·V_Θ - Ψ_Θ)  (13)
The fifth step: according to the w and the A·V_i and Ψ_i calculated in Algorithm 1, the back-end server calculates the projection of the legal user U's pixel values onto the feature face space satisfying a contribution rate greater than 99%, by the following formula:
Ω_P(U) = w^T (A·V - Ψ)  (14)
The sixth step: the back-end server calculates the image threshold θ_2 of the legal user, namely:
[Equation (15) is reproduced only as an image in the original and gives the formula for the legal user's image threshold θ_2.]
The seventh step: the back-end server uses the Euclidean distance to calculate the distance ε_i between Ω_P(U) and the projection of the decrypted image, namely:
[Equation (16) is reproduced only as an image in the original and gives the Euclidean distance ε_i.]
The eighth step: the back-end server recognizes and classifies the face according to the following rules:
1) if ε_i ≥ θ_2 for all i (i = 1, 2, …, M), the image to be recognized is not a face image;
2) if ε_i < θ_2 for all i (i = 1, 2, …, M), the image to be recognized is a legal user's grayscale image;
3) if there exist some i (1 ≤ i ≤ M) such that ε_i ≥ θ_2 and also some i (1 ≤ i ≤ M) such that ε_i < θ_2, the image to be recognized is not a legal user's grayscale image;
The ninth step: the back-end server stores and displays the face recognition result;
The tenth step: end.

Claims (5)

1. A secure transmission and recognition method for intelligent sensing of face images, characterized by comprising the following steps: first, constructing an acquisition and feature construction algorithm for the face images of legal users at the server side; second, constructing a face image sensing and encryption algorithm for users with unknown identity at the image acquisition front end of the intelligent sensor; third, constructing a face image receiving and decryption algorithm for users with unknown identity at the server side; and finally, constructing a secure recognition algorithm for the face images of users with unknown identity at the server side.
2. The secure transmission and recognition method for intelligent sensing of face images as claimed in claim 1, wherein the algorithm for acquiring the face images of legal users and constructing their features comprises the following steps:
The first step: the back-end face image collector judges, according to the command of the back-end server, whether the face image of a legal user needs to be acquired. If yes, go to the second step; if not, go to the twenty-second step;
The second step: the back-end face image collector acquires the face image P(U) of a legal user U and randomly selects two relatively large integers M and N;
The third step: the back-end face image collector divides the face image P(U) into M×N blocks of equal size and stores the pixel RGB values, i.e. the (R, G, B) values, of each block of P(U);
The fourth step: the back-end face image collector transmits the acquired face image P(U) and the pixel RGB values of each of its blocks to the back-end server;
The fifth step: the back-end server receives the face image P(U) and the pixel RGB values of each block of P(U) sent by the back-end face image collector, and then judges whether the pixel RGB values of each block of the legal user's face image P(U) need to be converted to grayscale. If yes, go to the sixth step; otherwise, go to the eighth step;
The sixth step: the back-end server computes the gray value of the block with the grayscale formula Gray = R * 0.3 + G * 0.59 + B * 0.11;
The seventh step: the back-end server judges whether the pixel RGB values of every block of the legal user U's face image P(U) have been converted. If so, go to the eighth step; otherwise, move to the next block whose RGB values have not yet been converted and go to the sixth step;
The eighth step: after every pixel block of the legal user U's face image P(U) has been converted to grayscale, the back-end server records the grayscale image of each block of P(U) as P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, and inputs the identity information ID(U) of the legal user U;
The ninth step: the back-end server saves the grayscale images P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, of the blocks of the legal user U's face image P(U), together with the identity information ID(U), to a database;
The tenth step: according to the grayscale images P_{i,j}(U), i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, of the blocks of the legal user U's face image P(U) and the identity information ID(U) stored in the database, the back-end server constructs a face grayscale image sample set of the legal user U; the sample set consists of the grayscale images of all blocks of the legal user U;
The eleventh step: according to the face grayscale image sample set of the legal user U, the back-end server constructs the sample matrix of the legal user U's face grayscale image: X = [X_1(U), X_2(U), X_3(U), …, X_i(U), …, X_M(U)]^T, where the vector X_i(U) is the grayscale image vector of all blocks in the i-th row after the face image of the legal user U is divided into M rows and N columns of blocks, i.e. X_i(U) = [P_{i,1}(U), P_{i,2}(U), P_{i,3}(U), …, P_{i,N}(U)];
The twelfth step: the back-end server calculates the average face Ψ_i of the face image blocks corresponding to X_i(U) of the legal user U by the following formula:
[Equation (1) is reproduced only as an image in the original and gives the formula for the average face Ψ_i.]
The thirteenth step: the back-end server calculates the grayscale difference between the face X_i(U) of the legal user U and the average face by the following formula:
d_i = X_i(U) - Ψ_i, i = 1, 2, …, M  (2)
The fourteenth step: the back-end server constructs the covariance matrix of the face grayscale image:
A = (d_1, d_2, …, d_M)  (3)
[Equation (4) is reproduced only as an image in the original and gives the covariance matrix constructed from A.]
The fifteenth step: the back-end server calculates the eigenvalues and eigenvectors of A^T·A by singular value decomposition;
The sixteenth step: from the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server determines the eigenvalues and eigenvectors of A·A^T;
The seventeenth step: from the eigenvalues and eigenvectors obtained in the fifteenth step, the back-end server orthonormalizes the eigenvector corresponding to each eigenvalue λ_i of A^T·A to obtain the orthonormal eigenvector V_i;
The eighteenth step: the first K largest eigenvalues and their corresponding eigenvectors are selected according to the contribution rate of the eigenvalues, where the contribution rate is the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues. Assume the contribution rate of the eigenvalues is
η = (λ_1 + λ_2 + … + λ_K) / (λ_1 + λ_2 + … + λ_M);
then K is chosen such that
η ≥ b,  (5)
where b is a constant determined by the system;
The nineteenth step: the back-end server sets b = 99%, that is, it ensures that the orthogonal projection onto the eigenvectors corresponding to the first K largest eigenvalues of the grayscale image samples accounts for 99% of the whole set of orthonormal eigenvectors of A^T·A, and obtains the eigenvectors of the original covariance matrix that satisfy this condition; they are calculated as follows:
[Equation (6) is reproduced only as an image in the original and gives the eigenvectors of the original covariance matrix computed from A and V_i.]
The twentieth step: the feature face space of the covariance matrix A·A^T of the face grayscale image satisfying a contribution rate greater than 99% is constructed; the result is:
[Equation (7) is reproduced only as an image in the original and gives the resulting feature face space w.]
The twenty-first step: the back-end server stores w and secretly transmits the corresponding integers M and N to the front-end intelligent face sensor;
The twenty-second step: end.
3. The secure transmission and recognition method for intelligent sensing of face images as claimed in claim 2, wherein the algorithm for sensing and encrypting the face image of a user with unknown identity comprises the following steps:
The first step: the front-end intelligent face sensor randomly and intelligently senses the face image P_1 of a user u_Θ with unknown identity and another non-face image P_2;
The second step: the front-end intelligent face sensor randomly selects a number in (0, 1) and records it as a_0, randomly selects a number in [3.57, 4] and records it as β, and sets the initial iteration count t = 0;
The third step: the front-end intelligent face sensor randomly selects an image according to the encryption requirement and records it as P_r;
The fourth step: the front-end intelligent face sensor receives the integers M and N transmitted by the back-end server and divides each of P_1, P_2 and P_r into M×N blocks of equal size;
The fifth step: starting from a_0, β and t = 0, the front-end intelligent face sensor calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, …, M, by the following formula:
a_{i+1} = β * a_i * (1 - a_i)  (8)
The sixth step: the front-end intelligent face sensor judges whether t is greater than 15; if yes, go to the fifteenth step, and if not, go to the seventh step;
The seventh step: i = 1;
The eighth step: the front-end intelligent face sensor judges whether i is greater than M; if yes, go to the fourteenth step, and if not, go to the ninth step;
The ninth step: j = 1;
The tenth step: the front-end intelligent face sensor judges whether j is greater than N; if yes, go to the thirteenth step, and if not, go to the eleventh step;
The eleventh step: the front-end intelligent face sensor performs the calculation by the following formula:
[Equation (9) is reproduced only as an image in the original and gives the block-wise encryption calculation at coordinates (i, j).]
The twelfth step: j = j + 1, go to the tenth step;
The thirteenth step: i = i + 1, go to the eighth step;
The fourteenth step: t = t + 1, go to the sixth step;
The fifteenth step: define the encrypted face image of the unknown-identity user u_Θ as E, where E(i, j) is the gray value of the encrypted face image at image-coordinate block (i, j), P_1(i, j) is the gray value of the user image P_1 at block (i, j), and P_2(i, j) is the gray value of the other non-face image P_2 at block (i, j);
The sixteenth step: the front-end intelligent face sensor secretly transmits the information ξ = {a_0 || β || P_r} to the back-end server;
The seventeenth step: the front-end intelligent face sensor transmits the encrypted face image E of the unknown-identity user u_Θ to the back-end server;
The eighteenth step: end.
4. The secure transmission and recognition method for intelligent sensing of face images as claimed in claim 3, wherein the algorithm for receiving and decrypting the face image of a user with unknown identity comprises the following steps:
The first step: the back-end server receives the information ξ = {a_0 || β || P_r} sent by the front-end intelligent face sensor;
The second step: the back-end server receives the encrypted face image E of the unknown-identity user u_Θ sent by the front-end intelligent face sensor;
The third step: according to ξ = {a_0 || β || P_r} sent by the front-end intelligent face sensor and the encrypted face image E, the back-end server calculates the chaotic sequence {a_i}, where 0 < a_i < 1, i = 1, 2, …, M, by the following formula:
a_{i+1} = β * a_i * (1 - a_i)  (10)
The fourth step: the back-end server judges whether t is greater than 15; if yes, go to the thirteenth step, and if not, go to the fifth step;
The fifth step: i = 1;
The sixth step: the back-end server judges whether i is greater than M; if yes, go to the twelfth step, and if not, go to the seventh step;
The seventh step: j = 1;
The eighth step: the back-end server judges whether j is greater than N; if yes, go to the eleventh step, and if not, go to the ninth step;
The ninth step: the back-end server performs the calculation by the following formula:
[Equation (11) is reproduced only as an image in the original and gives the first block-wise decryption calculation at coordinates (i, j).]
The tenth step: j = j + 1, go to the eighth step;
The eleventh step: i = i + 1, go to the sixth step;
The twelfth step: t = t + 1, go to the fourth step;
The thirteenth step: the back-end server sets i = 1;
The fourteenth step: the back-end server judges whether i is greater than M; if yes, go to the twentieth step, and if not, go to the fifteenth step;
The fifteenth step: j = 1;
The sixteenth step: the back-end server judges whether j is greater than N; if yes, go to the nineteenth step, and if not, go to the seventeenth step;
The seventeenth step: the back-end server performs the calculation by the following formula:
[Equation (12) is reproduced only as an image in the original and gives the second block-wise decryption calculation at coordinates (i, j).]
The eighteenth step: j = j + 1, go to the sixteenth step;
The nineteenth step: i = i + 1, go to the fourteenth step;
The twentieth step: the back-end server stores the decrypted face images as CP_1 and CP_2, where CP_1(i, j) is the gray value of the decrypted face image CP_1 at image-coordinate block (i, j) and CP_2(i, j) is the gray value of the other decrypted non-face image CP_2 at block (i, j);
The twenty-first step: end.
5. The secure transmission and recognition method for intelligent sensing of face images as claimed in claim 4, wherein the secure recognition algorithm for the face image of a user with unknown identity comprises the following steps:
The first step: from the decrypted images, the back-end server selects, as required, the decrypted face image CP_1 of the unknown-identity user to be recognized (or the other non-face image CP_2);
The second step: the back-end server takes the selected face image CP_1 of the unknown-identity user to be recognized (or the other non-face image CP_2), divides the image to be recognized into M×N blocks of equal size and stores the pixel RGB values, i.e. the (R, G, B) values, of each block of CP_1;
The third step: following the seventh to the twentieth step of Algorithm 1, the back-end server computes, for the face image CP_1 of the unknown-identity user (or the other non-face image CP_2), the feature face space of the grayscale image under the condition that the contribution rate exceeds 99% (denote the result by w_Θ), the eigenvectors A_Θ·V_Θ corresponding to the eigenvalues (here V_Θ = [V_1^Θ, V_2^Θ, …, V_M^Θ]), and the difference A_Θ·V_Θ - Ψ_Θ between those eigenvectors and the corresponding average vector (here Ψ_Θ = [Ψ_1^Θ, Ψ_2^Θ, …, Ψ_M^Θ]);
The fourth step: the back-end server calculates the projection of the face image CP_1 of the unknown-identity user (or the other non-face image CP_2) onto the feature face space satisfying a contribution rate greater than 99%, by the following formula:
Ω_Θ = w_Θ^T (A_Θ·V_Θ - Ψ_Θ)  (13)
The fifth step: according to the w and the A·V_i and Ψ_i calculated in Algorithm 1, the back-end server calculates the projection of the legal user U's pixel values onto the feature face space satisfying a contribution rate greater than 99%, by the following formula:
Ω_P(U) = w^T (A·V - Ψ)  (14)
The sixth step: the back-end server calculates the image threshold θ_2 of the legal user, namely:
[Equation (15) is reproduced only as an image in the original and gives the formula for the legal user's image threshold θ_2.]
The seventh step: the back-end server uses the Euclidean distance to calculate the distance ε_i between Ω_P(U) and the projection of the decrypted image, namely:
[Equation (16) is reproduced only as an image in the original and gives the Euclidean distance ε_i.]
The eighth step: the back-end server recognizes and classifies the face according to the following rules:
1) if ε_i ≥ θ_2 for all i (i = 1, 2, …, M), the image to be recognized is not a face image;
2) if ε_i < θ_2 for all i (i = 1, 2, …, M), the image to be recognized is a legal user's grayscale image;
3) if there exist some i (1 ≤ i ≤ M) such that ε_i ≥ θ_2 and also some i (1 ≤ i ≤ M) such that ε_i < θ_2, the image to be recognized is not a legal user's grayscale image;
The ninth step: the back-end server stores and displays the face recognition result;
The tenth step: end.
CN201911400109.3A 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images Active CN111144352B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911400109.3A CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911400109.3A CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Publications (2)

Publication Number Publication Date
CN111144352A true CN111144352A (en) 2020-05-12
CN111144352B CN111144352B (en) 2023-05-05

Family

ID=70522151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911400109.3A Active CN111144352B (en) 2019-12-30 2019-12-30 Intelligent sensing-oriented safe transmission and identification method for face images

Country Status (1)

Country Link
CN (1) CN111144352B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080219515A1 (en) * 2007-03-09 2008-09-11 Jiris Usa, Inc. Iris recognition system, a method thereof, and an encryption system using the same
CN103886235A (en) * 2014-03-03 2014-06-25 杭州电子科技大学 Face image biological key generating method
CN107862282A (en) * 2017-11-07 2018-03-30 深圳市金城保密技术有限公司 A kind of finger vena identification and safety certifying method and its terminal and system
CN110336776A (en) * 2019-04-28 2019-10-15 杭州电子科技大学 A kind of multi-point cooperative Verification System and method based on user images intelligent acquisition
CN110458091A (en) * 2019-08-08 2019-11-15 北京阿拉丁智慧科技有限公司 Recognition of face 1 based on position screening is than N algorithm optimization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
盛家伦; 姚智童; 王云涛; 付绍静: "Design and implementation of a WLAN secure communication system based on face recognition" *
符艳军; 程咏梅; 董淑福; 王晓东: "A network identity authentication system combining face features and cryptographic techniques" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152083A (en) * 2023-08-31 2023-12-01 哈尔滨工业大学 Ground penetrating radar road disease image prediction visualization method based on category activation mapping
CN117152083B (en) * 2023-08-31 2024-04-09 哈尔滨工业大学 Ground penetrating radar road disease image prediction visualization method based on category activation mapping

Also Published As

Publication number Publication date
CN111144352B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
Leng et al. Dual-key-binding cancelable palmprint cryptosystem for palmprint protection and information security
Panchal et al. A novel approach to fingerprint biometric-based cryptographic key generation and its applications to storage security
Revenkar et al. Secure iris authentication using visual cryptography
Guo et al. Towards efficient privacy-preserving face recognition in the cloud
CN104823203A (en) Biometric template security and key generation
Bhatnagar et al. Biometric inspired multimedia encryption based on dual parameter fractional fourier transform
Arunachalam et al. AES Based Multimodal Biometric Authentication using Cryptographic Level Fusion with Fingerprint and Finger Knuckle Print.
Jacob et al. Biometric template security using DNA codec based transformation
Lin et al. A face-recognition approach based on secret sharing for user authentication in public-transportation security
Helmy et al. A hybrid encryption framework based on Rubik’s cube for cancelable biometric cyber security applications
Selvaraju et al. A method to improve the security level of ATM banking systems using AES algorithm
Baghel et al. Generation of secure fingerprint template using dft for consumer electronics devices
CN111144352B (en) Intelligent sensing-oriented safe transmission and identification method for face images
Shanthini et al. Multimodal biometric-based secured authentication system using steganography
Evangelin et al. Securing recognized multimodal biometric images using cryptographic model
Helmy et al. A novel cancellable biometric recognition system based on Rubik’s cube technique for cyber-security applications
Selimović et al. Authentication based on the image encryption using delaunay triangulation and catalan objects
CN114065169B (en) Privacy protection biometric authentication method and device and electronic equipment
Lalithamani et al. Dual encryption algorithm to improve security in hand vein and palm vein-based biometric recognition
Alghamdi et al. A secure iris image encryption technique using bio-chaotic algorithm
Marimuthu et al. Dual fingerprints fusion for cryptographic key generation
Eid et al. A secure multimodal authentication system based on chaos cryptography and fuzzy fusion of iris and face
Li et al. A High Performance and Secure Palmprint Template Protection Scheme.
Barkathunisha et al. Secure transmission of medical information using IRIS recognition and steganography
Ghazali et al. Security performance evaluation of biometric lightweight encryption for fingerprint template protection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant