CN108446687B - Self-adaptive face vision authentication method based on interconnection of mobile terminal and background - Google Patents

Self-adaptive face vision authentication method based on interconnection of mobile terminal and background

Info

Publication number
CN108446687B
Authority
CN
China
Prior art keywords
module
face
output
influence
judgment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810523005.0A
Other languages
Chinese (zh)
Other versions
CN108446687A (en)
Inventor
叶腾琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weisi e-commerce (Shenzhen) Co.,Ltd.
Original Assignee
Weisi E Commerce Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weisi E Commerce Shenzhen Co ltd filed Critical Weisi E Commerce Shenzhen Co ltd
Priority to CN201810523005.0A priority Critical patent/CN108446687B/en
Publication of CN108446687A publication Critical patent/CN108446687A/en
Application granted granted Critical
Publication of CN108446687B publication Critical patent/CN108446687B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face visual authentication adaptive system comprising a mobile terminal and a background in communication connection with each other. The background comprises a receiving module, a model module, a pre-influence module and a judgment output module. The input end of the receiving module is connected with the mobile terminal and its output end with the input end of the model module; the model module is in two-way communication connection with the pre-influence module; the output end of the pre-influence module is also connected with the judgment output module, whose output end is connected back to the mobile terminal. A model unit is arranged in the background's model module, the background is built on a deep convolutional network, and the model module is trained in advance on existing data, so that the feature vector of a face image can be extracted automatically. The background is also provided with a pre-influence module that continuously corrects the model module, so the system automatically adapts to new face images subsequently added to it; the accuracy and precision of the system thus improve continuously, raising overall performance.

Description

Self-adaptive face vision authentication method based on interconnection of mobile terminal and background
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a self-adaptive face vision authentication method based on interconnection of a mobile terminal and a background.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. Commonly also called portrait recognition or facial recognition, it covers a series of related technologies that capture an image or video stream containing a face with a camera or video camera, automatically detect and track the face in the image, and then recognize the detected face. With its continuous development, face recognition technology has by now matured to the point where a system compares an input face image with templates in a known library to determine identity.
For example, the application with patent application No. 201611227125.3 discloses a face recognition method for identifying counterfeit-photo fraud, belonging to the field of digital image processing and pattern recognition. By analysing the imaging differences between real and fake face images, the method uses image colour distribution, reflection ratio and blurriness features to detect counterfeit-photo spoofing. First, the colour image is converted into HSV colour space and colour distribution features are extracted; the image is also converted into YUV colour space to extract specular reflection features; and blurriness features are extracted with a grey-level co-occurrence matrix. The colour distribution, specular reflection and blurriness features are then combined as discrimination information for real and fake face images and classified with a support vector machine algorithm. The method can be integrated as an independent module into existing face recognition algorithms, improving the safety and reliability of a face recognition system. However, that invention cannot perform liveness identification of the input face: if a user presents a recording or a photograph, the system reads the face in digital form, which is indistinguishable from a real live face, so the user can bypass system authentication with a recording. This brings a great potential safety hazard to the whole system.
As another example, the application with patent application No. 201621174101.1 discloses a ticket-grabbing authentication system comprising: a face detection module for detecting face position information; a liveness verification module for verifying whether the user's face is a live body; a face recognizer, connected with a ticket booking system through a network, for recognizing the user's identity from the face position information; and information processing equipment, connected respectively with the liveness verification module, the face recognizer and the ticket booking system, for permitting login to the booking system when the user identity is qualified and the user's face is a live person. When logging in, the system not only identifies the user from face position information but also detects whether the face is live; only when both pass can the user connect to the booking system and buy tickets. This spares the user from entering complex verification codes, improves the experience of online ticket purchasing, and blocks automatic ticket-grabbing software. However, that invention targets a ticket-grabbing scenario: the ticketing department must install dedicated terminals, and an ordinary mobile phone cannot use the system.
The application with patent application No. 201511024101.3 discloses a method and apparatus for matching face images. The method comprises: receiving a face image sequence from front-end equipment and storing it in a dynamic face comparison library, the sequence comprising one or more face images; obtaining, for an image to be detected, a first face feature corresponding to that image; determining, among the second face features corresponding to each face image in the dynamic comparison library, the second face features matching the first face feature; and selecting the face image corresponding to those second face features as the match for the image to be detected. This scheme improves the accuracy of the similarity calculation and hence of face comparison; comparison can be performed against faces of temporary interest; and the user need not confirm matches manually, reducing manual intervention and improving efficiency. However, the algorithm cannot dynamically change its similarity matching: the feature representation and the similarity matching algorithm are trained in advance on a fixed training sample. Because the algorithm model is fixed, once the sample set is expanded, the distribution underlying the pre-trained feature formula and similarity matching expression changes and no longer fits.
The application with patent application No. 201310746593.1 discloses a face recognition method comprising: acquiring a face image; measuring the illuminance on the face; preprocessing the face image according to the illuminance; extracting face features from the preprocessed image; comparing the extracted features with all face templates to decide whether recognition passes; when recognition against all templates fails, reading identity information specified by the person being recognized; obtaining the designated face template from that identity information; and comparing the extracted features with the designated template to decide whether recognition passes. The method reduces false acceptance and false rejection, improves the recognition success rate, avoids human intervention and improves recognition efficiency.
Disclosure of Invention
In order to solve the above problems, the present invention aims to provide a self-adaptive face visual authentication method based on interconnection between a mobile terminal and a background, which runs on the mobile terminal, detects live bodies and adapts itself over time.
The invention further aims to make the face recognition offered by the system flexible, accurate and convenient to use.
In order to achieve the above aims, the technical scheme of the invention is as follows:
The invention provides a face visual authentication adaptive system comprising a mobile terminal and a background in communication connection with each other. The mobile terminal comprises an image acquisition module, an image detection module and a judgment uploading module; the output end of the image acquisition module is connected with the input end of the image detection module, the output end of the image detection module is connected with the input end of the judgment uploading module, and the output end of the judgment uploading module is in communication connection with the background.
The background comprises a receiving module, a model module, a pre-influence module and a judgment output module. The input end of the receiving module is connected with the mobile terminal and its output end with the input end of the model module; the model module is in two-way communication connection with the pre-influence module; the output end of the pre-influence module is also connected with the judgment output module, and the output end of the judgment output module is connected with the mobile terminal.
In the system, the mobile terminal can be any of various mobile devices such as a smartphone. The face recognition system provided by the invention can be embedded directly into a user's existing APP without developing a separate mobile phone APP, which is convenient and fast. The mobile terminal comprises an image acquisition module, an image detection module and a judgment uploading module. When a user uses the system, the image acquisition module in the mobile terminal collects the user's face image, which is passed to the image detection module for quality checking; the image detection module first performs conventional checks on the face image, covering standard image quality parameters such as brightness, sharpness and colour saturation. After the quality check, the judgment uploading module performs human-computer interaction liveness detection on the user: the mobile terminal randomly issues simple instructions (such as turning the head, blinking or opening the mouth), and if the user performs the corresponding actions as instructed, the mobile terminal judges the user to be a live body. After this judgment, the mobile terminal uploads the selected key-frame image to the background for the next stage. Providing the mobile terminal lets users run the system conveniently on their own devices and makes face image acquisition easy.
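For illustration only, the mobile-terminal flow above can be sketched in Python. This is a minimal sketch, assuming OpenCV for the conventional quality checks; the instruction set, the thresholds and the send_instruction/observe_action helpers are hypothetical, not part of the disclosed implementation.

    import random
    import cv2

    LIVENESS_INSTRUCTIONS = ["turn_head", "blink", "open_mouth"]  # illustrative action set

    def image_quality_ok(frame, min_brightness=60.0, min_sharpness=100.0):
        # Conventional checks: mean brightness, plus sharpness as variance of Laplacian.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = gray.mean()
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        return brightness >= min_brightness and sharpness >= min_sharpness

    def interactive_liveness_check(send_instruction, observe_action, rounds=3):
        # Randomly issue simple instructions; pass only if the user performs each one.
        for _ in range(rounds):
            instruction = random.choice(LIVENESS_INSTRUCTIONS)
            send_instruction(instruction)        # e.g. display "please blink" on screen
            if observe_action() != instruction:  # hypothetical action classifier
                return False
        return True

Only a frame that passes both checks would be selected as the key frame and uploaded to the background.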
The system also comprises a background built on a deep convolutional network. A receiving module in the background receives the face image uploaded by the mobile terminal and passes it to a model module for processing. The model module is in bidirectional communication connection with a pre-influence module, and together they extract the feature vector of the face image and continuously correct the model. The background thereby recognizes the uploaded face image, performs the computation and extracts parameters, while at the same time automatically correcting the model according to the input face image data: the way face feature vectors are extracted is revised so that the system automatically adapts to subsequently added data and its accuracy keeps improving.
The model module of the background comprises a model unit trained in advance on existing data, which has learned a method for extracting face feature vectors; the output end of the model unit is connected with the pre-influence module and the judgment output module respectively. Every face is considered to have scale-invariant features, meaning the facial features do not change greatly with the size of the picture or with the distance and angle offsets of the face relative to the lens; by training on existing data in advance, the model unit learns to extract an initial feature vector from a face image. For each face image uploaded to the background, the model unit extracts a corresponding initial feature vector: a multi-dimensional reference vector the system uses to recognize the face, composed of a number of feature items and the parameters corresponding to each item. The training method of the model, the acquisition of the existing data, and the categories and items of the initial feature vector it extracts all embody experience the inventor accumulated, summarized and continuously refined through long-term experiment and application. Building the model unit on deep convolution has two advantages: for the proprietor, the security factor is high and others cannot easily break the system; for the user, a face recognition system built on such a model unit is highly automated, computationally strong, accurate and hard to push into error, and the adaptive property of the model module lets its face recognition ability be corrected and improved continuously, greatly raising the reliability of the system.
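As a rough sketch of such a model unit (the patent does not disclose the actual network layout, training data or feature items), a deep convolutional network mapping a face image to a fixed-length feature vector might look like the following, assuming PyTorch and a toy architecture:

    import torch
    import torch.nn as nn

    class FaceEmbeddingNet(nn.Module):
        # Toy model unit: 112x112 RGB face image -> 128-dim initial feature vector F1.
        def __init__(self, embedding_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(128, embedding_dim)

        def forward(self, x):
            f = self.features(x).flatten(1)             # (N, 128)
            emb = self.fc(f)                            # (N, embedding_dim)
            return nn.functional.normalize(emb, dim=1)  # L2-normalised feature vector

    model = FaceEmbeddingNet().eval()
    with torch.no_grad():
        f1 = model(torch.randn(1, 3, 112, 112))         # F1 for one (random) face image

The L2 normalisation makes the distance between two such feature vectors comparable across images, which matters for the distance-threshold judgment described later.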
The pre-influence module of the background comprises a metadata unit and a pre-influence unit; the input end of the metadata unit is connected with the output end of the model module, the output end of the metadata unit is connected with the input end of the pre-influence unit, and the output end of the pre-influence unit is connected with the judgment output module. The pre-influence module further processes the operation result of the model module, extracting the data and vectors capable of pre-influencing the model module and storing them temporarily. Because the pre-influence module is in bidirectional communication connection with the model module, once these data and parameters are transmitted back, the model module evaluates them: if they help make the final face recognition result more accurate, they are allowed to correct the model, changing how the relevant feature vectors and parameters are extracted in subsequent face recognition and making the recognition results of the whole system more accurate and reliable; if they do not help, they are not allowed to correct the model, so accidental errors cannot disturb the system's calculations. The pre-influence module thus lets the whole system adapt itself and continuously, autonomously improve its face recognition capability.
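The gating behaviour described above can be sketched as follows; the validation procedure, the evaluate function and the updated_with correction step are assumptions introduced for illustration, since the patent does not specify how the helpfulness of a correction is measured.

    def apply_pre_influence(model, candidate_update, validation_set, evaluate):
        # Accept a candidate correction only if it improves final recognition accuracy.
        # `candidate_update` stands for the pre-influence data/vectors; `evaluate`
        # returns an accuracy score on `validation_set`; both are hypothetical.
        baseline = evaluate(model, validation_set)
        trial_model = model.updated_with(candidate_update)  # hypothetical correction step
        if evaluate(trial_model, validation_set) > baseline:
            return trial_model   # correction helps: allow it to modify the model
        return model             # no help: reject it, avoiding accidental-error drift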
The judgment output module of the background comprises a judgment unit and an output unit; the input end of the judgment unit is connected with the output ends of the model module and the pre-influence module respectively, the output end of the judgment unit is connected with the input end of the output unit, and the output end of the output unit is connected with the mobile terminal. After the final feature vector computed jointly by the model module and the pre-influence module reaches the judgment output module, the judgment unit matches the result against the stock faces: two face images whose direct distance is smaller than a threshold are considered similar and judged to be the same person, and the output unit outputs the result. The judgment output module thus judges and outputs the joint operation result of the upstream model module and pre-influence module, completing the whole face recognition process. Applied concretely to a loan application flow: if, after matching, the judgment unit finds the face entered this time already present in the stock, it concludes that this applicant has already applied for the same kind of loan, and verification fails; if the face differs from every stock face, it is judged a new applicant, verification passes, and the face image from this application is stored in the database.
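A minimal sketch of this judgment step, assuming NumPy and Euclidean distance as the "direct distance" (the patent names neither the metric nor the threshold value):

    import numpy as np

    def match_against_stock(feature, stock_features, threshold=0.8):
        # Returns (True, index) if some stock face lies within `threshold` of the
        # current face (judged the same person); else (False, None) (new face).
        for i, stored in enumerate(stock_features):
            if np.linalg.norm(feature - stored) < threshold:   # "direct distance"
                return True, i
        return False, None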
A self-adaptive face visual authentication and identification method comprises the following steps:
Step 1: start;
Step 2: the mobile terminal collects and preliminarily processes the face image, and uploads it to the background through the judgment uploading module in the mobile terminal;
Step 3: after receiving the face image, the receiving module in the background transmits the face image to the model module and the pre-influence module for operation on specific features of the face image, and transmits the operation result to the judgment output module;
Step 4: the judgment output module compares the operation result produced jointly by the model module and the pre-influence module with the stock face data, calculates the direct distance between the features of the current face and the stock faces, and outputs the result to the mobile terminal through the output unit;
Step 5: end;
wherein Step 3 (after receiving the face image, the receiving module in the background transmits it to the model module and the pre-influence module for operation on specific features, and transmits the operation result to the judgment output module) comprises the following substeps:
31: the receiving module receives the face image and transmits it to the input end of the model module;
32: the model module extracts an initial feature vector F1 from the face image and transmits F1 to the pre-influence module and the judgment output module respectively;
33: after receiving the initial feature vector F1, the pre-influence module processes F1 to obtain space vector metadata;
34: the pre-influence module further processes the obtained space vector metadata to obtain a pre-influence feature vector F2, and transmits F2 to the model module and the judgment output module respectively.
In Step 3 of the method, the model unit in the model module has been trained in advance on existing data and has learned how to extract feature vectors from face images. After the model module extracts the initial feature vector F1 from the face image, F1 is sent both to the judgment output module, where it awaits judgment, and to the pre-influence module for pre-influence processing. The pre-influence module extracts space vector metadata from F1 and processes it further into the pre-influence feature vector F2. When F2 reaches the model module, the model unit evaluates it: if F2 can help the system further improve the final face recognition result, it is allowed to correct the model; if not, it is rejected. Step 3 therefore lets the system both recognize each face image uploaded to the background and, as users keep feeding new face image data into the system, continuously correct the model: the model automatically finds what the new face data have in common, keeps adapting to faces subsequently entered into the system, steadily improves the reliability of the whole system, reduces errors and raises accuracy.
Further, Step 2 (the mobile terminal collects and preliminarily processes the face image and uploads it to the background through the judgment uploading module in the mobile terminal) comprises the substeps:
21: the image acquisition module in the mobile terminal detects whether a face is present in the range covered by the lens; if so, it shoots the face, and if not, it waits;
22: the image detection module in the mobile terminal performs imaging quality detection on the captured face image; if the image passes, proceed to step 23, otherwise return to step 21;
23: the judgment uploading module performs human-computer interaction liveness detection on the user; if the detection passes, the key frame is extracted, converted into a picture and uploaded to the background; if not, return to step 21.
Step 2 is completed by the mobile terminal. In use, the mobile terminal automatically detects whether a face is present within the lens range; when a face is detected it is photographed and the image is checked (for example for definition, sharpness and brightness), so that poor acquisition quality cannot degrade the subsequent face recognition. After the imaging quality check, the judgment uploading module performs human-computer interaction liveness detection: the mobile terminal issues instructions at random and the user performs the corresponding actions, completing the liveness check. Through this series of checks the system can judge whether the user is genuine and live and whether to let the user proceed; the mobile terminal then converts the selected key frame into a picture and uploads it to the background, which performs the further face recognition on the picture.
Further, Step 4 (the judgment output module compares the joint operation result of the model module and the pre-influence module with the stock face data, calculates the direct distance between the features of the current face and the stock faces, and outputs the result to the mobile terminal through the output unit) comprises the following substeps:
41: the judgment unit in the judgment output module compares the operation result with the stock face data;
42: the judgment unit calculates the direct distance between the feature vector of the current face and each stock face feature vector; if the direct distance to any stock face is smaller than the set threshold, the faces are judged to be the same, a first result is output, and the method proceeds to step 43; if the direct distance exceeds the set threshold for every stock face, the face is judged to be newly authenticated, a second result is output, and the method proceeds to step 43;
43: if the output unit in the judgment output module receives the first result, it outputs corresponding result information to the mobile terminal; if it receives the second result, it stores the image of the current face into the database and outputs corresponding result information to the mobile terminal. Step 4 is completed by the background's judgment output module. It should be specially noted that the "direct distance" above refers to the distance between the feature vectors of the face images, and that the threshold is likewise set by the model module: cooperating with the pre-influence module, the model module adjusts the threshold from time to time according to new face images subsequently entering the system, ensuring the precision and accuracy of the judgment process.
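The patent does not disclose the adjustment rule for the threshold; one plausible illustration is to re-derive it from the distance statistics of recently enrolled faces, as in the following sketch, where the percentile choices are assumptions:

    import numpy as np

    def adjust_threshold(same_person_distances, different_person_distances):
        # Midpoint between the upper range of genuine-pair distances and the lower
        # range of impostor-pair distances; the real adjustment policy is undisclosed.
        same = np.percentile(same_person_distances, 95)
        diff = np.percentile(different_person_distances, 5)
        return (same + diff) / 2.0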
The invention has the following advantages over the prior art: a model unit is arranged in the model module of the background, the background is built on a deep convolutional network, and the model module is trained in advance on existing data, so the feature vector of a face image is extracted fully automatically. The system is also provided with the pre-influence module, which continuously corrects the model module, so that the whole system adapts automatically to new face images subsequently added to it; while face recognition is fully automated, the accuracy and precision of the system keep improving and its overall performance rises.
Drawings
Fig. 1 is a system overall block diagram of a face vision authentication adaptive system according to the present invention.
Fig. 2 is an overall flow chart of the adaptive face vision recognition method of the present invention.
Fig. 3 is a flow chart of the substeps of step 2 in the adaptive face vision recognition method of the present invention.
Fig. 4 is a flow chart of the substeps of step 3 in the adaptive face vision recognition method of the present invention.
Fig. 5 is a flow chart of the substeps of step 4 in the adaptive face vision recognition method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to achieve the above aims, the technical scheme of the invention is as follows:
referring to fig. 1-5, the present invention provides a human face visual authentication adaptive system, which includes a mobile terminal 1 and a background 2, wherein the mobile terminal 1 is in communication connection with the background 2; the mobile terminal 1 comprises an image acquisition module 11, an image detection module 12 and a judgment uploading module 13; the output end of the image acquisition module 11 is connected with the input end of the image detection module 12, the output end of the image detection module 12 is connected with the input end of the judgment uploading module 13, and the output end of the judgment uploading module 13 is in communication connection with the background 2;
the background 2 comprises a receiving module 21, a model module 22, a pre-influence module 23 and a judgment output module 24, wherein the input end of the receiving module 21 is connected with the moving end 1, the output end of the receiving module is connected with the input end of the model module 22, the model module 22 is in bidirectional communication connection with the pre-influence module 23, the output end of the pre-influence module 23 is also connected with the judgment output module 24, and the output end of the judgment output module 24 is connected with the moving end 1.
The system can be applied in particular to the financial industry, especially to loan applications: a client registers and enters face information in advance, which is stored in the background database; when the client logs in a second time, bank staff use a mobile terminal equipped with the system to recognize the user's face.
In the system, the mobile terminal 1 can be any of various mobile devices such as a smartphone. The face recognition system provided by the invention can be embedded directly into a user's existing APP without developing a separate mobile phone APP, which is convenient and fast. The mobile terminal 1 comprises an image acquisition module 11, an image detection module 12 and a judgment uploading module 13. When a user uses the system, the image acquisition module 11 in the mobile terminal 1 collects the user's face image, which is passed to the image detection module 12 for quality checking; the image detection module 12 first performs conventional checks on the face image, covering standard image quality parameters such as brightness, sharpness and colour saturation. After the quality check, the judgment uploading module 13 performs human-computer interaction liveness detection on the user: the mobile terminal 1 randomly issues simple instructions (such as turning the head, blinking or opening the mouth), and if the user performs the corresponding actions as instructed, the mobile terminal 1 judges the user to be a live body. After this judgment, the mobile terminal 1 uploads the selected key-frame image to the background for the next stage. Providing the mobile terminal 1 lets users run the system conveniently on their own devices and makes face image acquisition easy.
The system also comprises a background 2 built on a deep convolutional network. After the receiving module 21 in the background 2 receives the face image uploaded by the mobile terminal 1, it passes the image to the model module 22 for processing. The model module 22 is in two-way communication connection with the pre-influence module 23, and together they extract the feature vector of the face image and continuously correct the model. The background thereby recognizes the uploaded face image, performs the computation and extracts parameters, while at the same time automatically correcting the model according to the input face image data: the way face feature vectors are extracted is revised so that the system automatically adapts to subsequently added data and its accuracy keeps improving.
The model module 22 of the background 2 comprises a model unit 221 trained in advance on existing data, which has learned a method for extracting face feature vectors; the input end of the model unit 221 is connected with the output end of the receiving module 21, and the output end of the model unit 221 is connected with the pre-influence module 23 and the judgment output module 24 respectively. Every face is considered to have scale-invariant features, meaning the facial features do not change greatly with the size of the picture or with the distance and angle offsets of the face relative to the lens; by training on existing data in advance, the model unit 221 learns to extract an initial feature vector from a face image. For each face image uploaded to the background, the model unit 221 extracts a corresponding initial feature vector: a multi-dimensional reference vector the system uses to recognize the face, composed of a number of feature items and the parameters corresponding to each item. The training method of the model, the acquisition of the existing data, and the categories and items of the initial feature vector it extracts all embody experience the inventor accumulated, summarized and continuously refined through long-term experiment and application. Building the model unit 221 on deep convolution has two advantages: for the proprietor, the security factor is high and others cannot easily break the system; for the user, a face recognition system built on such a model unit is highly automated, computationally strong, accurate and hard to push into error, and the adaptive property of the model module lets its face recognition ability be corrected and improved continuously, greatly raising the reliability of the system.
The pre-influence module 23 of the background 2 comprises a metadata unit 231 and a pre-influence unit 232; the input end of the metadata unit 231 is connected with the output end of the model module 22, the output end of the metadata unit 231 is connected with the input end of the pre-influence unit 232, and the output end of the pre-influence unit 232 is connected with the judgment output module 24. The pre-influence module 23 further processes the operation result of the model module 22, extracting the data and vectors capable of pre-influencing the model module 22 and storing them temporarily. Because the pre-influence module 23 is in bidirectional communication connection with the model module 22, once these data and parameters are transmitted back, the model module 22 evaluates them: if they help make the final face recognition result more accurate, they are allowed to correct the model, changing how the relevant feature vectors and parameters are extracted in subsequent face recognition and making the recognition results of the whole system more accurate and reliable; if they do not help, they are not allowed to correct the model, so accidental errors cannot disturb the system's calculations. The pre-influence module 23 thus lets the whole system adapt itself and continuously, autonomously improve its face recognition capability.
The judgment output module 24 of the background 2 comprises a judgment unit 241 and an output unit 242; the input end of the judgment unit 241 is connected with the output ends of the model module 22 and the pre-influence module 23, the output end of the judgment unit 241 is connected with the input end of the output unit 242, and the output end of the output unit 242 is connected with the mobile terminal 1. After the final feature vector computed jointly by the model module 22 and the pre-influence module 23 reaches the judgment output module 24, the judgment unit 241 matches the result against the stock faces: two face images whose direct distance is smaller than the threshold are considered similar and judged to be the same person, and the output unit 242 outputs the result. The judgment output module thus judges and outputs the joint operation result of the upstream model module 22 and pre-influence module 23, completing the whole face recognition process. Applied concretely to a loan application flow: if, after matching, the judgment unit 241 finds the face entered this time already present in the stock, it concludes that this applicant has already applied for the same kind of loan, and verification fails; if the face differs from every stock face, it is judged a new applicant, verification passes, and the face image from this application is stored in the database.
A self-adaptive face visual authentication and identification method comprises the following steps:
S1: start;
S2: the mobile terminal 1 collects and preliminarily processes the face image, and uploads it to the background 2 through the judgment uploading module 13 in the mobile terminal 1;
S3: after receiving the face image, the receiving module 21 in the background 2 transmits the face image to the model module 22 and the pre-influence module 23 for operation on specific features of the face image, and transmits the operation result to the judgment output module 24;
S4: the judgment output module 24 compares the operation result produced jointly by the model module 22 and the pre-influence module 23 with the stock face data, calculates the direct distance between the features of the current face and the stock faces, and outputs the result to the mobile terminal 1 through the output unit;
S5: end;
wherein S3 (after receiving the face image, the receiving module 21 in the background 2 transmits it to the model module 22 and the pre-influence module 23 for operation on specific features, and transmits the operation result to the judgment output module 24) comprises the following substeps:
S31: the receiving module 21 receives the face image and transmits it to the input end of the model module 22;
S32: the model module 22 extracts an initial feature vector F1 from the face image and transmits F1 to the pre-influence module 23 and the judgment output module 24 respectively;
S33: after receiving the initial feature vector F1, the pre-influence module 23 processes F1 to obtain space vector metadata;
S34: the pre-influence module 23 further processes the obtained space vector metadata to obtain a pre-influence feature vector F2, and transmits F2 to the model module 22 and the judgment output module 24 respectively.
In S3 of the method, the model unit 221 in the model module 22 has been trained in advance on existing data and has learned how to extract feature vectors from face images. After the model module 22 extracts the initial feature vector F1 from the face image, F1 is sent both to the judgment output module 24, where it awaits judgment, and to the pre-influence module 23 for pre-influence processing. The pre-influence module 23 extracts space vector metadata from F1 and processes it further into the pre-influence feature vector F2. When F2 reaches the model module 22, the model unit 221 evaluates it: if F2 can help the system further improve the final face recognition result, it is allowed to correct the model; if not, it is rejected. S3 therefore lets the system both recognize each face image uploaded to the background 2 and, as users keep feeding new face image data into the system, continuously correct the model: the model automatically finds what the new face data have in common, keeps adapting to faces subsequently entered into the system, steadily improves the reliability of the whole system, reduces errors and raises accuracy.
Further, S2 (the mobile terminal 1 collects and preliminarily processes the face image and uploads it to the background 2 through the judgment uploading module 13 in the mobile terminal 1) comprises the substeps:
S21: the image acquisition module 11 in the mobile terminal 1 detects whether a face is present in the range covered by the lens; if so, it shoots the face, and if not, it waits;
S22: the image detection module 12 in the mobile terminal performs imaging quality detection on the captured face image; if the image passes, proceed to S23, otherwise return to S21;
S23: the judgment uploading module 13 performs human-computer interaction liveness detection on the user; if the detection passes, the key frame is extracted, converted into a picture and uploaded to the background 2; if not, return to S21.
S2 is completed by the mobile terminal 1. In use, the mobile terminal 1 automatically detects whether a face is present within the lens range; when a face is detected it is photographed and the image is checked (for example for definition, sharpness and brightness), so that poor acquisition quality cannot degrade the subsequent face recognition. After the imaging quality check, the judgment uploading module 13 performs human-computer interaction liveness detection: the mobile terminal 1 issues instructions at random and the user performs the corresponding actions, completing the liveness check. Through this series of checks the system can judge whether the user is genuine and live and whether to let the user proceed; the mobile terminal then converts the selected key frame into a picture and uploads it to the background 2, which performs the further face recognition on the picture.
Further, S4 (the judgment output module 24 compares the joint operation result of the model module 22 and the pre-influence module 23 with the stock face data, calculates the direct distance between the features of the current face and the stock faces, and outputs the result to the mobile terminal through the output unit) comprises the following substeps:
S41: the judgment unit 241 in the judgment output module 24 compares the operation result with the stock face data;
S42: the judgment unit 241 calculates the direct distance between the feature vector of the current face and each stock face feature vector; if the direct distance to any stock face is smaller than the set threshold, the faces are judged to be the same, a first result is output, and the method proceeds to S43; if the direct distance exceeds the set threshold for every stock face, the face is judged to be newly authenticated, a second result is output, and the method proceeds to S43;
S43: if the output unit 242 in the judgment output module 24 receives the first result, it outputs corresponding result information to the mobile terminal 1; if the output unit 242 receives the second result, it stores the image of the current face into the database and outputs corresponding result information to the mobile terminal 1. S4 is completed by the judgment output module 24 of the background 2. It should be particularly noted that the "direct distance" above refers to the distance between the feature vectors of the face images, and that the threshold is likewise set by the model module: cooperating with the pre-influence module 23, the model module 22 adjusts the threshold from time to time according to new face images subsequently entering the system, ensuring the accuracy of the judgment process.
When the system is applied to loan applications, the operator uses it to identify the applicant's face and prevent the same person from applying a second time for the same kind of loan under a different identity. The operator installs the system on a suitable mobile terminal, which acquires the face image data and uploads it to the background. After the background recognizes the face image: if it judges that the direct distance between the current face and some stored face is smaller than the threshold, the two faces are deemed similar, that is, the same face; verification fails, and verification-failure information is output as the first result. If after matching the direct distance between the current face and every stored face is greater than the threshold, the current face is not yet in the database; verification succeeds, the current face is stored in the database, and verification-success information is output as the second result.
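Tying the pieces together for the loan scenario, the background decision could be sketched as below; the distance metric, the threshold value and the database.store call are illustrative assumptions rather than the disclosed implementation.

    import numpy as np

    def verify_loan_applicant(feature, stock_features, database, threshold=0.8):
        # First result: face already in stock -> duplicate application, verification fails.
        # Second result: new face -> verification succeeds and the face enters the stock.
        for stored in stock_features:
            if np.linalg.norm(feature - stored) < threshold:
                return {"result": "first", "verified": False}
        database.store(feature)   # hypothetical persistence call
        return {"result": "second", "verified": True}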
The invention has the following advantages over the prior art: a model unit is arranged in the model module of the background 2, the background 2 is built on a deep convolutional network, and the model module 22 in the background 2 is trained in advance on existing data, so the feature vector of a face image is extracted fully automatically. The system is also provided with the pre-influence module 23, which continuously corrects the model module 22, so that the whole system adapts automatically to new face images subsequently added to it; while face recognition is fully automated, the accuracy and precision of the system keep improving and its overall performance rises.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (2)

1. A human face vision authentication self-adaptive system, comprising a mobile terminal and a background, wherein the mobile terminal is in communication connection with the background; characterized in that the mobile terminal comprises an image acquisition module, an image detection module and a judgment uploading module; the output end of the image acquisition module is connected with the input end of the image detection module, the output end of the image detection module is connected with the input end of the judgment uploading module, and the output end of the judgment uploading module is in communication connection with the background;
the background is built on a deep convolutional network and comprises a receiving module, a model module, a pre-influence module and a judgment output module; the input end of the receiving module is connected with the mobile terminal, and the output end of the receiving module is connected with the input end of the model module; the model module is in bidirectional communication connection with the pre-influence module; the output end of the pre-influence module is also connected with the judgment output module, and the output end of the judgment output module is connected with the mobile terminal;
the background model module comprises a model unit which is trained by using the existing data in advance and learns a method for extracting the face feature vector; the input end of the model unit is connected with the output end of the receiving module, and the output end of the model unit is respectively connected with the pre-influence module and the judgment output module;
the background pre-influence module comprises a metadata unit and a pre-influence unit, the input end of the metadata unit is connected with the output end of the model module, the output end of the metadata unit is connected with the input end of the pre-influence unit, and the output end of the pre-influence unit is connected with the judgment output module;
the background judgment output module comprises a judgment unit and an output unit; the input end of the judgment unit is connected with the output ends of the model module and the pre-influence module respectively, the output end of the judgment unit is connected with the input end of the output unit, and the output end of the output unit is connected with the mobile terminal.
2. A self-adaptive face visual authentication and identification method comprises the following steps:
Step 1: start;
Step 2: the mobile terminal collects and preliminarily processes the face image, and uploads it to a background through a judgment uploading module in the mobile terminal;
Step 3: after receiving the face image, a receiving module in the background transmits the face image to a model module and a pre-influence module for operation on specific features of the face image, and transmits the operation result to a judgment output module;
Step 4: the judgment output module compares the operation result produced jointly by the model module and the pre-influence module with the stock face data, calculates the direct distance between the features of the current face and the stock faces, and outputs the result to the mobile terminal through the output unit;
Step 5: end;
characterized in that Step 3 (after receiving the face image, a receiving module in the background transmits it to a model module and a pre-influence module for operation on specific features, and transmits the operation result to a judgment output module) comprises the following substeps:
31: the receiving module receives the face image and transmits it to the input end of the model module;
32: the model module extracts an initial feature vector F1 from the face image and transmits F1 to the pre-influence module and the judgment output module respectively;
33: after receiving the initial feature vector F1, the pre-influence module processes F1 to obtain space vector data;
34: the pre-influence module further processes the obtained space vector data to obtain pre-influence feature vectors, and transmits them to the model module and the judgment output module respectively;
Step 2 (the mobile terminal collects and preliminarily processes the face image and uploads it to the background through the judgment uploading module in the mobile terminal) comprises the substeps:
21: an image acquisition module in the mobile terminal detects whether a face is present in the range covered by the lens; if so, it shoots the face, and if not, it waits;
22: an image detection module in the mobile terminal performs imaging quality detection on the captured face image; if the image passes, proceed to step 23, otherwise return to step 21;
23: the judgment uploading module performs human-computer interaction liveness detection on the user; if the detection passes, the key frame is extracted, converted into a picture and uploaded to the background; if not, return to step 21;
Step 4 (the judgment output module compares the operation result produced by the cooperative operation of the model module and the pre-influence module with the stock face data, calculates the straight-line distance between the features of the currently processed face and those of the stock faces, and outputs the calculation result to the mobile terminal through the output unit) comprises the following substeps:
41: the judgment unit in the judgment output module compares the operation result with the stock face data;
42: the judgment unit calculates the straight-line (Euclidean) distance between the feature vector of the currently processed face and the feature vectors of the stock faces; if the distance to any one stock face is less than the set threshold, the two are judged to be the same face, a first result is output, and substep 43 is entered; if the distance to every stock face is greater than the set threshold, the face is judged to be a newly authenticated face, a second result is output, and substep 43 is entered;
43: if the output unit in the judgment output module receives the first result, it outputs the corresponding result information to the mobile terminal; if it receives the second result, it stores the image of the currently processed face in the database and outputs the corresponding result information to the mobile terminal (a distance-comparison sketch follows below).
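(Illustrative sketch for substeps 41-43. "Straight-line distance" is read here as the Euclidean distance between feature vectors; the stock dictionary, the threshold value and the result strings are assumptions made for the example.)

    import numpy as np

    def judge(query_vec: np.ndarray, stock: dict, threshold: float = 1.1) -> str:
        """Compare the current face's feature vector against stock vectors."""
        best_id, best_dist = None, float("inf")
        for face_id, vec in stock.items():
            # Substep 42: straight-line (Euclidean) distance to each stock face.
            dist = float(np.linalg.norm(query_vec - vec))
            if dist < best_dist:
                best_id, best_dist = face_id, dist
        if best_dist < threshold:
            # First result: judged to be the same face as a stock face.
            return "first result: same face as %s" % best_id
        # Second result: newly authenticated face; store it (substep 43).
        stock["new_%d" % len(stock)] = query_vec
        return "second result: new face stored"

The appropriate threshold depends entirely on the embedding model; dlib's face recognition model, for instance, conventionally uses a distance of about 0.6 for its 128-dimensional embeddings, so the 1.1 above is only a stand-in.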
CN201810523005.0A 2018-05-28 2018-05-28 Self-adaptive face vision authentication method based on interconnection of mobile terminal and background Active CN108446687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523005.0A CN108446687B (en) 2018-05-28 2018-05-28 Self-adaptive face vision authentication method based on interconnection of mobile terminal and background

Publications (2)

Publication Number Publication Date
CN108446687A (en) 2018-08-24
CN108446687B (en) 2022-02-01

Family

ID=63204716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523005.0A Active CN108446687B (en) 2018-05-28 2018-05-28 Self-adaptive face vision authentication method based on interconnection of mobile terminal and background

Country Status (1)

Country Link
CN (1) CN108446687B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104823A (en) * 2018-10-25 2020-05-05 北京奇虎科技有限公司 Face recognition method and device, storage medium and terminal equipment
CN110047072A (en) * 2019-04-30 2019-07-23 福建南方路面机械有限公司 Gravel size identification and processing system and method based on mobile internet
CN110223421B (en) * 2019-05-09 2020-07-21 重庆特斯联智慧科技股份有限公司 Access control method and system adaptive to dynamic change of human face
CN110795500A (en) * 2019-09-25 2020-02-14 北京旷视科技有限公司 Method, device, system and storage medium for storing face data into a database
CN111611934A (en) * 2020-05-22 2020-09-01 北京华捷艾米科技有限公司 Face detection model generation and face detection method, device and equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105681316A (en) * 2016-02-02 2016-06-15 腾讯科技(深圳)有限公司 Identity verification method and device
CN106127233A (en) * 2016-06-15 2016-11-16 天津中科智能识别产业技术研究院有限公司 Eye open/closed state detection method and system
CN106600396A (en) * 2016-10-27 2017-04-26 深圳前海微众银行股份有限公司 Account information upgrading method and apparatus
CN106778607A (en) * 2016-12-15 2017-05-31 国政通科技股份有限公司 Device and method for authenticating that a person and an identity card are the same, based on face recognition
CN106803289A (en) * 2016-12-22 2017-06-06 五邑大学 Intelligent mobile anti-counterfeiting check-in method and system
CN106846577A (en) * 2017-01-19 2017-06-13 泰康保险集团股份有限公司 Face-recognition-based personnel entry and exit permission control method and device
CN107545889A (en) * 2016-06-23 2018-01-05 华为终端(东莞)有限公司 Optimization method, apparatus and terminal device for pattern recognition models
CN107733911A (en) * 2017-10-30 2018-02-23 郑州云海信息技术有限公司 Client login authentication system and method for a power and environment monitoring system client
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Living body detection method, computer device and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150103184A1 (en) * 2013-10-15 2015-04-16 Nvidia Corporation Method and system for visual tracking of a subject for automatic metering using a mobile device

Also Published As

Publication number Publication date
CN108446687A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
CN108446687B (en) Self-adaptive face vision authentication method based on interconnection of mobile terminal and background
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
JP5008269B2 (en) Information processing apparatus and information processing method
US20170262472A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
Kukharev et al. Visitor identification-elaborating real time face recognition system
US8620036B2 (en) System and method for controlling image quality
US20120051605A1 (en) Method and apparatus of a gesture based biometric system
CN110263603B (en) Face recognition method and device based on central loss and residual error visual simulation network
US9449217B1 (en) Image authentication
CN110599187A (en) Payment method and device based on face recognition, computer equipment and storage medium
CN114218543A (en) Encryption and unlocking system and method based on multi-scene expression recognition
CN205644823U (en) Social security self -service terminal device
KR20200119425A (en) Apparatus and method for domain adaptation-based object recognition
WO2018185574A1 (en) Apparatus and method for documents and/or personal identities recognition and validation
Sai et al. Student Attendance Monitoring System Using Face Recognition
KR20080073598A (en) Method of real time face recognition, and the apparatus thereof
Sharanya et al. Online attendance using facial recognition
CN107220612B (en) Blurred-face discrimination method based on high-frequency analysis of the local neighborhoods of key points
KR20070118806A (en) Method of detecting face for embedded system
Amjed et al. Noncircular iris segmentation based on weighted adaptive hough transform using smartphone database
CN111428670B (en) Face detection method, face detection device, storage medium and equipment
CN113591619A (en) Face recognition verification device based on video and verification method thereof
CN112418078A (en) Score modulation method, face recognition device and medium
CN112182537A (en) Monitoring method, device, server, system and storage medium
CN111985925A (en) Multi-mode biological recognition payment method based on iris recognition and face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220107

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Weisi e-commerce (Shenzhen) Co.,Ltd.

Address before: Room 1909, building B, Park Road building, 26 Dengliang Road, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN JIEJIAO ELECTRONIC COMMERCE Co.,Ltd.

GR01 Patent grant