WO2020211387A1 - 电子合同显示方法、装置、电子设备及计算机存储介质 - Google Patents

电子合同显示方法、装置、电子设备及计算机存储介质 Download PDF

Info

Publication number
WO2020211387A1
WO2020211387A1 PCT/CN2019/121770 CN2019121770W WO2020211387A1 WO 2020211387 A1 WO2020211387 A1 WO 2020211387A1 CN 2019121770 W CN2019121770 W CN 2019121770W WO 2020211387 A1 WO2020211387 A1 WO 2020211387A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
facial
preset
training
deep learning
Prior art date
Application number
PCT/CN2019/121770
Other languages
English (en)
French (fr)
Inventor
卢宁
徐国强
邱寒
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2020211387A1 publication Critical patent/WO2020211387A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6209Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Definitions

  • the present invention relates to the field of face recognition, in particular to a method, device, electronic equipment and computer storage medium for displaying an electronic contract based on face recognition.
  • the display control of electronic contracts in electronic devices depends on touch operations to trigger display.
  • the confidentiality of electronic contracts is generally relatively high, the method of opening electronic contracts based on user touch operations is not very secure.
  • people generally check electronic contracts on the way to get off work or on business trips, but it is inconvenient to manually trigger up and down or left and right pages.
  • the first aspect of the application provides an electronic contract display method, the method includes the steps:
  • the electronic contract is unlocked and displayed for the user to watch.
  • the second aspect of the present application provides an electronic contract display device, the device includes:
  • the acquisition module is used to acquire a face image
  • Face recognition module for:
  • the display module is used to unlock and display the electronic contract for the user to watch when the recognized face image matches the target face image.
  • a third aspect of the present application provides an electronic device, the electronic device includes a processor, and the processor is configured to implement the electronic contract display method when executing a computer program stored in a memory.
  • the fourth aspect of the present application provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a processor, the electronic contract display method is implemented.
  • the present invention uses a pre-trained preset deep learning model to recognize the face image and determine whether the recognized face image matches the stored target face image, and when the recognized face image matches the target person When the face images match, the electronic contract is unlocked and displayed for the user to watch, thereby improving the security of viewing the electronic contract.
  • the present invention also obtains the face image and uses the trained facial action classification model to determine the facial action in the face image, and according to the analyzed facial action of the user, it searches and searches a preset facial operation instruction relation table. The operation instruction corresponding to the facial motion is controlled according to the determined operation instruction to control the electronic contract, which is convenient for the user to operate the electronic contract.
  • Fig. 1 is an application environment diagram of an electronic contract display method in an embodiment of the present invention.
  • Fig. 2 is a flowchart of an electronic contract display method in an embodiment of the present invention.
  • Fig. 3 is a structural diagram of an electronic contract display device in an embodiment of the present invention.
  • Figure 4 is a schematic diagram of the electronic device of the present invention.
  • the electronic contract display method of the present invention is applied to one or more electronic devices.
  • the electronic device is a device that can automatically perform numerical calculation and/or information processing in accordance with pre-set or stored instructions. Its hardware includes, but is not limited to, a microprocessor and an application specific integrated circuit (ASIC) , Field-Programmable Gate Array (FPGA), Digital Processor (Digital Signal Processor, DSP), embedded equipment, etc.
  • ASIC application specific integrated circuit
  • FPGA Field-Programmable Gate Array
  • DSP Digital Processor
  • embedded equipment etc.
  • the electronic device may be a computing device such as a desktop computer, a notebook computer, a tablet computer, and a cloud server.
  • the device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • Fig. 1 is a schematic diagram of an application environment of an electronic contract display method in an embodiment of the present invention.
  • the electronic contract display method is applied in the user terminal 1.
  • the user terminal 1 is in communication connection with the server 2 through the network 3, and is used for uploading the collected face image to the server 2.
  • the user terminal 1 may be a mobile phone, a computer device, a tablet computer, or other devices.
  • the server 2 can be a single server, a server cluster or a cloud server.
  • the network 3 used to support the communication between the user terminal 1 and the server 2 may be a wired network or a wireless network, such as radio, wireless fidelity (WIFI), cellular, satellite, broadcasting, etc.
  • WIFI wireless fidelity
  • Fig. 2 is a flowchart of an electronic contract display method in an embodiment of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • the electronic contract display method specifically includes the following steps:
  • Step S201 Obtain a face image.
  • the user terminal 1 includes an image acquisition unit 11.
  • the image collection unit 11 is used to collect face images.
  • the image acquisition unit 11 may be a 2D camera, and the user terminal 1 acquires a user's face image as a face image through the 2D camera.
  • the image acquisition unit may also be a 3D camera, and the user terminal 1 acquires a user's 3D face image as a face image through the 3D camera.
  • the acquired face image may be a face picture or a face video.
  • Step S202 Recognize the face image by using the trained preset deep learning model and determine whether the face image matches the stored target face image.
  • the user terminal 1 compares the recognized face image with the target face image after recognizing the face image, and determines whether to unlock and display the electronic contract according to the comparison result.
  • the target face image is stored in the user terminal 1 or the server 3.
  • step S206 is executed; otherwise, step S203 is executed.
  • the target face image is stored in the user terminal 1 or the server 3.
  • the recognizing the face image by using the trained preset deep learning model and determining whether the recognized face image matches the stored target face image includes:
  • the preset deep learning model is a deep learning model based on a multilayer neural network.
  • the preset deep learning model includes multiple base layers, and each base layer can be used as an independent feature extraction layer to extract local features of the face image.
  • the multilayer neural network may be a convolutional neural network. That is, the preset deep learning model includes an input layer, multiple convolutional layers for feature extraction, a fully connected layer, and an output layer.
  • the input layer is used to provide input channels for the face image or the target face image;
  • the convolutional layer can be used as an independent feature extraction layer to train and extract the local features of the face image or the target face image;
  • the fully connected layer can The local features extracted by the training of each convolutional layer are integrated, and the image features extracted by the training of each convolutional layer are connected into a one-dimensional vector;
  • the output layer is used to output the classification results of the input face image samples.
  • the method further includes: training the preset deep learning model.
  • a preset number of face image samples can be stored in the server 2, and the user classifies these face image samples; for example, 10,000 faces can be prepared Image samples, and then classify the 10,000 face image samples according to the users to which these face image samples belong, and calibrate each classified face image sample according to the user to which it belongs.
  • the classified Each category is calibrated as A, B, C, etc., and each user has 10-100 pictures. At this time, the face image samples in each category belong to the same user.
  • the preset deep learning model can be used as the classification model at this time, and these face image samples are input into the preset deep learning model as training samples. Training, and adjusting the weight parameters of the connections between nodes on each base layer of the preset deep learning model according to the classification results output by the preset deep learning model. After the preset deep learning model is trained based on the input training samples after each adjustment, the accuracy of the output classification result will be gradually improved compared with the classification result calibrated by the user. At the same time, the user can set an accuracy threshold in advance. In the continuous adjustment process, if the classification result output by the preset deep learning model is compared with the classification result calibrated by the user, the accuracy reaches the preset accuracy threshold. Later, at this time, the weight parameters connected between the base-level nodes in the preset deep learning model are all optimal weight parameters, and it can be considered that the preset deep learning model has been trained.
  • the user terminal 1 uses the trained preset deep learning model to extract the face feature vector of the face image and the target face image.
  • a target face image database can be created in advance in the user terminal 1, and each target face image in the target face image database can be used for face recognition on the face image.
  • the user terminal 1 may use the target face image in the target face image database as the input image included in the preset deep learning model.
  • Feature training is performed sequentially in multiple convolutional layers. After the training of each convolutional layer is completed, the feature vector output by the fully connected layer can be extracted as the face feature vector of the target face image.
  • the user terminal 1 when extracting facial features from a facial image, can use the facial image as an input image in the preset deep learning model in the same processing manner.
  • Feature training is sequentially performed in the convolutional layer, and after the training of each convolutional layer is completed, the feature vector output by the fully connected layer can be extracted as the face feature vector of the face image.
  • the user terminal 1 may calculate the vector distance between the face feature vector of the face image and the face feature vector of the target face image, and then take a value based on the pre-established vector distance and similarity.
  • the corresponding relationship list of determines the similarity value corresponding to the vector distance.
  • the user terminal 1 pre-establishes a correspondence list of vector distances and similarity values according to the relationship between the feature vector and the similarity, and the correspondence list can be divided into multiples according to a preset vector distance threshold. Different similarity levels, and set a corresponding similarity value for each similarity level. Since the vector distance between feature vectors is usually inversely proportional to the similarity between feature vectors, when the vector distance is smaller, The higher the similarity value, the lower the similarity value when the vector distance is larger. In this way, the user terminal 1 can obtain the similarity value corresponding to the calculated vector distance by querying the correspondence list.
  • the vector distance may be a cosine distance or Euclidean distance, which is not particularly limited in this embodiment.
  • the user terminal 1 after converting the calculated vector distance into the corresponding similarity value, the user terminal 1 further determines whether the similarity value reaches the similarity threshold, and if the similarity value reaches the similarity At this time, the user terminal 1 can confirm that the face image and the target face image are the same or matching face images, and output the target face image as the recognition result. If the similarity value does not reach the similarity threshold, at this time, the user terminal 1 confirms that the face image and the target face image are not the same or matching face images.
  • the user terminal 1 can repeat the above In the process, continue to calculate the similarity value between the face image and the next target face image in the database, until the same or matching face image is found, or the entire database is not found to be the same as the face image Or stop when it matches the face image.
  • this case is not limited by the specific face recognition method used. Whether it is an existing face recognition method or a face recognition method developed in the future, it can be applied to the face recognition method of this case, and should also include Within the protection scope of the present invention.
  • Step S203 Unlock and display the electronic contract for the user to watch.
  • the electronic contract when the user terminal 1 determines that the face image matches the target face image, the electronic contract is unlocked and displayed on the user terminal 1 for the user to view.
  • a file list is displayed on the user terminal 1.
  • the file list includes multiple confidential contract file options, and the user terminal 1 displays a confidential contract file corresponding to the confidential contract file option for the user to view in response to the user's operation of selecting a confidential contract file option.
  • Step S204 Obtain a face image and use the trained facial action classification model to determine the facial action in the face image.
  • the facial motion is a feature of the user's facial motion.
  • the facial action categories include: blinking left eye category, blinking right eye category, frowning category, blinking eye category, and opening mouth category.
  • the facial action classification model includes, but is not limited to: a Support Vector Machine (SVM) model. Taking face images including blinking left eye, blinking right eye, blinking eyes, frowning or opening mouth as the input of the facial action classification model, and after calculating the facial action classification model, output the facial action category corresponding to the face image .
  • SVM Support Vector Machine
  • the training process of the facial action classification model includes:
  • facial motion data corresponding to 500 blink left eye category, blink right eye category, frown category, blink category, and open mouth category, and label each facial motion data category.
  • the facial motion data of the positive sample and the facial motion data of the negative sample are randomly divided into a training set of a first preset ratio and a verification set of a second preset ratio, and the training set is used to train the facial actions A classification model, and the verification set is used to verify the accuracy of the facial action classification model after training.
  • first preset ratio for example, 70%
  • the training is ended, and the facial action classification model after training is used as a classifier to identify the facial action category in the face image; if the accuracy rate is less than the preset accuracy rate.
  • the accuracy rate is set, the number of positive samples and the number of negative samples are increased to retrain the facial action classification model until the accuracy rate is greater than or equal to the preset accuracy rate.
  • Step S205 searching for an operation instruction corresponding to the facial movement from a preset facial operation instruction relation table according to the analyzed facial movement of the user, and controlling the electronic contract according to the determined operation instruction.
  • the facial operation instruction relationship table defines a plurality of correspondences between facial actions and operation instructions, where the facial action of blinking the left eye corresponds to the control instruction of turning the page to the left, and the facial action of blinking the right eye Corresponding to the control instruction for turning the page to the right, the facial movement of frowning corresponds to the control instruction of locking the page, the facial movement of blinking corresponds to the control instruction of unlocking the page, and the facial movement of opening the mouth corresponds to the saved control instruction.
  • the user terminal 1 searches the preset facial operation instruction relation table for the operation instruction corresponding to the blink of the left eye to turn the page to the left, and controls the Turn the pages of the electronic contract to the left.
  • the user terminal 1 searches the preset facial operation instruction relation table for the operation instruction corresponding to the blinking right eye to turn the page to the right, and controls the electronic contract to proceed to the right Turn the page.
  • the user terminal 1 searches the preset facial operation instruction relationship table for the operation instruction corresponding to the frown as the lock page, and controls the electronic contract to lock the page.
  • the user terminal 1 may acquire the facial features of the user through at least one of a bioelectric sensor, a muscle vibration sensor, and an infrared scanning sensor.
  • the information extracted by the bioelectric sensor and muscle vibration sensor used in this case is the physiological information of the human body.
  • the infrared scanning sensor used in this case is a sensor that uses the physical properties of infrared to measure. It can measure the change type and range of facial expressions, so that the user's facial expressions are determined according to the different change types and range of facial expressions.
  • Step S206 Display a reminder message to remind the user that he does not have the reading authority.
  • a reminder message is displayed to remind the user that the user does not have the reading authority, and an error that the face image does not match the target face image is recorded frequency.
  • an alarm message is issued.
  • the method further includes the step of receiving a user's setting operation to set the correspondence between facial actions and operation instructions in the facial operation instruction relationship table.
  • the user terminal 1 acquires a face image with facial actions through the image acquisition unit 11, and inputs the face image into a facial action classification model to parse the face image
  • the facial motions of the face and the parsed facial motions correspond to the operation instructions set by the user.
  • the user terminal 1 analyzes the facial motions in the face image, it controls the electronic contract in accordance with the facial motions.
  • the operation instruction corresponding to the action is operated.
  • FIG. 3 is a structural diagram of an electronic contract display device 40 in an embodiment of the present invention.
  • the electronic contract display device 40 runs in the user terminal 1.
  • the electronic contract display device 40 may include multiple functional modules composed of program code segments.
  • the program code of each program segment in the electronic contract display device 40 can be stored in the memory and executed by at least one processor to perform the function of face recognition.
  • the electronic contract display device 40 can be divided into multiple functional modules according to the functions it performs.
  • the electronic contract display device 40 may include an acquisition module 401, a face recognition module 402, a display module 403, a facial motion recognition module 404, an operation execution module 405, a reminder module 406, and a setting module 407.
  • the module referred to in the present invention refers to a series of computer program segments that can be executed by at least one processor and can complete fixed functions, and are stored in a memory. In some embodiments, the functions of each module will be detailed in subsequent embodiments.
  • the acquiring module 401 is used to acquire a face image.
  • the user terminal 1 includes an image acquisition unit 11.
  • the image collection unit 11 is used to collect face images.
  • the image acquisition unit 11 may be a 2D camera, and the acquisition module 401 acquires a user's face image as a face image through the 2D camera.
  • the image acquisition unit may also be a 3D camera, and the acquisition module 401 acquires a user's 3D face image as a face image through the 3D camera.
  • the acquired face image may be a face picture or a face video.
  • the face recognition module 402 is configured to recognize the face image using a preset deep learning model that has been trained and determine whether the face image matches the stored target face image.
  • the face recognition module 402 compares the recognized face image with the target face image after recognizing the face image, and determines whether to unlock and display the electronic contract according to the comparison result.
  • the target face image is stored in the user terminal 1 or the server 3.
  • the face recognition module 402 uses a pre-trained preset deep learning model to recognize the face image and determine whether the recognized face image matches the stored target face image includes:
  • the preset deep learning model is a deep learning model based on a multilayer neural network.
  • the preset deep learning model includes multiple base layers, and each base layer can be used as an independent feature extraction layer to extract local features of the face image.
  • the multilayer neural network may be a convolutional neural network. That is, the preset deep learning model includes an input layer, multiple convolutional layers for feature extraction, a fully connected layer, and an output layer.
  • the input layer is used to provide input channels for the face image or the target face image;
  • the convolutional layer can be used as an independent feature extraction layer to train and extract the local features of the face image or the target face image;
  • the fully connected layer can The local features extracted by the training of each convolutional layer are integrated, and the image features extracted by the training of each convolutional layer are connected into a one-dimensional vector;
  • the output layer is used to output the classification results of the input face image samples.
  • a preset number of face image samples can be stored in the server 2, and the user classifies these face image samples; for example, 10,000 images can be prepared Face image samples, and then classify these 10,000 face image samples according to the users to which these face image samples belong, and calibrate each classified face image sample according to the user to which it belongs. For example, you can classify Each of the latter categories is calibrated as A, B, C, etc., and each user has 10-100 pictures. At this time, the face image samples in each category belong to the same user. After the prepared preset number of face image samples are classified, the preset deep learning model can be used as the classification model at this time, and these face image samples are input into the preset deep learning model as training samples.
  • the preset deep learning model can be used as the classification model at this time, and these face image samples are input into the preset deep learning model as training samples.
  • the preset deep learning model After the preset deep learning model is trained based on the input training samples after each adjustment, the accuracy of the output classification result will be gradually improved compared with the classification result calibrated by the user. At the same time, the user can set an accuracy threshold in advance. In the continuous adjustment process, if the classification result output by the preset deep learning model is compared with the classification result calibrated by the user, the accuracy reaches the preset accuracy threshold. Later, at this time, the weight parameters connected between the base-level nodes in the preset deep learning model are all optimal weight parameters, and it can be considered that the preset deep learning model has been trained.
  • the user terminal 1 uses the trained preset deep learning model to extract the face feature vector of the face image and the target face image.
  • a target face image database can be created in advance in the user terminal 1, and each target face image in the target face image database can be used for face recognition on the face image.
  • the user terminal 1 may use the target face image in the target face image database as the input image included in the preset deep learning model.
  • Feature training is performed sequentially in multiple convolutional layers. After the training of each convolutional layer is completed, the feature vector output by the fully connected layer can be extracted as the face feature vector of the target face image.
  • the user terminal 1 when extracting facial features from a facial image, can use the facial image as an input image in the preset deep learning model in the same processing manner.
  • Feature training is sequentially performed in the convolutional layer, and after the training of each convolutional layer is completed, the feature vector output by the fully connected layer can be extracted as the face feature vector of the face image.
  • the user terminal 1 may calculate the vector distance between the face feature vector of the face image and the face feature vector of the target face image, and then take a value based on the pre-established vector distance and similarity.
  • the corresponding relationship list of determines the similarity value corresponding to the vector distance.
  • the user terminal 1 pre-establishes a correspondence list of vector distances and similarity values according to the relationship between the feature vector and the similarity, and the correspondence list can be divided into multiples according to a preset vector distance threshold. Different similarity levels, and set a corresponding similarity value for each similarity level. Since the vector distance between feature vectors is usually inversely proportional to the similarity between feature vectors, when the vector distance is smaller, The higher the similarity value, the lower the similarity value when the vector distance is larger. In this way, the user terminal 1 can obtain the similarity value corresponding to the calculated vector distance by querying the correspondence list.
  • the vector distance may be a cosine distance or Euclidean distance, which is not particularly limited in this embodiment.
  • the user terminal 1 after converting the calculated vector distance into the corresponding similarity value, the user terminal 1 further determines whether the similarity value reaches the similarity threshold, and if the similarity value reaches the similarity At this time, the user terminal 1 can confirm that the face image and the target face image are the same or matching face images, and output the target face image as the recognition result. If the similarity value does not reach the similarity threshold, at this time, the user terminal 1 confirms that the face image and the target face image are not the same or matching face images.
  • the user terminal 1 can repeat the above In the process, continue to calculate the similarity value between the face image and the next target face image in the database, until the same or matching face image is found, or the entire database is not found to be the same as the face image Or stop when it matches the face image.
  • this case is not limited by the specific face recognition method used. Whether it is an existing face recognition method or a face recognition method developed in the future, it can be applied to the face recognition method of this case, and should also include Within the protection scope of the present invention.
  • the display module 403 is used to unlock and display the electronic contract for the user to view when the recognized face image matches the target face image.
  • the display module 403 unlocks the electronic contract and displays it on the user terminal 1 for the user to view when it is determined that the face image matches the target face image.
  • a file list is displayed on the user terminal 1.
  • the file list includes multiple confidential contract file options, and the display module 403 displays the confidential contract file corresponding to the confidential contract file option for the user to view in response to the user's operation of selecting a confidential contract file option.
  • the facial action recognition module 404 is used to obtain a face image and use the trained facial action classification model to determine the facial action in the face image.
  • the facial motion is a feature of the user's facial motion.
  • the facial action categories include: blinking left eye category, blinking right eye category, frowning category, blinking eye category, and opening mouth category.
  • the facial action classification model includes, but is not limited to: a Support Vector Machine (SVM) model. Taking face images including blinking left eye, blinking right eye, blinking eyes, frowning or opening mouth as the input of the facial action classification model, and after calculating the facial action classification model, output the facial action category corresponding to the face image .
  • SVM Support Vector Machine
  • the training process of the facial action classification model includes:
  • facial motion data corresponding to 500 blink left eye category, blink right eye category, frown category, blink category, and open mouth category, and label each facial motion data category.
  • the facial motion data of the positive sample and the facial motion data of the negative sample are randomly divided into a training set of a first preset ratio and a verification set of a second preset ratio, and the training set is used to train the facial actions A classification model, and the verification set is used to verify the accuracy of the facial action classification model after training.
  • first preset ratio for example, 70%
  • the training is ended, and the facial action classification model after training is used as a classifier to identify the facial action category in the face image; if the accuracy rate is less than the preset accuracy rate.
  • the accuracy rate is set, the number of positive samples and the number of negative samples are increased to retrain the facial action classification model until the accuracy rate is greater than or equal to the preset accuracy rate.
  • the operation execution module 405 searches for an operation instruction corresponding to the facial movement from a preset facial operation instruction relation table according to the analyzed facial movement of the user, and controls the electronic contract according to the determined operation instruction.
  • the facial operation instruction relationship table defines a plurality of correspondences between facial actions and operation instructions, where the facial action of blinking the left eye corresponds to the control instruction of turning the page to the left, and the facial action of blinking the right eye Corresponding to the control instruction for turning the page to the right, the facial movement of frowning corresponds to the control instruction of locking the page, the facial movement of blinking corresponds to the control instruction of unlocking the page, and the facial movement of opening the mouth corresponds to the saved control instruction.
  • the operation execution module 405 searches the preset facial operation instruction relation table for the operation instruction corresponding to the blinking left eye to turn the page to the left, and controls all Turn the pages of the electronic contract to the left.
  • the operation execution module 405 searches the preset facial operation instruction relationship table for the operation instruction corresponding to the blinking right eye to turn the page to the right, and controls the electronic contract to turn right. Turn pages.
  • the operation execution module 405 searches the preset facial operation instruction relationship table for the operation instruction corresponding to the frown as the lock page, and controls the electronic contract to lock the page.
  • the operation execution module 405 may acquire the facial features of the user through at least one of a bioelectric sensor, a muscle vibration sensor, and an infrared scanning sensor.
  • the information extracted by the bioelectric sensor and muscle vibration sensor used in this case is the physiological information of the human body.
  • the infrared scanning sensor used in this case is a sensor that uses the physical properties of infrared to measure. It can measure the change type and range of facial expressions, so that the user's facial expressions are determined according to the different change types and range of facial expressions.
  • the reminder module 406 is configured to display a reminder message to remind the user that the user does not have the reading authority when the recognized face image does not match the target face image.
  • a reminder message is displayed to remind the user that the user does not have the reading authority, and an error that the face image does not match the target face image is recorded frequency.
  • an alarm message is issued.
  • the setting module 407 is configured to receive a user's setting operation to set the correspondence between facial actions and operation instructions in the facial operation instruction relationship table.
  • the setting module 407 obtains a face image with facial actions through the image acquisition unit 11, and inputs the face image into a facial action classification model to parse out the face The facial action of the image is established, and the parsed facial action is established in correspondence with the operation instruction set by the user.
  • the operation execution module 405 controls the electronic contract in accordance with The operation instruction corresponding to the facial motion is performed.
  • FIG. 4 is a schematic diagram of the electronic device 6 in an embodiment of the present invention.
  • the electronic device 6 may be the user terminal 1 in the present invention.
  • the electronic device 6 includes a memory 61, a processor 62, and a computer program 63 that is stored in the memory 61 and can run on the processor 62.
  • the processor 62 executes the computer program 63
  • the steps in the embodiment of the electronic contract display method described above are implemented, for example, steps S201 to S206 shown in FIG. 2.
  • the processor 62 executes the computer program 63
  • the function of each module/unit in the embodiment of the electronic contract display device described above is realized, for example, the modules 401 to 407 in FIG. 3.
  • the computer program 63 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 61 and executed by the processor 62 to complete this invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 63 in the electronic device 6.
  • the computer program 63 can be divided into a function entry determination module 401, a recording module 402, a preference tag determination module 403, and an adjustment module 404 in FIG. 3.
  • a function entry determination module 401 For specific functions of each module, refer to the second embodiment.
  • the electronic device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the schematic diagram is only an example of the electronic device 6 and does not constitute a limitation on the electronic device 6. It may include more or less components than those shown in the figure, or a combination of certain components, or different components. Components, for example, the electronic device 6 may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 62 may be a central processing module (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Ready-made programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor can be a microprocessor or the processor 62 can also be any conventional processor, etc.
  • the processor 62 is the control center of the electronic device 6 and connects the entire electronic device 6 through various interfaces and lines. Parts.
  • the memory 61 may be used to store the computer program 63 and/or modules/units.
  • the processor 62 runs or executes the computer programs and/or modules/units stored in the memory 61 and calls the computer programs and/or modules/units stored in the memory 61.
  • the data in 61 realizes various functions of the computer electronic device 6.
  • the memory 61 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the storage data area may The data (such as audio data, phone book, etc.) created according to the use of the electronic device 6 is stored.
  • the memory 61 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), and a Secure Digital (SD) Card, Flash Card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
  • a non-volatile memory such as a hard disk, a memory, a plug-in hard disk, a smart memory card (Smart Media Card, SMC), and a Secure Digital (SD) Card, Flash Card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
  • the integrated module/unit of the electronic device 6 is implemented in the form of a software function module and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the present invention implements all or part of the procedures in the above-mentioned embodiments and methods, and can also be completed by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • the computer program is executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory) , Random Access Memory (RAM, Random Access Memory), electrical carrier signal, telecommunications signal, and software distribution media, etc.
  • ROM Read-Only Memory
  • RAM Random Access Memory
  • electrical carrier signal telecommunications signal
  • software distribution media etc.
  • the content contained in the computer-readable medium can be appropriately added or deleted in accordance with the requirements of the legislation and patent practice in the jurisdiction.
  • the computer-readable medium Does not include electrical carrier signals and telecommunication signals.
  • the disclosed electronic device and method may be implemented in other ways.
  • the electronic device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
  • the functional modules in the various embodiments of the present invention may be integrated in the same processing module, or each module may exist alone physically, or two or more modules may be integrated in the same module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioethics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

一种基于人脸识别的电子合同显示方法、装置、电子设备及计算机存储介质,该方法包括:获取人脸图像;根据模型提取人脸图像和目标人脸图像的人脸特征向量;基于人脸特征向量计算人脸图像与目标人脸图像的相似度;根据相似度判断人脸图像是否与目标人脸图像相匹配;当人脸图像与目标人脸图像相匹配时解锁并显示电子合同。

Description

电子合同显示方法、装置、电子设备及计算机存储介质
本申请要求于2019年4月18日提交中国专利局、申请号为201910315169.9、发明名称为“电子合同显示方法、装置、电子设备及计算机存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及人脸识别领域,具体涉及一种基于人脸识别的电子合同显示方法、装置、电子设备及计算机存储介质。
背景技术
目前电子合同在电子装置中的显示控制都要依赖于触摸操作触发显示。然而,由于电子合同保密性一般都比较高,而基于用户的触摸操作打开电子合同的方式安全性不高。此外,人们一般在上下班或出差的路途上查看电子合同,然而通过手动触发上下或左右翻页及其不便。
发明内容
鉴于以上内容,有必要提出一种电子合同显示方法、装置、电子设备及计算机存储介质,以提高查看电子合同的安全性及便利性。
本申请的第一方面提供一种电子合同显示方法,所述方法包括步骤:
获取人脸图像;
根据已经训练好的所述预设深度学习模型提取所述人脸图像的人脸特征向量以及所述目标人脸图像的人脸特征向量;
计算所述人脸图像的人脸特征向量与所述目标人脸图像的人脸特征向量之间的向量距离;
根据预先建立的向量距离与相似度取值的对应关系列表确定与所述向量距离对应的相似度值,其中所述向量距离可以是余弦距离或欧氏距离;
根据计算出的所述相似度取值判断识别出的所述人脸图像是否与存储的所述目标人脸图像相匹配;及
当识别出的人脸图像与所述目标人脸图像相匹配时解锁并显示电子合同以供用户观看。
本申请的第二方面提供一种电子合同显示装置,所述装置包括:
获取模块,用于获取人脸图像;
人脸识别模块,用于:
根据已经训练好的所述预设深度学习模型提取所述人脸图像的人脸特征向量以及所述目标人脸图像的人脸特征向量;
计算所述人脸图像的人脸特征向量与所述目标人脸图像的人脸特征向量之间的向量距离;
根据预先建立的向量距离与相似度取值的对应关系列表确定与所述向量距离对应的相似度值,其中所述向量距离可以是余弦距离或欧氏距离;及
根据计算出的所述相似度取值判断识别出的所述人脸图像是否与存储的所述目标人脸图像相匹配;及
显示模块,用于当识别出的人脸图像与所述目标人脸图像相匹配时解锁并显示电子合同以供用户观看。
本申请的第三方面提供一种电子设备,所述电子设备包括处理器,所述处理器用于执行存储器中存储的计算机程序时实现所述电子合同显示方法。
本申请的第四方面提供一种计算机存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现所述电子合同显示方法。
本发明利用已训练好的预设深度学习模型识别所述人脸图像并判断识别出的人脸图像是否与存储的目标人脸图像相匹配,及当识别出的人脸图像与所述目标人脸图像相匹配时解锁并显示电子合同以供用户观看,从而提高查看电子合同的安全性。本发明还获取所述人脸图像并利用训练好的面部动作分类模型确定所述人脸图像中的面部动作,及根据分析出的用户的面部动作从预设的面部操作指令关系表中查找与所述面部动作相对应的操作指令,并根据确定的操作指令对所述电子合同进行控制,如此方便用户对电子合同的操作。
附图说明
图1为本发明一实施方式中电子合同显示方法的应用环境图。
图2是本发明一实施方式中电子合同显示方法的流程图。
图3是本发明一实施方式中电子合同显示装置的结构图。
图4为本发明电子设备的示意图。
具体实施方式
为了能够更清楚地理解本发明的上述目的、特征和优点,下面结合附图和具体实施例对本发明进行详细描述。需要说明的是,在不冲突的情况下,本申请的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本发明,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。
优选地,本发明电子合同显示方法应用在一个或者多个电子设备中。所 述电子设备是一种能够按照事先设定或存储的指令,自动进行数值计算和/或信息处理的设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程门阵列(Field-Programmable Gate Array,FPGA)、数字处理器(Digital Signal Processor,DSP)、嵌入式设备等。
所述电子设备可以是桌上型计算机、笔记本电脑、平板电脑及云端服务器等计算设备。所述设备可以与用户通过键盘、鼠标、遥控器、触摸板或声控设备等方式进行人机交互。
实施例1
图1是本发明一实施方式中电子合同显示方法的应用环境示意图。
参阅图1所示,所述电子合同显示方法应用在用户终端1中。所述用户终端1通过网络3与服务器2通信连接,用于将采集的人脸图像上传到所述服务器2中。本实施方式中,所述用户终端1可以为移动手机、计算机装置、平板电脑等装置。所述服务器2可以为单一的服务器、服务器集群或云端服务器。用于支持用户终端1与服务器2进行通信的网络3可以是有线网络,也可以是无线网络,例如无线电、无线保真(Wireless Fidelity,WIFI)、蜂窝、卫星、广播等。
图2是本发明一实施方式中电子合同显示方法的流程图。根据不同的需求,所述流程图中步骤的顺序可以改变,某些步骤可以省略。
参阅图2所示,所述电子合同显示方法具体包括以下步骤:
步骤S201、获取人脸图像。
本实施方式中,所述用户终端1包括一图像采集单元11。所述图像采集单元11用于采集人脸图像。例如,在一实现方式中,所述图像采集单元11可以为2D摄影机,所述用户终端1通过所述2D摄影机获取用户的人脸图像作为人脸图像。在另一实施方式中,所述图像采集单元也可以为3D摄像机,所述用户终端1通过所述3D摄像机获取用户的3D人脸图像作为人脸图像。本实施方式中,所述获取的人脸图像可以是人脸图片,也可以是人脸视频等。
步骤S202、利用已训练好的预设深度学习模型识别所述人脸图像并判断所述人脸图像是否与存储的目标人脸图像相匹配。
本实施方式中,用户终端1在识别出人脸图像后将识别出的人脸图像与所述目标人脸图像进行比较,并根据比较结果确定是否解锁并显示电子合同。本实施方式中,所述目标人脸图像存储在所述用户终端1或服务器3中。当识别出的人脸图像与所述目标人脸图像不相匹配则执行步骤S206,否则执行步骤S203。本实施方式中,所述目标人脸图像存储在所述用户终端1或服务器3中。
在一实施方式中,所述利用已训练好的预设深度学习模型识别所述人脸图像并判断识别出的人脸图像是否与存储的目标人脸图像相匹配包括:
(S2021)根据已经训练好的预设深度学习模型提取人脸图像的人脸特征向量以及目标人脸图像的人脸特征向量。
本实施方式中,所述预设深度学习模型为基于多层神经网络的深度学习 模型。所述预设深度学习模型包括多个基层,每一个基层可作为独立的特征提取层对人脸图像的局部特征进行提取。在一具体实施方式中,所述多层神经网络可以为卷积神经网络。也即,所述预设深度学习模型包括输入层、多个用于进行特征提取的卷积层、全连接层以及输出层。输入层用于为人脸图像或目标人脸图像提供输入通道;卷积层可以作为独立的特征提取层对所述人脸图像或所述目标人脸图像的局部特征进行训练提取;全连接层可以对各卷积层所训练提取出的局部特征进行整合,将各卷积层训练提取出的图像特征连接为一个一维向量;输出层用于输出对输入的人脸图像样本的分类结果。
本实施方式中,所述方法还包括:对所述预设深度学习模型进行训练。具体的,在对预设深度学习模型进行训练时,可以在服务器2中存储预设数量的人脸图像样本,并且由用户对这些人脸图像样本进行分类;例如,可以准备1万张人脸图像样本,然后按照这些人脸图像样本所归属的用户对这1万张人脸图像样本进行分类,并将每一个分类的人脸图像样本按照所属的用户进行标定,比如,可以将分类后的每一个分类分别标定为甲、乙、丙等,每一个用户具备10~100张不等的图片,此时每一个分类中的人脸图像样本均归属于同一个用户。当准备的预设数量的人脸图像样本分类完成后,此时可以将所述预设深度学习模型作为分类模型,将这些人脸图像样本作为训练样本输入到所述预设深度学习模型中进行训练,并根据预设深度学习模型输出的分类结果,对所述预设深度学习模型各基层上节点之间的连接的权重参数进行调整。所述预设深度学习模型在每次调整后基于输入的训练样本进行训练后,输出的分类结果与用户标定的分类结果相比,准确度将会逐渐提高。与此同时,用户可以预先设置一个准确度阈值,在不断的调整过程中,如果所述预设深度学习模型输出的分类结果与用户标定的分类结果相比,准确度达到预先设置的准确度阈值后,此时所述预设深度学习模型中各基层节点之间连接的权重参数均为最佳权重参数,可以认为所述预设深度学习模型已经训练完毕。
本实施方式中,在预设深度学习模型训练完毕后,用户终端1使用已训练好的所述预设深度学习模型,对人脸图像以及目标人脸图像进行人脸特征向量的提取。在具体实施方式中,在用户终端1中可以预先创建一个目标人脸图像数据库,所述目标人脸图像数据库中的每一张目标人脸图像均可在对人脸图像进行人脸识别时,作为与所述人脸图像进行比对的参照物。在针对目标人脸图像数据库中的目标人脸图像进行人脸特征提取时,用户终端1可以将目标人脸图像数据库中的目标人脸图像作为输入图像在所述预设深度学习模型中包含的多个卷积层中依次进行特征训练。当各卷积层均训练完成后,可以提取全连接层输出的特征向量作为所述目标人脸图像的人脸特征向量。
在本实施方式中,在针对人脸图像进行人脸特征提取时,用户终端1可以按照相同的处理方式,将所述人脸图像作为输入图像在所述预设深度学习模型中包含的多个卷积层中依次进行特征训练,当各卷积层均训练完成后,可以提取全连接层输出的特征向量作为所述人脸图像的人脸特征向量。
(S2022)基于所述人脸图像的人脸特征向量以及所述目标人脸图像的 人脸特征向量计算所述人脸图像与所述目标人脸图像的相似度取值。
本实施方式中,用户终端1可以计算所述人脸图像的人脸特征向量与所述目标人脸图像的人脸特征向量之间的向量距离,然后根据预先建立的向量距离与相似度取值的对应关系列表确定与所述向量距离对应的相似度值。
具体的,所述用户终端1根据特征向量与相似度之间的关系预先建立一个向量距离与相似度取值的对应关系列表,所述对应关系列表中可以根据预设的向量距离阈值划分为多个不同的相似度等级,并为每一个相似度等级设置一个对应的相似度取值,由于特征向量之间的向量距离通常与特征向量之间的相似度成反比,因此当向量距离越小时,相似度取值越高,当向量距离越大时,相似度取值越低。用户终端1通过这种方式可以通过查询所述对应关系列表就可以得到与计算出的向量距离对应的相似度取值。本实施方式中,所述向量距离可以是余弦距离或欧氏距离,在本实施方式中不进行特别限定。
(S2023)根据计算出的所述相似度取值判断识别出的所述人脸图像是否与存储的所述目标人脸图像相匹配。
本实施方式中,当将计算出的向量距离转换成对应的相似度取值后,所述用户终端1进一步判断所述相似度取值是否达到相似度阈值,如果所述相似度取值达到相似度阈值时,此时用户终端1可以确认所述人脸图像与所述目标人脸图像是相同或相匹配的人脸图像,并将所述目标人脸图像作为识别结果进行输出。如果所述相似度取值未达到相似度阈值时,此时用户终端1确认所述人脸图像与所述目标人脸图像不是相同或相匹配的人脸图像,此时用户终端1可以重复以上过程,继续计算所述人脸图像与数据库中的下一个目标人脸图像的相似度取值,直到查找到相同或相匹配的人脸图像,或者遍历整个数据库未发现与所述人脸图像相同或相匹配的人脸图像时停止。
应该理解,本案不受具体采用的人脸识别方法的限制,无论是现有的人脸识别方法还是将来开发的人脸识别方法,都可以应用于本案的人脸识别方法中,并且也应包括在本发明的保护范围内。
步骤S203、解锁并显示电子合同以供用户观看。
本实施方式中,用户终端1在确定出所述人脸图像与所述目标人脸图像相匹配时将所述电子合同解锁并显示在用户终端1上供用户查看。在一具体实施方式,在确定出所述人脸图像与所述目标人脸图像相匹配时在用户终端1上显示一文件列表。所述文件列表中包括多个保密合同文件选项,所述用户终端1响应用户选择一保密合同文件选项的操作显示与所述保密合同文件选项对应的保密合同文件供用户查看。
步骤S204、获取人脸图像并利用训练好的面部动作分类模型确定所述人脸图像中的面部动作。本实施方式中,所述面部动作为用户的面部动作特征。在本发明实施方式中,所述面部动作类别包括:眨左眼类别、眨右眼类别、皱眉类别、眨双眼类别、张口类别。
本实施方式中,面部动作分类模型包括,但不限于:支持向量机(Support Vector Machine,SVM)模型。将包含有眨左眼、眨右眼、眨双眼、皱眉或张口等人脸图像作为所述面部动作分类模型的输入,经过面部动作分类模型 计算后,输出对应所述人脸图像的面部动作类别。
在一实施方式中,所述面部动作分类模型的训练过程包括:
1)获取正样本的面部动作数据及负样本的面部动作数据,并将正样本的面部动作数据标注面部动作类别作为面部动作类别标签。
例如,分别选取500眨左眼类别、眨右眼类别、皱眉类别、眨双眼类别、张口类别对应的面部动作数据,并对每个面部动作数据标注类别,可以以“1”作为眨左眼的面部动作数据标签,以“2”作为眨右眼的面部动作数据标签,以“3”作为皱眉的面部动作数据标签,以“4”作为眨双眼的面部动作数据标签,以“5”作为张口的面部动作数据标签。
2)将所述正样本的面部动作数据及所述负样本的面部动作数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练所述面部动作分类模型,并利用所述验证集验证训练后的所述面部动作分类模型的准确率。
先将不同面部动作类别的训练集中的训练样本分发到不同的文件夹里。例如,将眨左眼类别的训练样本分发到第一文件夹里、将眨右眼类别的训练样本分发到第二文件夹里、将皱眉类别的训练样本分发到第三文件夹里、将眨双眼类别训练样本分发到第四文件夹里及将张口类别的训练样本分发到第五文件夹里。然后从不同的文件夹里分别提取第一预设比例(例如,70%)的训练样本作为总的训练样本进行面部动作分类模型的训练,从不同的文件夹里分别取剩余第二预设比例(例如,30%)的训练样本作为总的测试样本对训练完成的面部动作分类模型进行准确性验证。
3)若所述准确率大于或者等于预设准确率时,则结束训练,以训练后的所述面部动作分类模型作为分类器识别人脸图像中的面部动作类别;若所述准确率小于预设准确率时,则增加正样本数量及负样本数量以重新训练所述面部动作分类模型直至所述准确率大于或者等于预设准确率。
步骤S205、根据分析出的用户的面部动作从预设的面部操作指令关系表中查找与所述面部动作相对应的操作指令,并根据确定的操作指令对电子合同进行控制。
本实施方式中,所述面部操作指令关系表中定义有多个面部动作与操作指令的对应关系,其中,眨左眼的面部动作与向左翻页的控制指令对应,眨右眼的面部动作与向右翻页的控制指令对应,皱眉的面部动作与锁定页面的控制指令对应,眨双眼的面部动作与解除锁定页面的控制指令对应,张口的面部动作与保存的控制指令对应。本实施方式中,当确定面部动作为眨左眼时,所述用户终端1从预设的面部操作指令关系表中查找与眨左眼相对应的操作指令为向左翻页,并控制所述电子合同向左进行翻页。当确定面部动作为眨右眼时,所述用户终端1从预设的面部操作指令关系表中查找与眨右眼相对应的操作指令为向右翻页,并控制所述电子合同向右进行翻页。当确定面部动作为皱眉时,所述用户终端1从预设的面部操作指令关系表中查找与皱眉相对应的操作指令为锁定页面,并控制所述电子合同锁定页面。
在另一实施方式中,用户终端1可以通过生物电传感器、肌肉振动传感器以及红外扫描传感器中的至少一种获取用户的面部特征。本案采用的生物电传感器和肌肉振动传感器所提取的信息为人体的生理信息。本案采用的红外扫描传感器是利用红外线的物理性质来进行测量的传感器,其可以测量出面部表情的变化类型以及变化幅度,如此根据面部表情的不同变化类型及变化幅度确定出用户的表情动作。
步骤S206、显示一提醒信息提醒用户不具有阅读权限。
本实施方式中,当确定出所述人脸图像与所述目标人脸图像不相匹配时显示一提醒信息提醒用户不具有阅读权限,并记录人脸图像与目标人脸图像不相匹配的错误次数。当人脸图像与目标人脸图像不相匹配的错误次数超过预设次数时发出报警信息。
本实施方式中,所述方法还包括步骤:接收用户的设置操作设置面部操作指令关系表中面部动作与操作指令的对应关系。在一具体实施方式中,所述用户终端1通过所述图像采集单元11获取带有面部动作的人脸图像,将所述人脸图像输入到面部动作分类模型中以解析出所述人脸图像的面部动作,并将解析出的面部动作与用户设定的操作指令建立对应的关系,如此,用户终端1在解析出人脸图像中的面部动作时,控制所述电子合同按照与所述面部动作相对应的操作指令进行操作。
实施例2
图3为本发明一实施方式中电子合同显示装置40的结构图。
在一些实施例中,所述电子合同显示装置40运行于用户终端1中。所述电子合同显示装置40可以包括多个由程序代码段所组成的功能模块。所述电子合同显示装置40中的各个程序段的程序代码可以存储于存储器中,并由至少一个处理器所执行,以执行人脸识别的功能。
本实施例中,所述电子合同显示装置40根据其所执行的功能,可以被划分为多个功能模块。参阅图3所示,所述电子合同显示装置40可以包括获取模块401、人脸识别模块402、显示模块403、面部动作识别模块404、操作执行模块405、提醒模块406及设定模块407。本发明所称的模块是指一种能够被至少一个处理器所执行并且能够完成固定功能的一系列计算机程序段,其存储在存储器中。所述在一些实施例中,关于各模块的功能将在后续的实施例中详述。
所述获取模块401用于获取人脸图像。
本实施方式中,所述用户终端1包括一图像采集单元11。所述图像采集单元11用于采集人脸图像。例如,在一实现方式中,所述图像采集单元11可以为2D摄影机,所述获取模块401通过所述2D摄影机获取用户的人脸图像作为人脸图像。在另一实施方式中,所述图像采集单元也可以为3D摄像机,所述获取模块401通过所述3D摄像机获取用户的3D人脸图像作为人脸图像。本实施方式中,所述获取的人脸图像可以是人脸图片,也可以是人脸视频等。
所述人脸识别模块402用于利用已训练好的预设深度学习模型识别所述人脸图像并判断所述人脸图像是否与存储的目标人脸图像相匹配。
本实施方式中,所述人脸识别模块402在识别出人脸图像后将识别出的人脸图像与所述目标人脸图像进行比较,并根据比较结果确定是否解锁并显示电子合同。本实施方式中,所述目标人脸图像存储在所述用户终端1或服务器3中。在一实施方式中,所述人脸识别模块402利用已训练好的预设深度学习模型识别所述人脸图像并判断识别出的人脸图像是否与存储的目标人脸图像相匹配包括:
a)根据已经训练好的预设深度学习模型提取人脸图像的人脸特征向量以及目标人脸图像的人脸特征向量;
b)基于所述人脸图像的人脸特征向量以及所述目标人脸图像的人脸特征向量计算所述人脸图像与所述目标人脸图像的相似度取值;及
c)根据计算出的所述相似度取值判断识别出的所述人脸图像是否与存储的所述目标人脸图像相匹配。
本实施方式中,所述预设深度学习模型为基于多层神经网络的深度学习模型。所述预设深度学习模型包括多个基层,每一个基层可作为独立的特征提取层对人脸图像的局部特征进行提取。在一具体实施方式中,所述多层神经网络可以为卷积神经网络。也即,所述预设深度学习模型包括输入层、多个用于进行特征提取的卷积层、全连接层以及及输出层。输入层用于为人脸图像或目标人脸图像提供输入通道;卷积层可以作为独立的特征提取层对所述人脸图像或所述目标人脸图像的局部特征进行训练提取;全连接层可以对各卷积层所训练提取出的局部特征进行整合,将各卷积层训练提取出的图像特征连接为一个一维向量;输出层用于输出对输入的人脸图像样本的分类结果。
本实施方式中,在对预设深度学习模型进行训练时,可以在服务器2中存储预设数量的人脸图像样本,并且由用户对这些人脸图像样本进行分类;例如,可以准备1万张人脸图像样本,然后按照这些人脸图像样本所归属的用户对这1万张人脸图像样本进行分类,并将每一个分类的人脸图像样本按照所属的用户进行标定,比如,可以将分类后的每一个分类分别标定为甲、乙、丙等,每一个用户具备10~100张不等的图片,此时每一个分类中的人脸图像样本均归属于同一个用户。当准备的预设数量的人脸图像样本分类完成后,此时可以将所述预设深度学习模型作为分类模型,将这些人脸图像样本作为训练样本输入到所述预设深度学习模型中进行训练,并根据预设深度学习模型输出的分类结果,对所述预设深度学习模型各基层上节点之间的连接的权重参数进行调整。所述预设深度学习模型在每次调整后基于输入的训练样本进行训练后,输出的分类结果与用户标定的分类结果相比,准确度将会逐渐提高。与此同时,用户可以预先设置一个准确度阈值,在不断的调整过程中,如果所述预设深度学习模型输出的分类结果与用户标定的分类结果相比,准确度达到预先设置的准确度阈值后,此时所述预设深度学习模型中各基层节点之间连接的权重参数均为最佳权重参数,可以认为所述预设深度 学习模型已经训练完毕。
本实施方式中,在预设深度学习模型训练完毕后,用户终端1使用已训练好的所述预设深度学习模型,对人脸图像以及目标人脸图像进行人脸特征向量的提取。在具体实施方式中,在用户终端1中可以预先创建一个目标人脸图像数据库,所述目标人脸图像数据库中的每一张目标人脸图像均可在对人脸图像进行人脸识别时,作为与所述人脸图像进行比对的参照物。在针对目标人脸图像数据库中的目标人脸图像进行人脸特征提取时,用户终端1可以将目标人脸图像数据库中的目标人脸图像作为输入图像在所述预设深度学习模型中包含的多个卷积层中依次进行特征训练。当各卷积层均训练完成后,可以提取全连接层输出的特征向量作为所述目标人脸图像的人脸特征向量。
在本实施方式中,在针对人脸图像进行人脸特征提取时,用户终端1可以按照相同的处理方式,将所述人脸图像作为输入图像在所述预设深度学习模型中包含的多个卷积层中依次进行特征训练,当各卷积层均训练完成后,可以提取全连接层输出的特征向量作为所述人脸图像的人脸特征向量。
本实施方式中,用户终端1可以计算所述人脸图像的人脸特征向量与所述目标人脸图像的人脸特征向量之间的向量距离,然后根据预先建立的向量距离与相似度取值的对应关系列表确定与所述向量距离对应的相似度值。
具体的,所述用户终端1根据特征向量与相似度之间的关系预先建立一个向量距离与相似度取值的对应关系列表,所述对应关系列表中可以根据预设的向量距离阈值划分为多个不同的相似度等级,并为每一个相似度等级设置一个对应的相似度取值,由于特征向量之间的向量距离通常与特征向量之间的相似度成反比,因此当向量距离越小时,相似度取值越高,当向量距离越大时,相似度取值越低。用户终端1通过这种方式可以通过查询所述对应关系列表就可以得到与计算出的向量距离对应的相似度取值。本实施方式中,所述向量距离可以是余弦距离或欧氏距离,在本实施方式中不进行特别限定。
本实施方式中,当将计算出的向量距离转换成对应的相似度取值后,所述用户终端1进一步判断所述相似度取值是否达到相似度阈值,如果所述相似度取值达到相似度阈值时,此时用户终端1可以确认所述人脸图像与所述目标人脸图像是相同或相匹配的人脸图像,并将所述目标人脸图像作为识别结果进行输出。如果所述相似度取值未达到相似度阈值时,此时用户终端1确认所述人脸图像与所述目标人脸图像不是相同或相匹配的人脸图像,此时用户终端1可以重复以上过程,继续计算所述人脸图像与数据库中的下一个目标人脸图像的相似度取值,直到查找到相同或相匹配的人脸图像,或者遍历整个数据库未发现与所述人脸图像相同或相匹配的人脸图像时停止。
应该理解,本案不受具体采用的人脸识别方法的限制,无论是现有的人脸识别方法还是将来开发的人脸识别方法,都可以应用于本案的人脸识别方法中,并且也应包括在本发明的保护范围内。
所述显示模块403用于当识别出的人脸图像与所述目标人脸图像相匹配时解锁并显示电子合同以供用户观看。
本实施方式中,所述显示模块403在确定出所述人脸图像与所述目标人脸图像相匹配时将所述电子合同解锁并显示在用户终端1上供用户查看。在一具体实施方式,在确定出所述人脸图像与所述目标人脸图像相匹配时在用户终端1上显示一文件列表。所述文件列表中包括多个保密合同文件选项,所述显示模块403响应用户选择一保密合同文件选项的操作显示与所述保密合同文件选项对应的保密合同文件供用户查看。
所述面部动作识别模块404用于获取人脸图像并利用训练好的面部动作分类模型确定所述人脸图像中的面部动作。本实施方式中,所述面部动作为用户的面部动作特征。在本发明实施方式中,所述面部动作类别包括:眨左眼类别、眨右眼类别、皱眉类别、眨双眼类别、张口类别。
本实施方式中,面部动作分类模型包括,但不限于:支持向量机(Support Vector Machine,SVM)模型。将包含有眨左眼、眨右眼、眨双眼、皱眉或张口等人脸图像作为所述面部动作分类模型的输入,经过面部动作分类模型计算后,输出对应所述人脸图像的面部动作类别。
在一实施方式中,所述面部动作分类模型的训练过程包括:
1)获取正样本的面部动作数据及负样本的面部动作数据,并将正样本的面部动作数据标注面部动作类别作为面部动作类别标签。
例如,分别选取500眨左眼类别、眨右眼类别、皱眉类别、眨双眼类别、张口类别对应的面部动作数据,并对每个面部动作数据标注类别,可以以“1”作为眨左眼的面部动作数据标签,以“2”作为眨右眼的面部动作数据标签,以“3”作为皱眉的面部动作数据标签,以“4”作为眨双眼的面部动作数据标签,以“5”作为张口的面部动作数据标签。
2)将所述正样本的面部动作数据及所述负样本的面部动作数据随机分成第一预设比例的训练集和第二预设比例的验证集,利用所述训练集训练所述面部动作分类模型,并利用所述验证集验证训练后的所述面部动作分类模型的准确率。
先将不同面部动作类别的训练集中的训练样本分发到不同的文件夹里。例如,将眨左眼类别的训练样本分发到第一文件夹里、将眨右眼类别的训练样本分发到第二文件夹里、将皱眉类别的训练样本分发到第三文件夹里、将眨双眼类别训练样本分发到第四文件夹里及将张口类别的训练样本分发到第五文件夹里。然后从不同的文件夹里分别提取第一预设比例(例如,70%)的训练样本作为总的训练样本进行面部动作分类模型的训练,从不同的文件夹里分别取剩余第二预设比例(例如,30%)的训练样本作为总的测试样本对训练完成的面部动作分类模型进行准确性验证。
3)若所述准确率大于或者等于预设准确率时,则结束训练,以训练后的所述面部动作分类模型作为分类器识别人脸图像中的面部动作类别;若所述准确率小于预设准确率时,则增加正样本数量及负样本数量以重新训练所述面部动作分类模型直至所述准确率大于或者等于预设准确率。
所述操作执行模块405根据分析出的用户的面部动作从预设的面部操作指令关系表中查找与所述面部动作相对应的操作指令,并根据确定的操作指令对电子合同进行控制。
本实施方式中,所述面部操作指令关系表中定义有多个面部动作与操作指令的对应关系,其中,眨左眼的面部动作与向左翻页的控制指令对应,眨右眼的面部动作与向右翻页的控制指令对应,皱眉的面部动作与锁定页面的控制指令对应,眨双眼的面部动作与解除锁定页面的控制指令对应,张口的面部动作与保存的控制指令对应。本实施方式中,当确定面部动作为眨左眼时,所述操作执行模块405从预设的面部操作指令关系表中查找与眨左眼相对应的操作指令为向左翻页,并控制所述电子合同向左进行翻页。当确定面部动作为眨右眼时,所述操作执行模块405从预设的面部操作指令关系表中查找与眨右眼相对应的操作指令为向右翻页,并控制所述电子合同向右进行翻页。当确定面部动作为皱眉时,所述操作执行模块405从预设的面部操作指令关系表中查找与皱眉相对应的操作指令为锁定页面,并控制所述电子合同锁定页面。
In another embodiment, the operation execution module 405 may acquire the user's facial features through at least one of a bioelectric sensor, a muscle vibration sensor and an infrared scanning sensor. The information extracted by the bioelectric sensor and the muscle vibration sensor employed in the present application is physiological information of the human body. The infrared scanning sensor employed in the present application performs measurement using the physical properties of infrared light; it can measure the type and amplitude of changes in facial expression, so that the user's expression action is determined from the different types and amplitudes of expression change.
The reminder module 406 is configured to, when the recognized face image does not match the target face image, display a reminder message informing the user that he or she does not have reading permission.
In this embodiment, when it is determined that the face image does not match the target face image, a reminder message is displayed informing the user that he or she does not have reading permission, and the number of mismatch errors between the face image and the target face image is recorded. When the number of mismatch errors exceeds a preset number, an alarm message is issued.
In this embodiment, the setting module 407 is configured to receive the user's setting operation to set the correspondences between facial actions and operation instructions in the facial-operation instruction correspondence table. In a specific embodiment, the setting module 407 acquires a face image carrying a facial action through the image acquisition unit 11, inputs the face image into the facial action classification model to parse out the facial action in the face image, and establishes a correspondence between the parsed facial action and the operation instruction set by the user; in this way, when a facial action is parsed from a face image, the operation execution module 405 controls the electronic contract to operate according to the operation instruction corresponding to that facial action.
Embodiment 3
FIG. 4 is a schematic diagram of an electronic device 6 in an embodiment of the present invention.
In one embodiment, the electronic device 6 may be the user terminal 1 of the present invention. The electronic device 6 includes a memory 61, a processor 62, and a computer program 63 stored in the memory 61 and executable on the processor 62. When executing the computer program 63, the processor 62 implements the steps of the above embodiment of the electronic contract display method, for example steps S201 to S206 shown in FIG. 2; alternatively, when executing the computer program 63, the processor 62 implements the functions of the modules/units of the above embodiment of the electronic contract display apparatus, for example modules 401 to 407 in FIG. 3.
Exemplarily, the computer program 63 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 62 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 63 in the electronic device 6. For example, the computer program 63 may be divided into the acquisition module 401, the face recognition module 402, the display module 403, the facial action recognition module 404, the operation execution module 405, the reminder module 406 and the setting module 407 of FIG. 3; the specific functions of these modules are described in Embodiment 2.
The electronic device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. Those skilled in the art will appreciate that the schematic diagram is merely an example of the electronic device 6 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 6 may further include input/output devices, network access devices, buses and the like.
The processor 62 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor 62 may be any conventional processor; the processor 62 is the control centre of the electronic device 6 and connects the various parts of the entire electronic device 6 through various interfaces and lines.
The memory 61 may be used to store the computer program 63 and/or the modules/units; the processor 62 implements the various functions of the electronic device 6 by running or executing the computer programs and/or modules/units stored in the memory 61 and by invoking the data stored in the memory 61. The memory 61 may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created according to the use of the electronic device 6 (such as audio data and a phone book). In addition, the memory 61 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the modules/units integrated in the electronic device 6 are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. On this understanding, the present invention may implement all or part of the flows of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed electronic device and method may be implemented in other ways. For example, the electronic device embodiments described above are merely illustrative; the division into modules, for instance, is merely a division by logical function, and other divisions are possible in actual implementation.
In addition, the functional modules in the various embodiments of the present invention may be integrated in the same processing module, or each module may exist physically on its own, or two or more modules may be integrated in the same module. The above integrated modules may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential characteristics. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive; the scope of the present invention is defined by the appended claims rather than by the above description, and it is therefore intended that all changes falling within the meaning and range of equivalency of the claims be embraced by the present invention. No reference sign in the claims shall be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other modules or steps, and the singular does not exclude the plural. A plurality of modules or electronic devices recited in an electronic device claim may also be implemented by the same module or electronic device through software or hardware. Terms such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art will understand that the technical solution of the present invention may be modified or equivalently substituted without departing from the spirit and scope of the technical solution of the present invention.

Claims (20)

  1. An electronic contract display method, the method comprising the steps of:
    acquiring a face image;
    extracting a face feature vector of the face image and a face feature vector of a target face image according to a trained preset deep learning model;
    computing a vector distance between the face feature vector of the face image and the face feature vector of the target face image;
    determining, from a pre-established correspondence list of vector distances and similarity values, the similarity value corresponding to the vector distance, wherein the vector distance may be a cosine distance or a Euclidean distance;
    determining, according to the computed similarity value, whether the recognized face image matches the stored target face image; and
    when the recognized face image matches the target face image, unlocking and displaying an electronic contract for a user to view.
  2. The electronic contract display method according to claim 1, wherein the method further comprises: training the preset deep learning model, wherein training the preset deep learning model comprises:
    storing a preset number of face image samples and classifying the face image samples;
    classifying the face image samples according to the users to whom they belong, and labelling each class of face image samples by its owning user;
    after the preset number of face image samples have been classified, inputting the face image samples as training samples into the preset deep learning model for training, and adjusting the weight parameters of the connections between the nodes on each base layer of the preset deep learning model according to the classification results output by the preset deep learning model; and
    after each adjustment, comparing the output classification results with the classification results obtained by labelling the face image samples; if the accuracy reaches a preset accuracy threshold, the weight parameters of the connections between the base-layer nodes of the preset deep learning model are all the optimal weight parameters, and the training of the preset deep learning model is complete.
  3. The electronic contract display method according to claim 1, wherein the method further comprises the steps of:
    acquiring the face image and determining a facial action in the face image using a trained facial action classification model; and
    looking up, from a preset facial-operation instruction correspondence table, the operation instruction corresponding to the user's facial action as analysed, and controlling the electronic contract according to the determined operation instruction.
  4. The electronic contract display method according to claim 3, wherein the training process of the facial action classification model comprises:
    acquiring facial action data of positive samples and facial action data of negative samples, and annotating the facial action data of the positive samples with facial action categories as facial action category labels, wherein the facial action categories include: a left-eye wink category, a right-eye wink category, a frown category, a both-eye blink category and an open-mouth category;
    randomly dividing the facial action data of the positive samples and the facial action data of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, training the facial action classification model with the training set, and verifying the accuracy of the trained facial action classification model with the validation set;
    when the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained facial action classification model as a classifier for recognizing the facial action category in the face image; and
    when the accuracy is less than the preset accuracy, increasing the quantity of facial action data of positive samples and of negative samples and retraining the facial action classification model until the accuracy is greater than or equal to the preset accuracy.
  5. The electronic contract display method according to claim 3, wherein the method further comprises:
    receiving a setting operation of the user to set the correspondences between facial actions and operation instructions in the facial-operation instruction correspondence table.
  6. The electronic contract display method according to claim 1, wherein the method further comprises the step of:
    when the recognized face image does not match the target face image, displaying a reminder message informing the user that he or she does not have reading permission.
  7. The electronic contract display method according to claim 1, wherein the face image comprises a 2D face image and a 3D face image.
  8. An electronic contract display apparatus, the apparatus comprising:
    an acquisition module, configured to acquire a face image;
    a face recognition module, configured to:
    extract a face feature vector of the face image and a face feature vector of a target face image according to a trained preset deep learning model;
    compute a vector distance between the face feature vector of the face image and the face feature vector of the target face image;
    determine, from a pre-established correspondence list of vector distances and similarity values, the similarity value corresponding to the vector distance, wherein the vector distance may be a cosine distance or a Euclidean distance; and
    determine, according to the computed similarity value, whether the recognized face image matches the stored target face image; and
    a display module, configured to, when the recognized face image matches the target face image, unlock and display an electronic contract for a user to view.
  9. An electronic device, the electronic device comprising a processor, the processor being configured to implement the following steps when executing a computer program stored in a memory:
    acquiring a face image;
    extracting a face feature vector of the face image and a face feature vector of a target face image according to a trained preset deep learning model;
    computing a vector distance between the face feature vector of the face image and the face feature vector of the target face image;
    determining, from a pre-established correspondence list of vector distances and similarity values, the similarity value corresponding to the vector distance, wherein the vector distance may be a cosine distance or a Euclidean distance;
    determining, according to the computed similarity value, whether the recognized face image matches the stored target face image; and
    when the recognized face image matches the target face image, unlocking and displaying an electronic contract for a user to view.
  10. The electronic device according to claim 9, wherein the processor is further configured to implement: training the preset deep learning model, wherein, when implementing the training of the preset deep learning model, the processor specifically implements:
    storing a preset number of face image samples and classifying the face image samples;
    classifying the face image samples according to the users to whom they belong, and labelling each class of face image samples by its owning user;
    after the preset number of face image samples have been classified, inputting the face image samples as training samples into the preset deep learning model for training, and adjusting the weight parameters of the connections between the nodes on each base layer of the preset deep learning model according to the classification results output by the preset deep learning model; and
    after each adjustment, comparing the output classification results with the classification results obtained by labelling the face image samples; if the accuracy reaches a preset accuracy threshold, the weight parameters of the connections between the base-layer nodes of the preset deep learning model are all the optimal weight parameters, and the training of the preset deep learning model is complete.
  11. The electronic device according to claim 9, wherein the processor is further configured to implement:
    acquiring the face image and determining a facial action in the face image using a trained facial action classification model; and
    looking up, from a preset facial-operation instruction correspondence table, the operation instruction corresponding to the user's facial action as analysed, and controlling the electronic contract according to the determined operation instruction.
  12. The electronic device according to claim 11, wherein, when implementing the training process of the facial action classification model, the processor is configured to implement:
    acquiring facial action data of positive samples and facial action data of negative samples, and annotating the facial action data of the positive samples with facial action categories as facial action category labels, wherein the facial action categories include: a left-eye wink category, a right-eye wink category, a frown category, a both-eye blink category and an open-mouth category;
    randomly dividing the facial action data of the positive samples and the facial action data of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, training the facial action classification model with the training set, and verifying the accuracy of the trained facial action classification model with the validation set;
    when the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained facial action classification model as a classifier for recognizing the facial action category in the face image; and
    when the accuracy is less than the preset accuracy, increasing the quantity of facial action data of positive samples and of negative samples and retraining the facial action classification model until the accuracy is greater than or equal to the preset accuracy.
  13. The electronic device according to claim 11, wherein the processor is further configured to implement:
    receiving a setting operation of the user to set the correspondences between facial actions and operation instructions in the facial-operation instruction correspondence table.
  14. The electronic device according to claim 9, wherein the processor is further configured to implement:
    when the recognized face image does not match the target face image, displaying a reminder message informing the user that he or she does not have reading permission.
  15. The electronic device according to claim 9, wherein the face image comprises a 2D face image and a 3D face image.
  16. A computer storage medium having a computer program stored thereon, wherein, when the computer program is executed by a processor, the following steps are implemented:
    acquiring a face image;
    extracting a face feature vector of the face image and a face feature vector of a target face image according to a trained preset deep learning model;
    computing a vector distance between the face feature vector of the face image and the face feature vector of the target face image;
    determining, from a pre-established correspondence list of vector distances and similarity values, the similarity value corresponding to the vector distance, wherein the vector distance may be a cosine distance or a Euclidean distance;
    determining, according to the computed similarity value, whether the recognized face image matches the stored target face image; and
    when the recognized face image matches the target face image, unlocking and displaying an electronic contract for a user to view.
  17. The computer storage medium according to claim 16, wherein the processor is further configured to implement: training the preset deep learning model, wherein, when implementing the training of the preset deep learning model, the processor specifically implements:
    storing a preset number of face image samples and classifying the face image samples;
    classifying the face image samples according to the users to whom they belong, and labelling each class of face image samples by its owning user;
    after the preset number of face image samples have been classified, inputting the face image samples as training samples into the preset deep learning model for training, and adjusting the weight parameters of the connections between the nodes on each base layer of the preset deep learning model according to the classification results output by the preset deep learning model; and
    after each adjustment, comparing the output classification results with the classification results obtained by labelling the face image samples; if the accuracy reaches a preset accuracy threshold, the weight parameters of the connections between the base-layer nodes of the preset deep learning model are all the optimal weight parameters, and the training of the preset deep learning model is complete.
  18. The computer storage medium according to claim 16, wherein the processor is further configured to implement:
    acquiring the face image and determining a facial action in the face image using a trained facial action classification model; and
    looking up, from a preset facial-operation instruction correspondence table, the operation instruction corresponding to the user's facial action as analysed, and controlling the electronic contract according to the determined operation instruction.
  19. The computer storage medium according to claim 18, wherein, when implementing the training process of the facial action classification model, the processor is configured to implement:
    acquiring facial action data of positive samples and facial action data of negative samples, and annotating the facial action data of the positive samples with facial action categories as facial action category labels, wherein the facial action categories include: a left-eye wink category, a right-eye wink category, a frown category, a both-eye blink category and an open-mouth category;
    randomly dividing the facial action data of the positive samples and the facial action data of the negative samples into a training set of a first preset proportion and a validation set of a second preset proportion, training the facial action classification model with the training set, and verifying the accuracy of the trained facial action classification model with the validation set;
    when the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained facial action classification model as a classifier for recognizing the facial action category in the face image; and
    when the accuracy is less than the preset accuracy, increasing the quantity of facial action data of positive samples and of negative samples and retraining the facial action classification model until the accuracy is greater than or equal to the preset accuracy.
  20. The computer storage medium according to claim 18, wherein the processor is further configured to implement:
    receiving a setting operation of the user to set the correspondences between facial actions and operation instructions in the facial-operation instruction correspondence table.
PCT/CN2019/121770 2019-04-18 2019-11-28 电子合同显示方法、装置、电子设备及计算机存储介质 WO2020211387A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910315169.9A CN110210194A (zh) 2019-04-18 2019-04-18 电子合同显示方法、装置、电子设备及存储介质
CN201910315169.9 2019-04-18

Publications (1)

Publication Number Publication Date
WO2020211387A1 true WO2020211387A1 (zh) 2020-10-22

Family

ID=67785356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121770 WO2020211387A1 (zh) 2019-04-18 2019-11-28 电子合同显示方法、装置、电子设备及计算机存储介质

Country Status (2)

Country Link
CN (1) CN110210194A (zh)
WO (1) WO2020211387A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148907A (zh) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 图像数据库的更新方法、装置、电子设备和介质
CN112434722A (zh) * 2020-10-23 2021-03-02 浙江智慧视频安防创新中心有限公司 基于类别相似度的标签平滑计算的方法、装置、电子设备及介质
CN112733645A (zh) * 2020-12-30 2021-04-30 平安科技(深圳)有限公司 手写签名校验方法、装置、计算机设备及存储介质
CN113591782A (zh) * 2021-08-12 2021-11-02 北京惠朗时代科技有限公司 一种基于训练式的人脸识别智能保险柜应用方法及系统
TWI812946B (zh) * 2021-05-04 2023-08-21 世界先進積體電路股份有限公司 影像辨識模型系統及維護方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210194A (zh) * 2019-04-18 2019-09-06 深圳壹账通智能科技有限公司 电子合同显示方法、装置、电子设备及存储介质
CN111191207A (zh) * 2019-12-23 2020-05-22 深圳壹账通智能科技有限公司 电子文件的控制方法、装置、计算机设备及存储介质
CN111273798B (zh) * 2020-01-16 2024-02-06 钮永豪 执行鼠标宏指令的方法及系统、执行宏指令的方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170193286A1 (en) * 2015-12-31 2017-07-06 Pinhole (Beijing) Technology Co., Ltd. Method and device for face recognition in video
CN108229269A (zh) * 2016-12-31 2018-06-29 深圳市商汤科技有限公司 人脸检测方法、装置和电子设备
CN108363999A (zh) * 2018-03-22 2018-08-03 百度在线网络技术(北京)有限公司 基于人脸识别的操作执行方法和装置
CN108446674A (zh) * 2018-04-28 2018-08-24 平安科技(深圳)有限公司 电子装置、基于人脸图像与声纹信息的身份识别方法及存储介质
CN109117801A (zh) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 人脸识别的方法、装置、终端及计算机可读存储介质
CN110210194A (zh) * 2019-04-18 2019-09-06 深圳壹账通智能科技有限公司 电子合同显示方法、装置、电子设备及存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221612A (zh) * 2007-01-11 2008-07-16 上海银晨智能识别科技有限公司 利用人脸识别进行加密解密电子文档的方法
CN103577764A (zh) * 2012-07-27 2014-02-12 国基电子(上海)有限公司 文档加解密方法及具有文档加解密功能的电子装置
CN102999164B (zh) * 2012-11-30 2016-08-03 广东欧珀移动通信有限公司 一种电子书翻页控制方法及智能终端
CN104537289A (zh) * 2014-12-18 2015-04-22 乐视致新电子科技(天津)有限公司 保护终端设备中指定目标的方法和装置
CN104899579A (zh) * 2015-06-29 2015-09-09 小米科技有限责任公司 人脸识别方法和装置
CN107766785B (zh) * 2017-01-25 2022-04-29 丁贤根 一种面部识别方法
CN107862292B (zh) * 2017-11-15 2019-04-12 平安科技(深圳)有限公司 人物情绪分析方法、装置及存储介质
CN109254661B (zh) * 2018-09-03 2022-05-03 Oppo(重庆)智能科技有限公司 图像显示方法、装置、存储介质及电子设备
CN109359456A (zh) * 2018-09-21 2019-02-19 百度在线网络技术(北京)有限公司 文件的安全管理方法、装置、设备及计算机可读介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170193286A1 (en) * 2015-12-31 2017-07-06 Pinhole (Beijing) Technology Co., Ltd. Method and device for face recognition in video
CN108229269A (zh) * 2016-12-31 2018-06-29 深圳市商汤科技有限公司 人脸检测方法、装置和电子设备
CN108363999A (zh) * 2018-03-22 2018-08-03 百度在线网络技术(北京)有限公司 基于人脸识别的操作执行方法和装置
CN108446674A (zh) * 2018-04-28 2018-08-24 平安科技(深圳)有限公司 电子装置、基于人脸图像与声纹信息的身份识别方法及存储介质
CN109117801A (zh) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 人脸识别的方法、装置、终端及计算机可读存储介质
CN110210194A (zh) * 2019-04-18 2019-09-06 深圳壹账通智能科技有限公司 电子合同显示方法、装置、电子设备及存储介质

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148907A (zh) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 图像数据库的更新方法、装置、电子设备和介质
CN112434722A (zh) * 2020-10-23 2021-03-02 浙江智慧视频安防创新中心有限公司 基于类别相似度的标签平滑计算的方法、装置、电子设备及介质
CN112434722B (zh) * 2020-10-23 2024-03-19 浙江智慧视频安防创新中心有限公司 基于类别相似度的标签平滑计算的方法、装置、电子设备及介质
CN112733645A (zh) * 2020-12-30 2021-04-30 平安科技(深圳)有限公司 手写签名校验方法、装置、计算机设备及存储介质
CN112733645B (zh) * 2020-12-30 2023-08-01 平安科技(深圳)有限公司 手写签名校验方法、装置、计算机设备及存储介质
TWI812946B (zh) * 2021-05-04 2023-08-21 世界先進積體電路股份有限公司 影像辨識模型系統及維護方法
CN113591782A (zh) * 2021-08-12 2021-11-02 北京惠朗时代科技有限公司 一种基于训练式的人脸识别智能保险柜应用方法及系统

Also Published As

Publication number Publication date
CN110210194A (zh) 2019-09-06

Similar Documents

Publication Publication Date Title
WO2020211387A1 (zh) 电子合同显示方法、装置、电子设备及计算机存储介质
RU2642369C2 (ru) Аппарат и способ распознавания отпечатка пальца
CN109461167B (zh) 图像处理模型的训练方法、抠图方法、装置、介质及终端
US20200327311A1 (en) Image clustering method and apparatus, electronic device, and storage medium
US9742764B1 (en) Performing biometrics in uncontrolled environments
US20200082157A1 (en) Periocular facial recognition switching
US20200104567A1 (en) Obstruction detection during facial recognition processes
US11113510B1 (en) Virtual templates for facial recognition
CN109920174B (zh) 图书借阅方法、装置、电子设备及存储介质
CN109800325A (zh) 视频推荐方法、装置和计算机可读存储介质
EP2698742A2 (en) Facial recognition similarity threshold adjustment
CN109599187A (zh) 一种在线问诊的分诊方法、服务器、终端、设备及介质
CN107527059A (zh) 文字识别方法、装置及终端
US10990805B2 (en) Hybrid mode illumination for facial recognition authentication
CN106056083B (zh) 一种信息处理方法及终端
WO2021036309A1 (zh) 图像识别方法、装置、计算机装置及存储介质
CN106062871A (zh) 使用所选择的群组样本子集来训练分类器
CN111126347B (zh) 人眼状态识别方法、装置、终端及可读存储介质
JP2021515321A (ja) メディア処理方法、その関連装置及びコンピュータプログラム
CN112417121A (zh) 客户意图识别方法、装置、计算机设备及存储介质
CN112883980A (zh) 一种数据处理方法及系统
CN111784665A (zh) 基于傅里叶变换的oct图像质量评估方法、系统及装置
US20230142898A1 (en) Device and network-based enforcement of biometric data usage
WO2019033518A1 (zh) 信息获取方法、装置、计算机可读存储介质及终端设备
CN113220828A (zh) 意图识别模型处理方法、装置、计算机设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19925207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31/01/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19925207

Country of ref document: EP

Kind code of ref document: A1