WO2019075840A1 - Identity verification method, apparatus, storage medium and computer device - Google Patents

Identity verification method, apparatus, storage medium and computer device

Info

Publication number
WO2019075840A1
WO2019075840A1 (PCT/CN2017/112485)
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
frame
data
target image
Prior art date
Application number
PCT/CN2017/112485
Other languages
English (en)
French (fr)
Inventor
郑佳
赵骏
Original Assignee
平安科技(深圳)有限公司
Application filed by 平安科技(深圳)有限公司
Publication of WO2019075840A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/30 Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/08 Network architectures or network communication protocols for network security for authentication of entities

Definitions

  • the present application relates to the field of network security technologies, and in particular, to an identity verification method, apparatus, storage medium, and computer device.
  • the traditional ID-card-photo-based authentication method can only obtain the user's personal information by recognizing the ID card photo with OCR technology; it cannot prevent a malicious user from stealing another person's ID card and registering with a photo of that card, so this kind of authentication is less secure.
  • an authentication method, apparatus, storage medium, and computer device are provided.
  • An authentication method includes: receiving an identity verification instruction; acquiring a multi-frame target image through an imaging device according to the identity verification instruction; extracting a face region image and a document region image from each frame of the target image; parsing the face region images in the multi-frame target image to generate face change data; determining whether the face change data matches preset change data; and, when the face change data matches the preset change data, verifying the corresponding document region image according to the multi-frame face region images and determining whether the identity verification passes according to the verification result.
  • An identity verification device comprises: an identity verification instruction receiving module, configured to receive an identity verification instruction; a target image acquisition module, configured to acquire a multi-frame target image through an imaging device according to the identity verification instruction; a region image extraction module, configured to extract a face region image and a document region image from each target image; a face change data matching module, configured to parse the face region images in the multi-frame target image to generate face change data; and an identity verification module, configured to determine whether the face change data matches preset change data, and, when they match, to verify the corresponding document region image according to the multi-frame face region images and determine whether the identity verification passes according to the verification result.
  • One or more non-transitory readable storage media store computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: receiving an identity verification instruction; acquiring a multi-frame target image through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of the target image; parsing the face region images in the multi-frame target image to generate face change data; determining whether the face change data matches preset change data; and, when the face change data matches the preset change data, verifying the corresponding document region image according to the multi-frame face region images and determining whether the identity verification passes according to the verification result.
  • A computer device comprises a memory and one or more processors, the memory storing computer readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps: receiving an identity verification instruction; acquiring a multi-frame target image through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of the target image; parsing the face region images in the multi-frame target image to generate face change data; determining whether the face change data matches preset change data; and, when the face change data matches the preset change data, verifying the corresponding document region image according to the multi-frame face region images and determining whether the identity verification passes according to the verification result.
  • FIG. 1 is an application environment diagram of an identity verification method in an embodiment
  • FIG. 2 is a flow chart of an identity verification method in an embodiment
  • Figure 3 is a schematic illustration of a target image in one embodiment
  • FIG. 5 is a flowchart of an identity verification method in still another embodiment
  • FIG. 6 is a structural block diagram of an identity verification apparatus in an embodiment
  • FIG. 7 is a structural block diagram of an identity verification apparatus in another embodiment
  • Figure 8 is a diagram showing the internal structure of a computer device in an embodiment.
  • A first threshold may be referred to as a second threshold, and similarly the second threshold may be referred to as the first threshold, without departing from the scope of the present application. Both the first threshold and the second threshold are thresholds, but they are not the same threshold.
  • the identity verification method provided by the embodiment of the present application can be applied to an application environment as shown in FIG. 1.
  • the application environment includes a terminal 102 and a server 104.
  • The terminal 102 can be a mobile phone, a tablet, a personal digital assistant, or a smart device.
  • the server 104 may be a stand-alone physical server or a server cluster composed of a plurality of physical servers.
  • the terminal 102 can be used to perform the identity verification method provided by the embodiment of the present application.
  • the server 104 may store data related to identity verification, including but not limited to identity information data, verification face data, and the like.
  • the terminal 102 can be connected to the server 104 over a network and perform data transmission.
  • the server 104 can receive the identity information data sent by the terminal 102, and search for the corresponding verification face data according to the identity information data, and the terminal 102 can also obtain the verification face data sent by the server 104.
  • an identity verification method is provided.
  • the method can be used in the terminal 102 in the application environment as shown in FIG. 1, the method includes:
  • Step S202 receiving an identity verification instruction.
  • the identity verification instruction can be triggered when a preset authentication operation is detected.
  • the operation includes, but is not limited to, one or more of a preset click operation, a sliding operation, and a shaking operation.
  • a corresponding authentication interface may be provided, where the interface includes a corresponding control for receiving an authentication instruction, and when detecting a click operation acting on the control, triggering generation of an authentication instruction .
  • Step S204 acquiring a multi-frame target image by the imaging device according to the identity verification instruction.
  • An imaging device can be preset in the terminal, and the terminal can also be externally connected to the imaging device.
  • the camera device can be a mobile phone camera, a camera, a video camera, and the like.
  • the target image refers to an image including target information or a target subject.
  • the target subject may be two objects of a document and a face, and an image including both the document and the face may be used as the target image.
  • the image may be an image directly captured by the imaging device by scanning the visible area, or may be an image captured by the captured video stream.
  • After receiving the identity verification instruction, the camera can start shooting; the image acquired in real time is displayed on the screen, and an image containing the target subject can be taken as the target image.
  • the terminal may also display corresponding prompt information, so that the user can determine the target photographic subject, the position of the target photographic subject in the image, the action that the target photographic subject needs to perform, and the like according to the prompt information.
  • For example, FIG. 3 provides a schematic diagram of a target image in one embodiment. For the two subjects, the document and the face, a document-area dashed box 302 and a face-area dashed box 304 may be displayed respectively; when the corresponding target subject is detected inside a dashed box in the captured image, that image may be used as the target image.
  • acquiring the multi-frame target image by the camera according to the identity verification instruction includes: acquiring, according to the identity verification instruction, the multi-frame image by the camera device according to the preset time interval; and detecting whether the image of each frame is The resolution is greater than the preset resolution.
  • the image can be continuously acquired by the camera device at preset time intervals until the preset number of images is reached. For example, an image is taken every 0.1 or 0.2 seconds until 20 images are acquired. It is also possible to capture a video of a preset duration by the camera device, and perform image capture on the captured video stream at preset time intervals. Further, for the image acquisition process, the image captured by the camera device may be displayed on the display screen in real time, and the control for starting to acquire the image and the control for stopping the image acquisition may be provided on the corresponding interface, when the click operation acting on the control is detected , perform the corresponding operation of starting to acquire an image or stop acquiring an image.
  • After the multi-frame images are obtained, the resolution of each frame is detected. When the resolution of every frame is greater than the preset resolution, each frame is taken as a target image; when a frame below the preset resolution is detected, the method returns to acquiring images through the camera device at the preset time interval until the resolution of every frame is greater than the preset resolution.
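As a sketch of the acquisition loop described above, the following Python illustrates filtering frames by a preset resolution and retrying a bounded number of rounds. The camera interface, the frame representation as (width, height) tuples, and the threshold values are assumptions for illustration, not part of the patent.

```python
# Sketch of the capture loop: `capture_frame` is a hypothetical
# stand-in for the device camera API; here it returns (width, height)
# tuples so only the filtering logic is shown.

MIN_RESOLUTION = (640, 480)   # assumed preset resolution threshold
FRAME_COUNT = 20              # e.g. one frame every 0.1-0.2 s, 20 total

def meets_resolution(frame, minimum=MIN_RESOLUTION):
    """True when the frame is at least the preset resolution."""
    width, height = frame[:2]
    return width >= minimum[0] and height >= minimum[1]

def collect_target_images(capture_frame, count=FRAME_COUNT, max_rounds=4):
    """Capture rounds of frames until every frame in a round passes
    the resolution check, up to max_rounds attempts (mirrors the
    preset number of returns mentioned later in the text)."""
    for _ in range(max_rounds):
        frames = [capture_frame() for _ in range(count)]
        if all(meets_resolution(f) for f in frames):
            return frames
    return None  # caller may show a "capture failed" prompt

# Usage with a fake camera that always returns 1280x720 frames:
frames = collect_target_images(lambda: (1280, 720), count=5)
```

A real implementation would pull frames from the camera or a captured video stream at the preset time interval instead of calling a lambda.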
  • Step S206 extracting a face area image and a document area image in each target image.
  • the target image contains two target subjects, a document and a face.
  • the face area image refers to an area image including a face portion in the target image
  • the document area image refers to an area image of the target image including the part of the document.
  • the documents include but are not limited to identity cards, driver's licenses, passports, etc.
  • When the target subject is detected in the target image, the image portions occupied by the document part and the face part may be segmented out of the target image by a preset image segmentation algorithm.
  • the image segmentation algorithm may be a threshold-based segmentation method, a region-based segmentation method, an edge-based segmentation method, or a segmentation method based on a specific theory.
  • the face area and the document area in the target image may also be set in advance.
  • For example, the document-area dashed box 302 in FIG. 3 is used as the document area, and the face-area dashed box 304 is used as the face area. When the corresponding target subject is detected in a preset area, the image of that preset area is directly extracted as the face region image or the document region image.
  • Step S208 parsing the face region image in the multi-frame target image to generate face change data.
  • the face change data refers to data reflecting a change process of a face region image in a multi-frame target image. For example, expression change or face movement trajectory.
  • a face area image is included in each target image. After the face area image in each frame of the target image is extracted, the face change data is generated according to the difference of the image of the face area of each adjacent two frames.
  • In step S210, it is determined whether the face change data matches the preset change data.
  • The generated face change data is compared with the preset change data; if they match, the process goes to step S212, otherwise it returns to step S204.
  • the multi-frame target image may be acquired again by the camera device until the face change data matching the preset change data is acquired.
  • The number of returns can also be preset. For the same identity verification instruction, when the number of returns reaches the preset number, the method no longer returns; for example, the preset number of returns may be 4. At that point, image acquisition is stopped and prompt information indicating the mismatch is displayed.
  • Step S212 verifying the corresponding document area image according to the multi-frame face area image, and determining whether to pass the identity verification according to the verification result.
  • The document region image in the corresponding target image can be verified according to the face region image in each frame of the target image.
  • The document in the document region image may include a document photo, and when the face region image matches the document photo, the verification can succeed.
  • Since the face subject can itself be dynamic while the document subject is static, the document region image in one of the target images can be extracted as the image for verification, and one or more frames of face region images are matched against it.
  • When the document region image passes verification, it may be determined that the identity verification passes; otherwise, it does not pass.
  • In the above method, the multi-frame target image is acquired according to the received identity verification instruction, and each target image includes both a face region image and a document region image. Face change data is generated by parsing the face region images in the target images, and whether the face region images meet the requirements is determined from the match between the face change data and the preset change data. When the face change data matches the preset change data, that is, when the face region images in the multi-frame target image meet the requirements, the document region image is verified according to the face region images, and the authentication result is then determined according to the verification result of the document region image.
  • The face change data includes an expression change. Parsing the face region images in the multi-frame target image to generate the face change data comprises: extracting expression feature data of the face region image in each frame of the target image; and calculating the expression change degree according to the expression feature data of every two adjacent frames of the target image.
  • the expression feature data refers to data that reflects the face features included in the face region image.
  • facial features include, but are not limited to, facial features, facial shapes, and the like.
  • the feature points of the face in the face region image may be extracted according to a preset algorithm, and the expression feature data is generated according to the data associated with those feature points.
  • the expression change degree refers to the degree of difference in the facial expression displayed by the image of the face region between the two target images.
  • the expression feature data of each adjacent two frames of the target image may be calculated according to the acquisition time sequence of the multi-frame target image, and the expression change degree is obtained. When it is detected that the expression feature data corresponding to the face region image in the two target images is inconsistent, the expression change degree takes a value greater than zero.
  • The expression feature data may also be expressed as feature vectors; for the two feature vectors corresponding to adjacent frames of the target image, the cosine similarity may be calculated with the vector dot-product formula, and a lower cosine similarity indicates a larger expression difference.
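The cosine-similarity comparison of adjacent frames can be sketched as below. Defining the change degree as one minus cosine similarity is an illustrative choice consistent with "lower similarity means larger expression difference"; the patent does not fix a formula.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity via the vector dot-product formula."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def expression_change(prev_features, curr_features):
    """Change degree between two adjacent frames: 0 for identical
    feature vectors, approaching 1 as the expressions diverge."""
    return 1.0 - cosine_similarity(prev_features, curr_features)

def change_series(feature_frames):
    """Per-pair change degrees over the frames in acquisition order."""
    return [expression_change(a, b)
            for a, b in zip(feature_frames, feature_frames[1:])]
```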
  • the facial expression recognition model may be pre-built, and the acquired facial region image is directly input into the facial expression recognition model, and the corresponding facial expression feature data is directly generated.
  • A 3D face model can also be established in advance, and real 3D facial expression motion can be learned through a deep-learning algorithm. After the feature points are extracted from each frame of the face region image, they are mapped onto the 3D face model to form a three-dimensional model corresponding to that frame, and the expression feature data is generated from the three-dimensional face model reconstructed from the feature points.
  • Determining whether the face change data matches the preset change data includes: determining whether each expression change degree is within a preset change degree interval.
  • the preset change degree interval is a change degree interval formed by the first threshold value and the second threshold value, and refers to a value range of the change degree corresponding to the real user.
  • the first threshold refers to the minimum degree of change generated by the real user in the two-frame target image
  • the second threshold refers to the maximum degree of change generated by the real user in the two-frame target image.
  • If each expression change degree of the face region images in two frames of the target image is smaller than the first threshold, the corresponding expression feature data is essentially unchanged, and there is a possibility that someone is impersonating the real user with a static image; if the expression change degree between two frames is greater than the second threshold, the acquired target images are not coherent. Therefore, whether each expression change degree is within the preset change-degree interval generated from the first threshold and the second threshold is determined. When every expression change degree is within the interval, the face change data is determined to match the preset change data; otherwise, it is determined not to match.
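A minimal sketch of this matching rule, assuming illustrative threshold values (the patent leaves the thresholds unspecified):

```python
# The first threshold rejects a static photo (no change at all);
# the second rejects incoherent capture (too much change between
# adjacent frames). Both values below are assumptions.

FIRST_THRESHOLD = 0.05   # minimum change a live user produces
SECOND_THRESHOLD = 0.60  # maximum change between coherent frames

def matches_preset_change(change_degrees,
                          low=FIRST_THRESHOLD, high=SECOND_THRESHOLD):
    """True when every adjacent-frame change degree falls inside
    the preset change-degree interval [low, high]."""
    return all(low <= degree <= high for degree in change_degrees)
```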
  • In the above method, the authenticity of the face corresponding to the face region images can be ensured according to the dynamic change of the face region images in the multi-frame target image, which avoids the risk of a malicious user stealing another person's static photo for authentication and thereby improves the security of identity verification.
  • Before acquiring the multi-frame target image through the camera according to the identity verification instruction, the method further includes: acquiring and displaying random expression prompt information. After the expression feature data of the face region image in each target image is extracted, the method further includes: detecting whether the extracted expression feature data contains data matching the random expression prompt information; and, when such data exists, calculating the expression change degree according to the expression feature data of every two adjacent frames of the target image.
  • the random expression prompt information refers to the prompt information corresponding to the facial expression randomly obtained by the terminal from the preset expression database.
  • the user can make the corresponding expression facing the camera device according to the displayed random expression prompt information.
  • the preset expression database stores relevant data of various facial expressions, and the facial expression includes, but is not limited to, a combination of one or more of a smile, a wink, a mouth open, a frown, a tongue, a nod, a shaking head, and the like.
  • the random expression is determined by a random algorithm, and the corresponding random expression prompt information is obtained.
  • the text information corresponding to the random expression may be used as the random expression prompt information, and the random expression determined by the line drawing may also be used as the random expression prompt information.
  • The preset expression feature data corresponding to the random expression prompt is obtained. After the expression feature data of the face region image in each frame of the target image is extracted, it is compared with the preset expression feature data to determine whether they match. For example, for a smile expression, when the mouth curvature in the expression feature data reaches a preset radian threshold, it can be determined that the expression feature data of that frame matches the preset expression feature data, and the expression change degree can then be calculated.
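An illustrative check for the smile example above, assuming the expression feature data carries a mouth-curvature value in radians; the field name, threshold, and dict representation are assumptions, not the patent's data format.

```python
SMILE_RADIAN_THRESHOLD = 0.3  # assumed preset radian threshold

def matches_prompt(expression_features, prompt):
    """True when one frame's feature data matches the random prompt."""
    if prompt == "smile":
        return expression_features.get("mouth_curvature", 0.0) >= SMILE_RADIAN_THRESHOLD
    # other prompts (wink, open mouth, nod, ...) would add their own rules
    return False

def frames_matching_prompt(feature_frames, prompt):
    """Frames whose feature data matches the displayed random prompt;
    the expression change degree is only computed when a match exists."""
    return [f for f in feature_frames if matches_prompt(f, prompt)]
```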
  • In one embodiment, the face change data includes a face movement trajectory. Before acquiring the multi-frame target image through the camera according to the identity verification instruction, the method further includes: acquiring and displaying random face trajectory prompt information. Parsing the face region images in the multi-frame target image to generate the face change data includes: extracting face position data of the face region image in each frame of the target image; and generating the face movement trajectory according to the face position data of each frame. Determining whether the face change data matches the preset change data includes: when the face movement trajectory matches the random face trajectory prompt information, determining that the face change data matches the preset change data.
  • the face movement trajectory refers to a movement trajectory formed by a position of a multi-frame face area image in a corresponding target image.
  • The text of the random face trajectory may be used as the random face trajectory prompt information, or the random face trajectory may be indicated by a line drawing.
  • the text information can be “shifted to the left”, “shifted to the right”, “moved up”, etc., and the direction and distance of the face movement can also be displayed by arrows.
  • the face position data refers to data capable of indicating the position of the face area image in the corresponding target image. For example, the location data of the face feature points in the face area image.
  • the facial feature point may be a combined feature point of one or more of an eye corner, a corner of the mouth, and a tip of the nose.
  • The face movement trajectory can be generated by a preset curve-fitting algorithm. It is determined whether the face movement trajectory matches the random face trajectory corresponding to the random face trajectory prompt information. If so, the face change data is determined to match the preset change data, and the corresponding document region image may be verified according to the multi-frame face region images, with the authentication result determined according to the verification result; otherwise, the face change data is determined not to match the preset change data.
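The trajectory-matching step can be sketched by classifying the dominant movement direction of the per-frame face positions and comparing it with the prompt. The direction labels and the minimum travel distance are illustrative assumptions rather than the patent's curve-fitting method.

```python
MIN_DISTANCE = 30  # assumed pixels the face must travel to count as movement

def movement_direction(positions, min_distance=MIN_DISTANCE):
    """Dominant direction of a list of (x, y) feature-point positions
    in frame order, e.g. the nose tip across the target images."""
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    if max(abs(dx), abs(dy)) < min_distance:
        return "none"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def trajectory_matches(positions, prompt_direction):
    """True when the observed trajectory matches the random prompt,
    e.g. prompt_direction == "left" for a "shift to the left" prompt."""
    return movement_direction(positions) == prompt_direction
```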
  • the method before acquiring the multi-frame target image by the camera according to the identity verification instruction, the method further includes: acquiring and displaying the random document track prompt information; before verifying the corresponding document area image according to the multi-frame face area image, The method further includes: extracting document location data of the document area image in each frame of the target image; generating a document movement trajectory according to the document position data in each frame of the target image; determining whether the document movement trajectory matches the document trajectory prompt information; and when the document movement trajectory and random When the document track prompt information matches, and the face change data matches the preset change data, performing verification on the corresponding document area image according to the multi-frame face area image.
  • the document movement track refers to a movement track formed by the position of the multi-frame certificate area image in the corresponding target image.
  • The text of the random document trajectory may be used as the random document trajectory prompt information, or the random document trajectory may be indicated by a line drawing.
  • the text information can be “shifted to the left”, “shifted to the right”, “moved up”, etc., and the direction and distance of movement of the document can also be displayed by arrows.
  • the document location data refers to data capable of indicating the location of the document area image in the corresponding target image.
  • the location data of the feature points of one or more characters in the document in the corresponding target image may be extracted as the document location data.
  • the document movement trajectory can be generated according to a preset curve fitting algorithm. Determining whether the document movement track matches the random document track corresponding to the random document track prompt information, and if yes, and the face change data matches the preset change data, performing verification on the corresponding document area image according to the multi-frame face area image.
  • the foregoing step S212 includes:
  • Step S402 extracting the document photo feature data of the document area image in each target image and the face feature data of the face area image in the corresponding target image.
  • the document corresponding to the image of the document area in the target image contains the photo of the ID.
  • The document photo feature data refers to the feature data of the document photo extracted from the document region image according to face recognition technology.
  • The face feature data refers to the feature data of the face extracted from the face region image according to face recognition technology.
  • the face recognition technology includes but is not limited to a recognition algorithm based on face feature points, a recognition algorithm based on a full face image, a template-based recognition algorithm, an algorithm using a neural network for recognition, and the like.
  • a face recognition algorithm that utilizes a neural network for recognition may be employed to train a neural network for identifying images of face regions.
  • The face region images acquired by the camera device may be used as a training sample set, and document region images of the same users may be used as a training verification set. The neural network is trained on the training sample set and the training verification set to extract the corresponding document photo feature data and face feature data, until its recognition rate for the document photo feature data and the face feature data is greater than a preset recognition-rate threshold.
  • the acquired target image is input into the trained neural network for identification, and is used for extracting the document photo feature data and the face feature data in the target image, and identifying whether the face region image and the document region image in the target image match.
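Once the trained network produces feature vectors for the document photo and the live face, the matching decision can be sketched as a similarity threshold over the two vectors. The use of cosine similarity and the threshold value are assumptions; the patent only states that a match is identified.

```python
import math

def embedding_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

MATCH_THRESHOLD = 0.8  # assumed recognition threshold

def photo_matches_face(doc_photo_features, face_features,
                       threshold=MATCH_THRESHOLD):
    """True when the document-photo and live-face feature vectors,
    e.g. produced by the trained network, are similar enough."""
    return embedding_similarity(doc_photo_features, face_features) >= threshold
```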
  • Step S404 determining whether there is a target image whose document photo feature data matches the face feature data.
  • The document photo feature data and the face feature data may be extracted from each frame of the target image.
  • When there is a target image whose document photo feature data matches the face feature data, step S406 is performed; otherwise, the process proceeds to step S407 for processing.
  • Step S406 extracting identity information data included in the target image that matches the document photo feature data and the face feature data.
  • the identity information data refers to data related to the identity information contained in the image of the document area.
  • the identity information includes, but is not limited to, a name, an ID number, a date of birth, a home address, and the like.
  • the identity information data contained in the document region image can be recognized by OCR technology.
  • OCR technology uses optical input methods such as scanning to convert the text of bills, newspapers, books, manuscripts, and other printed matter into image information, and then uses image recognition technology to convert that image information into usable computer input. Because the recognition rate is limited, multiple candidate results may be recognized.
  • the plurality of identified identification results may be sorted according to information legality, similarity, number of occurrences, etc., and the optimal result may be presented to the user for confirmation. Among them, illegal results can be excluded according to the legality of the information.
  • the weighting calculation may be performed according to the similarity, the number of occurrences, and the like of each recognition result, and the recognition result exceeding the preset threshold is displayed to the user as optional information. If there is more than one optional information, multiple optional information can be displayed in the drop-down list for the user to select. The optional information selected by the user is used as the identity information data contained in the target image.
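The candidate-ranking step above can be sketched as follows: drop candidates that fail a legality rule, then score the rest by a weighted combination of similarity and occurrence count. The 18-character ID legality rule, the weights, and the score threshold are illustrative assumptions.

```python
SIMILARITY_WEIGHT = 0.7  # assumed weight for OCR similarity
OCCURRENCE_WEIGHT = 0.3  # assumed weight for occurrence count
SCORE_THRESHOLD = 0.5    # assumed preset threshold for optional info

def is_legal_id(candidate):
    """Very rough legality check: an 18-character ID number whose
    first 17 characters are digits (the last may be a check digit or X)."""
    return len(candidate) == 18 and candidate[:17].isdigit()

def rank_candidates(candidates):
    """candidates: list of (text, similarity in [0, 1], occurrences).
    Returns legal candidates scoring above the threshold, best first,
    for display to the user as optional information."""
    legal = [c for c in candidates if is_legal_id(c[0])]
    max_occ = max((c[2] for c in legal), default=1) or 1
    scored = [(text,
               SIMILARITY_WEIGHT * sim + OCCURRENCE_WEIGHT * (occ / max_occ))
              for text, sim, occ in legal]
    scored = [s for s in scored if s[1] >= SCORE_THRESHOLD]
    return sorted(scored, key=lambda s: s[1], reverse=True)
```

If more than one candidate survives, the results would populate the drop-down list mentioned above, with the top-scored entry preselected.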
  • Step S408 obtaining verification face data corresponding to the identity information data in the document database.
  • the document database refers to the preset data related to the documents of all users, including but not limited to identity information data and verification face data.
  • the verification face data refers to face data used for verifying the authenticity of the document in the image of the document area.
  • the verification face data may be the feature data of the document photos in the pre-extracted documents.
  • the image of the document area may be an ID card image; after the ID number in the ID card image is recognized, the verification face data corresponding to the ID number may be obtained from the public security system database.
  • step S410 it is determined whether the verification face data and the document photo feature data match.
  • If the verification face data matches the document photo feature data, the process proceeds to step S412; otherwise, the process proceeds to step S414.
  • step S412 it is determined that the authentication is passed.
  • step S414 it is determined that the authentication is not passed.
  • the verification face data in the document database is obtained according to the identity information data, and the authenticity of the document is verified against the verification face data, which avoids the risk of a malicious user fraudulently using another person's document with his or her own photo affixed to it, and thereby improves the security of online identity verification.
  • another identity verification method is provided; referring to FIG. 5, the method includes the following steps:
  • Step S502 receiving an identity verification instruction.
  • the terminal may provide a control for triggering an authentication command to trigger an authentication command upon detecting a trigger operation on the control.
  • Step S504 acquiring and displaying random expression prompt information.
  • the expression database is pre-configured in the terminal, and after the random expression prompt information is obtained according to the preset random algorithm, the random expression prompt information is displayed through the display screen.
  • for example, the text prompt "smile" may be displayed on the display screen, and a line drawing of a smiling face may be displayed in a preset area, so that the user can make the corresponding smiling expression according to the random expression prompt information.
  • Step S506 Acquire a multi-frame image by the camera device according to a preset time interval within a preset time period.
  • a camera device is pre-configured in the terminal, and a plurality of frames of images can be acquired by the camera device every 0.1 seconds, and the image is displayed in real time through the display screen.
  • Step S508 detecting whether the resolution of each frame of the image is greater than the preset resolution.
  • If not, the process returns to step S506 until the resolution of each frame of the image is greater than the preset resolution; if yes, the process proceeds to step S510.
  • step S510 each frame image is taken as a target image.
  • the target image refers to an image containing two target objects of the document and the face, and the resolution of each target image is greater than the preset resolution.
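Steps S506–S510 amount to a capture-and-check loop: grab a batch of frames at the preset interval, and restart the whole batch if any frame falls below the preset resolution. A minimal sketch, assuming a caller-supplied `capture_frame` callback and illustrative defaults (0.1 s interval, 20 frames, 640×480 floor):

```python
# Sketch of the S506-S510 loop; `capture_frame`, the interval, the frame
# count, and the resolution floor are assumptions for illustration.

def acquire_target_images(capture_frame, interval=0.1, num_frames=20,
                          min_width=640, min_height=480):
    """Capture batches of frames until every frame in a batch passes the
    resolution check; that batch becomes the set of target images."""
    while True:
        frames = [capture_frame(i * interval) for i in range(num_frames)]
        if all(f["width"] >= min_width and f["height"] >= min_height
               for f in frames):
            return frames  # S510: every frame becomes a target image
        # Otherwise discard the batch and return to S506.
```

The whole batch is discarded on failure, matching the text's "return to step S506" rather than re-capturing only the offending frame.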
  • Step S512 extracting a face area image and a document area image in each target image.
  • the face area image refers to the region of the target image containing the face target subject, and the document area image refers to the region of the target image containing the document target subject.
  • the face area image and the document area image in the target image can be segmented and extracted by a preset image segmentation method.
  • Step S514 extracting facial expression feature data of the face region image in each frame of the target image.
  • a face recognition algorithm based on facial feature points may be used to extract key features such as eye shape and mouth curvature from the face region image of each frame; the key features are analyzed and calculated to generate the expression feature data corresponding to each frame of face region image.
  • Step S516 detecting whether there is expression feature data matching the random expression prompt information in the extracted expression feature data.
  • If yes, go to step S518; if no, go back to step S504.
  • for example, if the acquired random expression prompt information is a smile, it is detected whether expression feature data matching a smile exists among the extracted expression feature data.
  • Step S518, calculating the expression change degree according to the expression feature data of the adjacent target image of each two frames.
  • the expression change degree refers to the degree of difference between the facial expressions displayed by the face region images of two target images. For example, it can be calculated from the difference between the expression feature data of the face region images in the two target images.
  • Step S520: determine whether each expression change degree is within the preset change degree interval. If yes, go to step S522; if no, go to step S534.
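Steps S518–S520 can be sketched as below. Following the later description, the change degree between adjacent frames is taken as the reciprocal of the cosine similarity of their feature vectors; the interval bounds (1.01, 2.0) are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of S518 (expression change degree) and S520 (interval check).
# The reciprocal-of-cosine-similarity measure follows the description;
# the interval bounds are illustrative assumptions.

def expression_change_degrees(feature_vectors):
    """One change degree per pair of adjacent frames."""
    degrees = []
    for a, b in zip(feature_vectors, feature_vectors[1:]):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        # 1 / cosine similarity: identical expressions give exactly 1.0
        degrees.append(norm / dot if dot else float("inf"))
    return degrees

def within_interval(degrees, low=1.01, high=2.0):
    # Below `low` the expression barely changes (a static photo is suspected);
    # above `high` the frames are not continuous.
    return all(low <= d <= high for d in degrees)
```

Identical adjacent vectors yield the minimum change degree of exactly 1.0, which falls below the lower bound, so a static photo held in front of the camera fails the check.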
  • Step S522 extracting the document photo feature data of the document area image in each target image and the face feature data of the face area image in the corresponding target image.
  • the document photo feature data of the document area image and the face feature data of the face area image may be extracted by the same face recognition algorithm, and the two are matched.
  • Step S524: determine whether there is a target image whose document photo feature data matches the face feature data. If yes, go to step S526; if no, go to step S534.
  • Step S526, extracting identity information data included in the target image that matches the document photo feature data and the face feature data.
  • the ID may be an ID card.
  • the identity information data contained in the target image may be extracted by using OCR technology.
  • the identity information data includes, but is not limited to, an identity card number.
  • Step S528: the verification face data of the corresponding ID card photo can be found in the ID card database of the public security system according to the extracted ID number.
  • Step S530: determine whether the verification face data and the document photo feature data match. If yes, go to step S532; if no, go to step S534.
  • step S532 it is determined that the authentication is passed.
  • step S534 it is determined that the authentication is not passed.
  • the face region image and the document region image in the target image are verified through multiple judgment processes, which ensures the authenticity of the face corresponding to the face region image and also avoids the risk of malicious users using other people's documents for identity verification, thereby increasing the security of identity verification.
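Putting steps S522–S534 together, the final stage is a three-way match: the live face against the document photo, then the document photo against the registered face fetched by the recognized ID number. A compact sketch with stand-in components; the `similarity` function, `ocr` step, `database` lookup, and 0.8 threshold are all assumptions:

```python
# Sketch of the S522-S534 verification stage; all components are stand-ins.

def verify_identity(frames, ocr, database, similarity, threshold=0.8):
    """frames: list of (doc_photo_features, face_features, doc_image)."""
    for doc_feat, face_feat, doc_image in frames:
        # S524: find a frame where the document photo matches the live face.
        if similarity(doc_feat, face_feat) < threshold:
            continue
        # S526/S528: read the ID number and fetch the registered face data.
        id_number = ocr(doc_image)
        verify_feat = database.get(id_number)
        # S530: the document photo must also match the registered face.
        if verify_feat is not None and similarity(verify_feat, doc_feat) >= threshold:
            return True   # S532: identity verification passed
    return False          # S534: identity verification not passed
```

With scalar stand-in features and `similarity = lambda a, b: 1 - abs(a - b)`, a frame whose document photo and live face disagree simply falls through to the next frame.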
  • an identity verification apparatus 600 includes: an identity verification instruction receiving module 602, configured to receive an identity verification instruction; a target image acquisition module 604, configured to acquire multiple frames of target images through the camera device according to the identity verification instruction; a region image extraction module 606, configured to extract the face region image and the document region image in each frame of target image; a face change data matching module 608, configured to parse the face region images in the multiple frames of target images to generate face change data; and an identity verification module 610, configured to determine whether the face change data matches the preset change data and, when they match, to verify the corresponding document region images according to the multiple frames of face region images and determine whether the identity verification is passed according to the verification result.
  • the target image acquisition module 604 is further configured to: acquire multiple frames of images through the camera device at preset time intervals within a preset time period according to the identity verification instruction; determine whether the resolution of each frame of image is greater than the preset resolution; if yes, take each frame of image as a target image; and if not, return to continue acquiring multiple frames of images through the camera device at preset time intervals within the preset time period until the resolution of each frame of image is greater than the preset resolution.
  • the face change data includes an expression change degree. The face change data matching module 608 is further configured to extract the expression feature data of the face region image in each frame of target image and to calculate the expression change degree according to the expression feature data of every two adjacent frames of target images. The identity verification module 610 is further configured to determine whether every expression change degree is within the preset change degree interval and, when every expression change degree is within the preset change degree interval, to determine that the face change data matches the preset change data.
  • another identity verification apparatus 700 is provided. The apparatus further includes a random prompt information display module 603, configured to acquire and display random expression prompt information. The face change data matching module 608 is further configured to detect whether expression feature data matching the random expression prompt information exists among the extracted expression feature data and, when such expression feature data exists, to perform the calculation of the expression change degree according to the expression feature data of every two adjacent frames of target images.
  • the face change data includes a face movement trajectory. The random prompt information display module 703 is further configured to acquire and display random face trajectory prompt information. The face change data matching module 708 is further configured to extract the face position data of the face region image in each frame of target image and to generate a face movement trajectory according to the face position data of each frame of target image. The identity verification module 610 is further configured to determine that the face change data matches the preset change data when the face movement trajectory matches the random face trajectory prompt information.
  • the random prompt information display module 603 is further configured to acquire and display random document trajectory prompt information. The face change data matching module 608 is further configured to extract the document position data of the document region image in each frame of target image, to generate a document movement trajectory according to the document position data in each frame of target image, and to determine whether the document movement trajectory matches the document trajectory prompt information. The identity verification module 610 is further configured to verify the corresponding document region images according to the multiple frames of face region images when the document movement trajectory matches the random document trajectory prompt information and the face change data matches the preset change data.
  • the identity verification module 610 is further configured to: extract the document photo feature data of the document region image in each frame of target image and the face feature data of the face region image in the corresponding target image; determine whether there is a target image whose document photo feature data matches the face feature data; if yes, extract the identity information data included in the target image whose document photo feature data matches the face feature data; obtain the verification face data corresponding to the identity information data in the document database; determine whether the verification face data matches the document photo feature data; and if yes, determine that the identity verification is passed.
  • the above identity verification apparatus can be implemented in the form of computer readable instructions, and the computer readable instructions can be run on a computer device as shown in FIG. 8.
  • the various modules in the above authentication device may be implemented in whole or in part by software, hardware, and combinations thereof.
  • Each of the above modules may be embedded in or independent of the memory of the computer device in hardware, or may be stored in the memory of the computer device in software form, so that the processor invokes the operations corresponding to the above modules.
  • the processor can be a central processing unit (CPU), a microprocessor, a microcontroller, or the like.
  • a computer device includes a memory and one or more processors. The memory stores computer readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the following steps: receiving an identity verification instruction; acquiring multiple frames of target images through the camera device according to the identity verification instruction; extracting the face region image and the document region image in each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches the preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.
  • the step, performed by the processor, of acquiring the multiple frames of target images through the camera device according to the identity verification instruction includes: acquiring multiple frames of images through the camera device at preset time intervals within a preset time period according to the identity verification instruction; determining whether the resolution of each frame of image is greater than the preset resolution; if yes, taking each frame of image as a target image; and if not, returning to continue acquiring multiple frames of images through the camera device at preset time intervals within the preset time period until the resolution of each frame of image is greater than the preset resolution.
  • the face change data includes an expression change degree. The step, performed by the processor, of parsing the face region images in the multiple frames of target images to generate the face change data includes: extracting the expression feature data of the face region image in each frame of target image; and calculating the expression change degree according to the expression feature data of every two adjacent frames of target images. The step, performed by the processor, of determining whether the face change data matches the preset change data includes: determining whether every expression change degree is within the preset change degree interval; and when every expression change degree is within the preset change degree interval, determining that the face change data matches the preset change data.
  • before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random expression prompt information. After the step of extracting the expression feature data of the face region image in each frame of target image, the processor further performs the following steps: detecting whether expression feature data matching the random expression prompt information exists among the extracted expression feature data; and when expression feature data matching the expression prompt information exists, calculating the expression change degree according to the expression feature data of every two adjacent frames of target images.
  • the face change data includes a face movement trajectory. Before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random face trajectory prompt information. The step, performed by the processor, of parsing the face region images in the multiple frames of target images to generate the face change data includes: extracting the face position data of the face region image in each frame of target image; and generating a face movement trajectory according to the face position data of each frame of target image. The step, performed by the processor, of determining whether the face change data matches the preset change data includes: when the face movement trajectory matches the random face trajectory prompt information, determining that the face change data matches the preset change data.
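The trajectory branch can be made concrete with a small sketch: reduce the per-frame face positions to a sequence of coarse movement directions and compare it with the prompted trajectory. The direction encoding and jitter threshold are illustrative assumptions, not the patented matching rule:

```python
# Minimal sketch of face-trajectory matching; the direction encoding and
# the jitter threshold are illustrative assumptions.

def movement_trajectory(positions, min_step=5):
    """positions: list of (x, y) face-box centres, one per frame."""
    directions = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_step and abs(dy) < min_step:
            continue  # ignore jitter below the step threshold
        if abs(dx) >= abs(dy):
            directions.append("right" if dx > 0 else "left")
        else:
            directions.append("down" if dy > 0 else "up")
    # Collapse consecutive duplicates: "right right down" -> "right down".
    collapsed = []
    for d in directions:
        if not collapsed or collapsed[-1] != d:
            collapsed.append(d)
    return collapsed

def matches_prompt(positions, prompt):
    return movement_trajectory(positions) == prompt
```

If the prompt asks the user to move right and then down, only a position sequence that actually sweeps right and then down collapses to `["right", "down"]` and passes.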
  • before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random document trajectory prompt information. Before the step of verifying the corresponding document region images according to the multiple frames of face region images, the method further includes: extracting the document position data of the document region image in each frame of target image; generating a document movement trajectory according to the document position data in each frame of target image; and determining whether the document movement trajectory matches the document trajectory prompt information. When the document movement trajectory matches the random document trajectory prompt information and the face change data matches the preset change data, the verification of the corresponding document region images according to the multiple frames of face region images is performed.
  • the step, performed by the processor, of verifying the corresponding document region images according to the multiple frames of face region images includes the following steps: extracting the document photo feature data of the document region image in each frame of target image and the face feature data of the face region image in the corresponding target image; determining whether there is a target image whose document photo feature data matches the face feature data; if yes, extracting the identity information data included in the target image whose document photo feature data matches the face feature data; obtaining the verification face data corresponding to the identity information data in the document database; determining whether the verification face data matches the document photo feature data; and if yes, determining that the identity verification is passed.
  • the computer device described above can be used as the terminal 102 in an application environment as shown in FIG.
  • the computer device includes a processor connected by a system bus, a non-volatile storage medium, an internal memory, a display screen, a camera, and a network interface.
  • the processor of the computer device is used to provide computing and control capabilities to support the operation of the entire computer device.
  • a non-volatile storage medium of a computer device stores an operating system and computer readable instructions.
  • the computer readable instructions are executable by a processor for implementing an authentication method provided by the various embodiments above.
  • the internal memory in the computer device provides a cached operating environment for operating systems and computer readable instructions in a non-volatile storage medium.
  • the display screen can be a touch screen, such as a capacitive screen or an electronic screen, and corresponding instructions can be generated by receiving click operations on controls displayed on the touch screen.
  • the camera device can be a mobile phone camera, a camera, a video camera, and the like.
  • the network interface may be an Ethernet card or a wireless network card or the like for communicating with an external terminal or server.
  • the structure of the computer device shown in FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the computer device may include more or fewer components than those shown in the figures, or some components may be combined, or have different component arrangements.
  • the computer device in the figure may not include the camera device, but the identity verification method in each of the above embodiments may be implemented by an external camera device.
  • one or more non-volatile readable storage media storing computer readable instructions are provided. When executed by one or more processors, the computer readable instructions cause the one or more processors to perform the following steps: receiving an identity verification instruction; acquiring multiple frames of target images through the camera device according to the identity verification instruction; extracting the face region image and the document region image in each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches the preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.
  • the step, performed by the processor, of acquiring the multiple frames of target images through the camera device according to the identity verification instruction includes: acquiring multiple frames of images through the camera device at preset time intervals within a preset time period according to the identity verification instruction; determining whether the resolution of each frame of image is greater than the preset resolution; if yes, taking each frame of image as a target image; and if not, returning to continue acquiring multiple frames of images through the camera device at preset time intervals within the preset time period until the resolution of each frame of image is greater than the preset resolution.
  • the face change data includes an expression change degree. The step, performed by the processor, of parsing the face region images in the multiple frames of target images to generate the face change data includes: extracting the expression feature data of the face region image in each frame of target image; and calculating the expression change degree according to the expression feature data of every two adjacent frames of target images. The step, performed by the processor, of determining whether the face change data matches the preset change data includes: determining whether every expression change degree is within the preset change degree interval; and when every expression change degree is within the preset change degree interval, determining that the face change data matches the preset change data.
  • before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random expression prompt information. After the step of extracting the expression feature data of the face region image in each frame of target image, the processor further performs the following steps: detecting whether expression feature data matching the random expression prompt information exists among the extracted expression feature data; and when expression feature data matching the expression prompt information exists, calculating the expression change degree according to the expression feature data of every two adjacent frames of target images.
  • the face change data includes a face movement trajectory. Before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random face trajectory prompt information. The step, performed by the processor, of parsing the face region images in the multiple frames of target images to generate the face change data includes: extracting the face position data of the face region image in each frame of target image; and generating a face movement trajectory according to the face position data of each frame of target image. The step, performed by the processor, of determining whether the face change data matches the preset change data includes: when the face movement trajectory matches the random face trajectory prompt information, determining that the face change data matches the preset change data.
  • before the step of acquiring the multiple frames of target images through the camera device according to the identity verification instruction, the processor further performs the following step: acquiring and displaying random document trajectory prompt information. Before the step of verifying the corresponding document region images according to the multiple frames of face region images, the method further includes: extracting the document position data of the document region image in each frame of target image; generating a document movement trajectory according to the document position data in each frame of target image; and determining whether the document movement trajectory matches the document trajectory prompt information. When the document movement trajectory matches the random document trajectory prompt information and the face change data matches the preset change data, the verification of the corresponding document region images according to the multiple frames of face region images is performed.
  • the step, performed by the processor, of verifying the corresponding document region images according to the multiple frames of face region images includes the following steps: extracting the document photo feature data of the document region image in each frame of target image and the face feature data of the face region image in the corresponding target image; determining whether there is a target image whose document photo feature data matches the face feature data; if yes, extracting the identity information data included in the target image whose document photo feature data matches the face feature data; obtaining the verification face data corresponding to the identity information data in the document database; determining whether the verification face data matches the document photo feature data; and if yes, determining that the identity verification is passed.
  • the computer readable instructions stored in the storage medium, when executed, may implement the flows of the method embodiments described above.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

An identity verification method, the method comprising: receiving an identity verification instruction; acquiring multiple frames of target images through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.

Description

Identity Verification Method, Apparatus, Storage Medium and Computer Device
This application claims priority to Chinese Patent Application No. 201710965802X, entitled "Identity verification method, apparatus, storage medium and computer device", filed with the Chinese Patent Office on October 17, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of network security technologies, and in particular to an identity verification method, apparatus, storage medium, and computer device.
Background
With the spread of the Internet, many network systems, such as real-name social platforms and Internet finance platforms, require real-name verification when a user registers. To prevent malicious users from registering with stolen identity information, network systems often require the user to upload a photo of an identity document, such as an ID card, for review. The network system can recognize personal information such as the name and ID number on the document photo through OCR (Optical Character Recognition) technology.
However, the traditional identity verification method based on ID card photos can only obtain the user's personal information by recognizing the document photo through OCR technology; it cannot prevent a malicious user from stealing another person's ID card and registering with that person's ID card photo. The security of online identity verification is therefore low.
Summary
According to various embodiments of the present application, an identity verification method, apparatus, storage medium, and computer device are provided.
An identity verification method includes: receiving an identity verification instruction; acquiring multiple frames of target images through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.
An identity verification apparatus includes: an identity verification instruction receiving module, configured to receive an identity verification instruction; a target image acquisition module, configured to acquire multiple frames of target images through a camera device according to the identity verification instruction; a region image extraction module, configured to extract a face region image and a document region image from each frame of target image; a face change data matching module, configured to parse the face region images in the multiple frames of target images to generate face change data; and an identity verification module, configured to determine whether the face change data matches preset change data and, when the face change data matches the preset change data, to verify the corresponding document region images according to the multiple frames of face region images and determine whether the identity verification is passed according to the verification result.
One or more non-volatile readable storage media storing computer readable instructions are provided. When executed by one or more processors, the computer readable instructions cause the one or more processors to perform the following steps: receiving an identity verification instruction; acquiring multiple frames of target images through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.
A computer device includes a memory and one or more processors. The memory stores computer readable instructions that, when executed by the processor, cause the one or more processors to perform the following steps: receiving an identity verification instruction; acquiring multiple frames of target images through a camera device according to the identity verification instruction; extracting a face region image and a document region image from each frame of target image; parsing the face region images in the multiple frames of target images to generate face change data; determining whether the face change data matches preset change data; and when the face change data matches the preset change data, verifying the corresponding document region images according to the multiple frames of face region images, and determining whether the identity verification is passed according to the verification result.
Details of one or more embodiments of the present application are set forth in the following drawings and description. Other features, objects, and advantages of the present application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required in the embodiments. Apparently, the accompanying drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a diagram of an application environment of an identity verification method in one embodiment;
FIG. 2 is a flowchart of an identity verification method in one embodiment;
FIG. 3 is a schematic diagram of a target image in one embodiment;
FIG. 4 is a flowchart of an identity verification method in another embodiment;
FIG. 5 is a flowchart of an identity verification method in still another embodiment;
FIG. 6 is a structural block diagram of an identity verification apparatus in one embodiment;
FIG. 7 is a structural block diagram of an identity verification apparatus in another embodiment;
FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first threshold may be called a second threshold, and similarly a second threshold may be called a first threshold. The first threshold and the second threshold are both thresholds, but they are not the same threshold.
The identity verification method provided in the embodiments of the present application can be applied in the application environment shown in FIG. 1. Referring to FIG. 1, the application environment includes a terminal 102 and a server 104. The terminal 102 may be a mobile phone, a tablet computer, a personal digital assistant, a smart device, or the like. The server 104 may be an independent physical server or a server cluster composed of multiple physical servers. The terminal 102 may be used to perform the identity verification method provided in the embodiments of the present application. The server 104 may store data related to identity verification, including but not limited to identity information data and verification face data. The terminal 102 may connect to the server 104 over a network and transmit data. For example, the server 104 may receive identity information data sent by the terminal 102 and look up the corresponding verification face data according to the identity information data, and the terminal 102 may obtain the verification face data sent by the server 104.
In one embodiment, as shown in FIG. 2, an identity verification method is provided. The method can be applied to the terminal 102 in the application environment shown in FIG. 1 and includes the following steps:
Step S202: receive an identity verification instruction.
The identity verification instruction may be triggered when an operation for performing identity verification is detected, where the operation includes but is not limited to one or more of a preset click operation, slide operation, shake operation, and the like. Further, a corresponding identity verification interface may be provided for the operation, the interface including a control for receiving the identity verification instruction; when a click operation on the control is detected, generation of the identity verification instruction is triggered.
Step S204: acquire multiple frames of target images through a camera device according to the identity verification instruction.
A camera device may be built into the terminal, or the terminal may be connected to an external camera device. The camera device may be a mobile phone camera, a still camera, a video camera, or the like. A target image is an image containing target information or a target subject. For example, if the target subjects are a document and a face, an image containing both the document and the face may be taken as a target image. The image may be captured directly by the camera device scanning its visible area, or extracted from a captured video stream. After the identity verification instruction is received, the camera device captures images, the images obtained in real time are displayed on the display screen, and images containing the target subjects are taken as target images.
In one embodiment, the terminal may also display corresponding prompt information, so that the user can determine the target subjects, the positions of the target subjects in the image, and the actions to be performed according to the prompt information. For example, as shown in FIG. 3, a schematic diagram of a target image in one embodiment is provided. For the document and the face, a dashed document-region box 302 and a dashed face-region box 304 may be displayed respectively; when it is detected that the dashed boxes in the image contain the corresponding target subjects, the image may be taken as a target image.
In one embodiment, acquiring multiple frames of target images through the camera device according to the identity verification instruction includes: acquiring multiple frames of images through the camera device at preset time intervals within a preset time period according to the identity verification instruction; and detecting whether the resolution of each frame of image is greater than a preset resolution.
Images may be acquired continuously through the camera device at preset time intervals until a preset number of images is reached, for example, one image every 0.1 or 0.2 seconds until 20 images have been acquired. A video of a preset duration may also be captured by the camera device, and images may be extracted from the captured video stream at preset time intervals. Further, during image acquisition, the images captured by the camera device may be displayed on the display screen in real time, and controls for starting and stopping image acquisition may be provided on the corresponding interface; when a click operation on a control is detected, the corresponding start or stop operation is performed. After multiple frames of images are acquired, the resolution of each frame is detected. When the resolution of every frame is greater than the preset resolution, each frame is taken as a target image; when an image with a resolution below the preset resolution is detected, the process returns to acquiring multiple frames of images through the camera device at preset time intervals within the preset time period until the resolution of every frame is greater than the preset resolution.
Step S206: extract the face region image and the document region image from each frame of target image.
A target image contains two target subjects: a document and a face. The face region image is the region of the target image containing the face, and correspondingly the document region image is the region of the target image containing the document. Documents include but are not limited to ID cards, driver's licenses, passports, and the like.
When it is detected that the target image contains the target subjects, the image regions occupied by the document and the face may be segmented out by a preset image segmentation algorithm. The image segmentation algorithm may be a threshold-based, region-based, edge-based, or theory-specific segmentation method.
In one embodiment, the face region and the document region in the target image may also be set in advance. For example, the dashed document-region box 302 in FIG. 3 may be used as the document region and the dashed face-region box 304 as the face region. When the corresponding target subjects are detected in the preset regions, the images of the preset regions are extracted directly as the face region image and the document region image.
Step S208: parse the face region images in the multiple frames of target images to generate face change data.
Face change data is data reflecting how the face region images change across the multiple frames of target images, for example, an expression change degree or a face movement trajectory. Each frame of target image contains a face region image; after the face region image is extracted from each frame, the face change data is generated from the differences between every two adjacent frames of face region images.
Step S210: determine whether the face change data matches the preset change data.
The generated face change data is compared with the preset change data; if they match, the process proceeds to step S212; otherwise, the process returns to step S204.
In one embodiment, when it is detected that the face change data does not match the preset change data, multiple frames of target images may be acquired again through the camera device until target images corresponding to face change data that matches the preset change data are obtained. Further, a number of allowed returns may be preset. For the same identity verification instruction, when the number of returns reaches the preset number, the process no longer returns. For example, the preset number of returns may be 4; when the fifth mismatch between the face change data and the preset change data is detected, image acquisition stops and a mismatch prompt is displayed.
步骤S212,根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
可根据每帧目标图像中的人脸区域图像对相应目标图像中的证件区域图像进行验证。比如说,证件区域图像中可包括证件照,当人脸区域图像与证件照匹配时,可判定验证成功。由于人脸目标拍摄对象本身可为动态的,而证件目标拍摄对象本身为静止的,因此可提取其中一张目标图像中的证件区域图像作为用于验证的图像,将一帧或多帧人脸区域图像与该提取出的证件区域图像进行匹配。当证件区域图像验证成功时,可判定通过身份验证;否则,判定不通过身份验证。
上述实施例中,根据接收的身份验证指令获取多帧目标图像,且目标图像中同时包括人脸区域图像和证件区域图像,通过解析目标图像中的人脸区域图像而生成人脸变化数据,根据人脸变化数据与预设变化数据的匹配结果判断人脸区域图像是否符合要求,当人脸变化数据与预设变化数据匹配时,即多帧目标图像中的人脸区域图像符合要求时,根据人脸区域图像对证件区域图像进行验证,再根据证件区域图像的验证结果判定是否通过身份验证。根据人脸变化数据获取目标图像中人脸的动态,确保人脸的真实性,再根据具有真实性的人脸所对应的人脸区域图像对证件区域图像进行验证来对用户身份进行验证,避免恶意用户冒用他人证件进行身份验证的风险,从而提高了在线身份验证的安全性。
在一个实施例中,人脸变化数据包括表情变化度。解析多帧目标图像中的人脸区域图像,生成人脸变化数据,包括:提取每帧目标图像中的人脸区域图像的表情特征数据;根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
表情特征数据是指能反映人脸区域图像所包含的人脸特征的数据。其中,人脸特征包括但不限于五官特征、面部形状等。可按照预设的算法提取人脸区域图像中人脸的特征点,根据特征点的相关数据生成表情特征数据。表情变化度是指两张目标图像之间人脸区域图像所展示的人脸表情的差异程度。可按照多帧目标图像的获取时间顺序,对每相邻两帧目标图像的表情特征数据进行计算,得到表情变化度。当检测到两张目标图像中的人脸区域图像所对应的表情特征数据不一致时,表情变化度取值大于0。进一步地,表情特征数据还可以表示为特征向量的形式,可以对相邻两帧目标图像对应的两个特征向量按照向量点积公式计算得到余弦相似度,由于余弦相似度越大则说明表情差异程度越小,因此可将该余弦相似度的倒数作为相应两帧目标图像中人脸区域图像的表情变化度。
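上述“以余弦相似度的倒数作为表情变化度”的计算可示意如下(Python 草图;特征向量的维度与取值仅为演示,实际由所用的特征提取算法决定;余弦相似度为零或负值时需另行处理):

```python
import math

def cosine_similarity(u, v):
    """按向量点积公式计算两个特征向量的余弦相似度。"""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def expression_change_degree(feat_a, feat_b):
    """相邻两帧人脸区域图像表情特征向量的表情变化度:
    余弦相似度越大表示表情差异越小,故取其倒数。"""
    return 1.0 / cosine_similarity(feat_a, feat_b)
```

对按取帧时间排序的多帧表情特征向量,逐对相邻两帧调用即可得到一组表情变化度。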
在一个实施例中,可预先构建人脸表情识别模型,将获取到的人脸区域图像输入该人脸表情识别模型之后,直接生成相应的表情特征数据。还可预先建立三维人脸模型,通过深度学习算法,学习真实的三维面部表情运动的情况,提取每帧人脸区域图像上的特征点之后,将特征点映射到三维人脸模型形成与每帧人脸区域图像对应的三维模型。根据结合特征点重新构建的三维人脸模型生成表情特征数据。
判断人脸变化数据是否与预设变化数据匹配,包括:判断每个表情变化度是否均处于预设变化度区间内。其中,预设变化度区间是由第一阈值和第二阈值构成的变化度区间,是指真实用户所对应的变化度的取值范围。第一阈值是指真实用户在两帧目标图像中所生成的最小变化度,第二阈值是指真实用户在两帧目标图像中所生成的最大变化度。若两帧目标图像的人脸区域图像的表情变化度小于第一阈值,说明人脸区域图像所对应的表情特征数据是一致的,存在他人利用静态图片冒充真实用户的可能性;若两帧目标图像的人脸区域图像的表情变化度大于第二阈值,则说明获取到的目标图像不是连贯的。因此,可根据第一阈值和第二阈值生成预设变化度区间,判断每个表情变化度是否均处于预设变化度区间内。当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配;否则,判定人脸变化数据与预设变化数据不匹配。
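由第一阈值与第二阈值构成的预设变化度区间的判断逻辑可示意为(Python 草图;阈值取值为假设,实际需按真实用户样本统计确定):

```python
def face_change_matches(change_degrees, lower, upper):
    """判断每个表情变化度是否均处于由第一阈值 lower 与
    第二阈值 upper 构成的预设变化度区间内。"""
    return all(lower <= d <= upper for d in change_degrees)
```

任一变化度低于第一阈值(疑似静态图片)或高于第二阈值(图像不连贯)时即判定不匹配。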
上述实施例中,通过提取每帧目标图像中的人脸区域图像的表情特征数据,并根据表情特征数据计算得到表情变化度,从而可以根据多帧目标图像中人脸区域图像的动态变化确保人脸区域图像所对应人脸的真实性,避免了恶意用户盗用他人的静态照片进行身份验证的风险,从而提高了身份验证的安全性。
在一个实施例中,根据身份验证指令通过摄像装置获取多帧目标图像之前,还包括:获取并显示随机表情提示信息;在提取每帧目标图像中的人脸区域图像的表情特征数据之后,还包括:检测提取的表情特征数据中,是否存在与随机表情提示信息相匹配的表情特征数据;当存在与表情提示信息相匹配的表情特征数据时,根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
随机表情提示信息是指终端从预设表情库中随机获取的人脸表情相应的提示信息。用户可根据显示的随机表情提示信息面对摄像装置做出相应的表情。其中,预设表情库中存储有多种人脸表情的相关数据,人脸表情包括但不限于微笑、眨眼、张嘴、皱眉、吐舌头、点头、摇头等表情中一种或多种的组合。通过随机算法确定随机表情,并获取到相应的随机表情提示信息。其中,可将该随机表情对应的文字信息作为随机表情提示信息,还可以通过线条描绘确定的随机表情作为随机表情提示信息。
确定随机表情之后,获取与随机表情提示相对应的预设表情特征数据。提取每帧目标图像中的人脸区域图像的表情特征数据之后,比较每帧人脸区域图像的表情特征数据与预设表情特征数据是否匹配。比如说,针对微笑的表情,当检测表情特征数据中的嘴部弧度达到预设的弧度阈值时,则可判定该帧人脸区域图像的表情特征数据与预设表情特征数据匹配,可执行计算表情变化度。
在一个实施例中,人脸变化数据包括人脸移动轨迹,在根据身份验证指令通过摄像装置获取多帧目标图像之前,还包括:获取并显示随机人脸轨迹提示信息;解析多帧目标图像中的人脸区域图像生成人脸变化数据,包括:提取每帧目标图像中的人脸区域图像的人脸位置数据;根据每帧目标图像的人脸位置数据生成人脸移动轨迹;判断人脸变化数据是否与预设变化数据匹配,包括:当人脸移动轨迹与随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
人脸移动轨迹是指多帧人脸区域图像在相应目标图像中所处位置构成的移动轨迹。按照预设的随机算法确定随机人脸轨迹提示信息所对应的人脸轨迹之后,可将该随机人脸轨迹的文字信息作为随机人脸轨迹提示信息,还可以通过线条描绘确定的随机人脸轨迹作为随机人脸轨迹提示信息。比如说,文字信息可为“向左移”、“向右移”、“向上移”等,还可通过箭头展示人脸移动方向及距离等。人脸位置数据是指能够表示人脸区域图像在相应目标图像中所处位置的数据。比如说,人脸区域图像中人脸特征点的位置数据。其中,人脸特征点可以为眼角、嘴角、鼻尖等其中一种或多种的组合特征点。根据每帧目标图像中的人脸区域图像的人脸位置数据,可按照预设的曲线拟合算法生成人脸移动轨迹。判断人脸移动轨迹是否与随机人脸轨迹提示信息所对应的随机人脸轨迹匹配,若是,则可判定人脸变化数据与预设变化数据匹配,可以根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证;否则,可判定人脸变化数据与预设变化数据不匹配。
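根据每帧人脸(或证件)位置数据估计移动方向并与轨迹提示信息比对的过程,可以简化示意如下(Python 草图;这里仅用首末两帧位置估计主方向而非完整的曲线拟合,采用图像坐标系即 y 轴向下,min_distance 为假设的位移阈值):

```python
def movement_direction(positions, min_distance=10):
    """根据按取帧时间排序的特征点位置列表 [(x, y), ...],
    估计移动轨迹的主方向,返回可与文字提示信息比对的方向串。"""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "未移动"
    if abs(dx) >= abs(dy):
        return "向右移" if dx > 0 else "向左移"
    # 图像坐标系 y 轴向下:dy > 0 表示向下移动
    return "向下移" if dy > 0 else "向上移"
```

将返回的方向串与随机人脸轨迹提示信息(如“向左移”)比对,即可实现上述匹配判断。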
在一个实施例中,在根据身份验证指令通过摄像装置获取多帧目标图像之前,还包括:获取并显示随机证件轨迹提示信息;在根据多帧人脸区域图像对相应证件区域图像进行验证之前,还包括:提取每帧目标图像中证件区域图像的证件位置数据;根据每帧目标图像中的证件位置数据生成证件移动轨迹;判断证件移动轨迹是否与证件轨迹提示信息匹配;当证件移动轨迹与随机证件轨迹提示信息匹配,且人脸变化数据与预设变化数据匹配时,执行根据多帧人脸区域图像对相应证件区域图像进行验证。
证件移动轨迹是指多帧证件区域图像在相应目标图像中所处位置构成的移动轨迹。按照预设的随机算法确定随机证件轨迹提示信息所对应的证件轨迹之后,可将该随机证件轨迹的文字信息作为随机证件轨迹提示信息,还可以通过线条描绘确定的随机证件轨迹作为随机证件轨迹提示信息。比如说,文字信息可为“向左移”、“向右移”、“向上移”等,还可通过箭头展示证件移动方向及距离等。证件位置数据是指能够表示证件区域图像在相应目标图像中所处位置的数据。比如说,可以提取证件中一个或多个字符的特征点在相应目标图像中的位置数据作为证件位置数据。根据每帧目标图像中的证件区域图像的证件位置数据,可按照预设的曲线拟合算法生成证件移动轨迹。判断证件移动轨迹是否与随机证件轨迹提示信息所对应的随机证件轨迹匹配,若是,且人脸变化数据与预设变化数据匹配时,执行根据多帧人脸区域图像对相应证件区域图像进行验证。
在一个实施例中,如图4所示,上述步骤S212,包括:
步骤S402,提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据。
目标图像中的证件区域图像所对应的证件包含证件照。证件照特征数据是指根据人脸识别技术所提取出的证件区域图像中证件照的特征数据,相应的,人脸特征数据是指根据人脸识别技术所提取出的人脸区域图像中人脸的特征数据。其中,人脸识别技术包括但不限于基于人脸特征点的识别算法、基于整幅人脸图像的识别算法、基于模板的识别算法、利用神经网络进行识别的算法等。
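将人脸特征数据与证件照特征数据进行匹配的一种常见做法,是比较两个特征向量之间的距离并与阈值比较,示意如下(Python 草图;欧氏距离与阈值 0.6 均为假设,实际的度量方式与阈值取决于所采用的人脸识别算法):

```python
import math

def features_match(face_feat, photo_feat, threshold=0.6):
    """比较人脸特征数据与证件照特征数据是否匹配:
    以特征向量的欧氏距离小于阈值作为匹配条件。"""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(face_feat, photo_feat)))
    return dist < threshold
```

对每帧目标图像提取出的两组特征数据调用该函数,即可判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像。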
在一个实施例中,可采用利用神经网络进行识别的人脸识别算法,训练用于识别人脸区域图像的神经网络。初始化神经网络之后,可以将摄像装置获取的人脸区域图像作为训练样本集,还可以将采集相同图像时该用户的证件区域图像作为训练验证集,通过训练样本集和训练验证集对神经网络进行训练,提取相应的证件特征数据与人脸特征数据,直到神经网络针对证件特征数据与人脸特征数据的识别率大于预设识别率阈值。将获取到的目标图像输入训练好的神经网络进行识别,用于提取该目标图像中的证件照特征数据和人脸特征数据,识别该目标图像中的人脸区域图像与证件区域图像是否匹配。
步骤S404,判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像。
可从每帧目标图像中提取出证件照特征数据与人脸特征数据,当检测到存在证件照特征数据与人脸特征数据相匹配的目标图像时,执行步骤S406;否则,进入步骤S407,不作处理。
步骤S406,提取证件照特征数据与人脸特征数据相匹配的目标图像中包含的身份信息数据。
身份信息数据是指证件区域图像中所包含身份信息的相关数据。比如说,当证件区域图像所对应的证件为身份证时,则身份信息包括但不限于姓名、身份证号、出生年月日、家庭住址等。
可通过OCR技术识别证件区域图像中所包含的身份信息数据。其中,OCR技术是通过扫描等光学输入方式将各种票据、报刊、书籍、文稿及其它印刷品的文字转化为图像信息,再利用文字识别技术将图像信息转化为可供计算机使用的文本的技术。由于识别率有限,可能识别出多个结果。可以对识别出的多个结果按照信息合法性、相似度、出现次数等,将多个识别出的身份信息数据进行排序,可以将最优结果展示给用户进行确认。其中,可根据信息合法性将不合法的结果进行排除。比如,检测识别出的身份证号、出生年月日是否符合预设的取值范围以及检测识别出的姓名中的姓氏是否为合法姓氏,若否,则认为该身份信息数据不合法,可以排除。进一步地,还可以根据每个识别结果的相似度、出现次数等进行加权计算,将超过预设阈值的识别结果作为可选信息展示给用户。若可选信息为多个,可将多个可选信息展示在下拉列表让用户进行选择。将用户选择的可选信息作为该目标图像中包含的身份信息数据。
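上述“按信息合法性排除不合法结果、再按出现次数等对候选结果排序”的思路可示意如下(Python 草图;身份证号合法性检查采用公开的 GB 11643-1999 十八位号码校验位算法,测试用号码为文档中常见的演示号码,并非真实个人信息;实际排序还可结合相似度加权):

```python
from collections import Counter

def valid_id_number(id_number):
    """校验18位身份证号的校验位(GB 11643-1999),
    可用于按信息合法性排除OCR误识别的候选结果。"""
    if len(id_number) != 18 or not id_number[:17].isdigit():
        return False
    weights = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
    check_map = "10X98765432"  # 加权和 mod 11 到校验字符的映射
    total = sum(int(d) * w for d, w in zip(id_number[:17], weights))
    return id_number[17].upper() == check_map[total % 11]

def rank_candidates(candidates):
    """排除不合法候选后按出现次数降序排列,最优结果在前。"""
    legal = [c for c in candidates if valid_id_number(c)]
    return [c for c, _ in Counter(legal).most_common()]
```

排序后的列表即可作为可选信息展示给用户确认或在下拉列表中选择。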
步骤S408,获取证件数据库中与身份信息数据相对应的验证人脸数据。
证件数据库是指预设的、存储有所有用户证件相关数据的数据库,这些数据包括但不限于身份信息数据和验证人脸数据。其中,验证人脸数据是指用于验证证件区域图像中证件照真实性的人脸数据。验证人脸数据可为预先提取出的证件中证件照的特征数据。
举例来说,证件区域图像可为身份证图像,识别出了身份证图像中的身份证号之后,可根据该身份证号在公安系统数据库中获取与该身份证号相对应的证件照。
步骤S410,判断验证人脸数据与证件照特征数据是否匹配。
当验证人脸数据与证件照特征数据匹配时,进入步骤S412;当验证人脸数据与证件照特征数据不匹配时,进入步骤S414。
步骤S412,判定通过身份验证。
步骤S414,判定不通过身份验证。
上述实施例中,根据人脸特征数据验证证件照特征数据之后,再根据身份信息数据获取证件数据库中的验证人脸数据,通过根据验证人脸数据验证证件照特征数据,避免了恶意用户冒用他人证件并在证件照处贴盖自己照片的风险,提高了在线身份验证的安全性。
在一个实施例中,提供了另一种身份验证方法,结合图5所示,该方法包括:
步骤S502,接收身份验证指令。
终端可提供用于触发身份验证指令的控件,在检测到对该控件的触发操作时,触发身份验证指令。
步骤S504,获取并显示随机表情提示信息。
终端中预设有表情数据库,按照预设的随机算法获取到随机表情提示信息之后,通过显示屏显示该随机表情提示信息。
举例来说,若获取的随机表情提示信息为微笑,则可在显示屏上显示文字提示信息“微笑”,还可以在预设区域展示笑脸的画像,使得用户能够根据随机表情提示信息做出相应的微笑表情。
步骤S506,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像。
终端中预设有摄像装置,可以每隔0.1秒通过该摄像装置获取多帧图像,并通过显示屏实时显示图像。
步骤S508,检测是否每帧图像的分辨率都大于预设分辨率。
若否,则返回步骤S506,直至每帧图像的分辨率都大于预设分辨率;若是,则进入步骤S510。
步骤S510,将每帧图像作为目标图像。
目标图像是指包含证件和人脸两个目标拍摄对象的图像,且每帧目标图像的分辨率都大于预设分辨率。
步骤S512,提取每帧目标图像中的人脸区域图像和证件区域图像。
人脸区域图像是指目标图像中包含人脸目标拍摄对象的区域图像,证件区域图像是指目标图像中包含证件目标拍摄对象的区域图像。可通过预设的图像分割方法将目标图像中的人脸区域图像和证件区域图像进行分割,并提取出来。
步骤S514,提取每帧目标图像中的人脸区域图像的表情特征数据。
举例来说,可采用基于人脸特征点的人脸识别算法,提取每帧人脸区域图像中的眼睛形状及嘴部弧度等关键特征。对关键特征进行分析计算,生成每帧人脸区域图像相应的表情特征数据。
步骤S516,检测提取的表情特征数据中,是否存在与随机表情提示信息相匹配的表情特征数据。
若是,则进入步骤S518;若否,则返回步骤S504。举例来说,针对获取的随机表情提示信息为微笑的情况,则需要检测人脸区域图像包含的表情特征数据与“微笑”相应的表情特征数据是否匹配。比如,嘴部弧度是否达到预设阈值。
步骤S518,根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
表情变化度是指两张目标图像之间人脸区域图像所展示的人脸表情的差异程度。举例来说,可以根据两张目标图像中人脸区域图像所展示的表情特征数据之间的差异,计算得到相应的表情变化度。
步骤S520,判断每个表情变化度是否均处于预设变化度区间内。
若是,进入步骤S522;若否,进入步骤S534。
步骤S522,提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据。
可按照相同的人脸识别算法提取证件区域图像的证件照特征数据与人脸区域图像的人脸特征数据,并比较两者是否匹配。
步骤S524,判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像。
若是,进入步骤S526;若否,进入步骤S534。
步骤S526,提取证件照特征数据与人脸特征数据相匹配的目标图像中包含的身份信息数据。
举例来说,证件可以是身份证,当检测到证件区域图像中所包含的身份证照的特征数据与用户的人脸特征数据相匹配时,可通过OCR技术提取该目标图像中包含的身份信息数据。其中,身份信息数据包括但不限于身份证号。
步骤S528,获取证件数据库中与身份信息数据相对应的验证人脸数据。
举例来说,针对证件为身份证的情况,可根据提取出的身份证号在公安系统的身份证数据库中查找相应的身份证照的验证人脸数据。
步骤S530,判断验证人脸数据与证件照特征数据是否匹配。
若是,进入步骤S532;若否,进入步骤S534。
步骤S532,判定通过身份验证。
步骤S534,判定不通过身份验证。
上述实施例中,通过多重判断过程,对目标图像中人脸区域图像和证件区域图像进行验证,确保了人脸区域图像所对应人脸的真实性,同时也避免了恶意用户冒用他人证件进行身份验证的风险,提高了身份验证的安全性。
在一个实施例中,如图6所示,提供了一种身份验证装置600,该装置包括:身份验证指令接收模块602,用于接收身份验证指令;目标图像获取模块604,用于根据身份验证指令通过摄像装置获取多帧目标图像;区域图像提取模块606,用于提取每帧目标图像中的人脸区域图像和证件区域图像;人脸变化数据匹配模块608,用于解析多帧目标图像中的人脸区域图像,生成人脸变化数据;身份验证模块610,用于判断人脸变化数据是否与预设变化数据匹配,当人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
在一个实施例中,目标图像获取模块604还用于根据身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;检测是否每帧图像的分辨率都大于预设分辨率;若是,则将每帧图像作为目标图像;若否,则返回继续执行在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
在一个实施例中,人脸变化数据包括表情变化度;人脸变化数据匹配模块608还用于提取每帧目标图像中的人脸区域图像的表情特征数据,根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;身份验证模块610还用于判断每个表情变化度是否均处于预设变化度区间内,当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,如图7所示,提供了另一种身份验证装置700,该装置还包括:随机提示信息显示模块703,用于获取并显示随机表情提示信息;人脸变化数据匹配模块708还用于检测提取的表情特征数据中,是否存在与随机表情提示信息相匹配的表情特征数据;当存在与表情提示信息相匹配的表情特征数据时,执行根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
在一个实施例中,人脸变化数据包括人脸移动轨迹;随机提示信息显示模块703还用于获取并显示随机人脸轨迹提示信息;人脸变化数据匹配模块708还用于提取每帧目标图像中的人脸区域图像的人脸位置数据;根据每帧目标图像的人脸位置数据生成人脸移动轨迹;身份验证模块710还用于当人脸移动轨迹与随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,随机提示信息显示模块703还用于获取并显示随机证件轨迹提示信息;人脸变化数据匹配模块708还用于提取每帧目标图像中证件区域图像的证件位置数据;根据每帧目标图像中的证件位置数据生成证件移动轨迹;判断证件移动轨迹是否与证件轨迹提示信息匹配;身份验证模块710还用于当证件移动轨迹与随机证件轨迹提示信息匹配,且人脸变化数据与预设变化数据匹配时,执行根据多帧人脸区域图像对相应证件区域图像进行验证。
在一个实施例中,身份验证模块610还用于提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像;若是,则提取证件照特征数据与人脸特征数据相匹配的目标图像中包含的身份信息数据;获取证件数据库中与身份信息数据相对应的验证人脸数据;判断验证人脸数据与证件照特征数据是否匹配;若是,则判定通过身份验证。
上述身份验证装置可以实现为一种计算机可读指令的形式,计算机可读指令可以在如图8所示的计算机设备上运行。
上述身份验证装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备的存储器中,也可以以软件形式存储于计算机设备的存储器中,以便于处理器调用执行以上各个模块对应的操作。该处理器可以为中央处理单元(CPU)、微处理器、单片机等。
在一个实施例中,提供了一种计算机设备,包括存储器和一个或多个处理器,存储器中存储有计算机可读指令,该计算机可读指令被处理器执行时,使得一个或多个处理器执行以下步骤:接收身份验证指令;根据身份验证指令通过摄像装置获取多帧目标图像;提取每帧目标图像中的人脸区域图像和证件区域图像;解析多帧目标图像中的人脸区域图像,生成人脸变化数据;判断人脸变化数据是否与预设变化数据匹配;当人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像,包括:根据身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;检测是否每帧图像的分辨率都大于预设分辨率;若是,则将每帧图像作为目标图像;若否,则返回继续执行在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
在一个实施例中,人脸变化数据包括表情变化度,处理器所执行的解析多帧目标图像中的人脸区域图像,生成人脸变化数据的步骤,具体包括以下步骤:提取每帧目标图像中的人脸区域图像的表情特征数据;根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;处理器所执行的判断人脸变化数据是否与预设变化数据匹配的步骤,具体包括以下步骤:判断每个表情变化度是否均处于预设变化度区间内;当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机表情提示信息;处理器所执行的提取每帧目标图像中的人脸区域图像的表情特征数据的步骤之后,还包括以下步骤:检测提取的表情特征数据中,是否存在与随机表情提示信息相匹配的表情特征数据;当存在与表情提示信息相匹配的表情特征数据时,执行根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
在一个实施例中,人脸变化数据包括人脸移动轨迹,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机人脸轨迹提示信息;处理器所执行的解析多帧目标图像中的人脸区域图像生成人脸变化数据的步骤,具体包括以下步骤:提取每帧目标图像中的人脸区域图像的人脸位置数据;根据每帧目标图像的人脸位置数据生成人脸移动轨迹;处理器所执行的判断人脸变化数据是否与预设变化数据匹配的步骤,具体包括:当人脸移动轨迹与随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机证件轨迹提示信息;处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤之前,还包括以下步骤:提取每帧目标图像中证件区域图像的证件位置数据;根据每帧目标图像中的证件位置数据生成证件移动轨迹;判断证件移动轨迹是否与证件轨迹提示信息匹配;当证件移动轨迹与随机证件轨迹提示信息匹配,且人脸变化数据与预设变化数据匹配时,执行根据多帧人脸区域图像对相应证件区域图像进行验证。
在一个实施例中,处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤,具体包括以下步骤:提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像;若是,则提取证件照特征数据与人脸特征数据相匹配的目标图像中包含的身份信息数据;获取证件数据库中与身份信息数据相对应的验证人脸数据;判断验证人脸数据与证件照特征数据是否匹配;若是,则判定通过身份验证。
在一个实施例中,上述的计算机设备可用作如图1所示的应用环境中的终端102。如图8所示,该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、内存储器、显示屏、摄像装置和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力,支撑整个计算机设备的运行。计算机设备的非易失性存储介质存储有操作系统和计算机可读指令。该计算机可读指令可被处理器所执行,以用于实现以上各个实施例所提供的一种身份验证方法。计算机设备中的内存储器为非易失性存储介质中的操作系统和计算机可读指令提供高速缓存的运行环境。显示屏可以是触摸屏,比如为电容屏或电子屏,可通过接收作用于该触摸屏上显示的控件的点击操作,生成相应的指令。摄像装置可以为手机摄像头、照相机及摄像机等。网络接口可以是以太网卡或无线网卡等,用于与外部的终端或服务器进行通信。
本领域技术人员可以理解,图8中示出的计算机设备的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。比如,该图中的计算机设备还可不包括摄像装置,而是通过外接摄像装置实现上述各实施例中的身份验证方法。
在一个实施例中,提供了一个或多个存储有计算机可读指令的非易失性可读存储介质,该计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行以下步骤:接收身份验证指令;根据身份验证指令通过摄像装置获取多帧目标图像;提取每帧目标图像中的人脸区域图像和证件区域图像;解析多帧目标图像中的人脸区域图像,生成人脸变化数据;判断人脸变化数据是否与预设变化数据匹配;当人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像,包括:根据身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;检测是否每帧图像的分辨率都大于预设分辨率;若是,则将每帧图像作为目标图像;若否,则返回继续执行在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
在一个实施例中,人脸变化数据包括表情变化度,处理器所执行的解析多帧目标图像中的人脸区域图像,生成人脸变化数据的步骤,具体包括以下步骤:提取每帧目标图像中的人脸区域图像的表情特征数据;根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;处理器所执行的判断人脸变化数据是否与预设变化数据匹配的步骤,具体包括以下步骤:判断每个表情变化度是否均处于预设变化度区间内;当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机表情提示信息;处理器所执行的提取每帧目标图像中的人脸区域图像的表情特征数据的步骤之后,还包括以下步骤:检测提取的表情特征数据中,是否存在与随机表情提示信息相匹配的表情特征数据;当存在与表情提示信息相匹配的表情特征数据时,执行根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
在一个实施例中,人脸变化数据包括人脸移动轨迹,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机人脸轨迹提示信息;处理器所执行的解析多帧目标图像中的人脸区域图像生成人脸变化数据的步骤,具体包括以下步骤:提取每帧目标图像中的人脸区域图像的人脸位置数据;根据每帧目标图像的人脸位置数据生成人脸移动轨迹;处理器所执行的判断人脸变化数据是否与预设变化数据匹配的步骤,具体包括:当人脸移动轨迹与随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
在一个实施例中,处理器所执行的根据身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括以下步骤:获取并显示随机证件轨迹提示信息;处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤之前,还包括以下步骤:提取每帧目标图像中证件区域图像的证件位置数据;根据每帧目标图像中的证件位置数据生成证件移动轨迹;判断证件移动轨迹是否与证件轨迹提示信息匹配;当证件移动轨迹与随机证件轨迹提示信息匹配,且人脸变化数据与预设变化数据匹配时,执行根据多帧人脸区域图像对相应证件区域图像进行验证。
在一个实施例中,处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤,具体包括以下步骤:提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;判断是否存在证件照特征数据与人脸特征数据相匹配的目标图像;若是,则提取证件照特征数据与人脸特征数据相匹配的目标图像中包含的身份信息数据;获取证件数据库中与身份信息数据相对应的验证人脸数据;判断验证人脸数据与证件照特征数据是否匹配;若是,则判定通过身份验证。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种身份验证方法,包括:
    接收身份验证指令;
    根据所述身份验证指令通过摄像装置获取多帧目标图像;
    提取每帧目标图像中的人脸区域图像和证件区域图像;
    解析多帧目标图像中的人脸区域图像,生成人脸变化数据;
    判断所述人脸变化数据是否与预设变化数据匹配;及
    当所述人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述身份验证指令通过摄像装置获取多帧目标图像,包括:
    根据所述身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;
    检测是否每帧图像的分辨率都大于预设分辨率;
    若是,则将每帧图像作为目标图像;及
    若否,则返回继续执行所述在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
  3. 根据权利要求1所述的方法,其特征在于,所述人脸变化数据包括表情变化度;
    所述解析多帧目标图像中的人脸区域图像,生成人脸变化数据,包括:
    提取每帧目标图像中的人脸区域图像的表情特征数据;
    根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;
    所述判断所述人脸变化数据是否与预设变化数据匹配,包括:
    判断每个表情变化度是否均处于预设变化度区间内;及
    当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
  4. 根据权利要求3所述的方法,其特征在于,在所述根据所述身份验证指令通过摄像装置获取多帧目标图像之前,还包括:
    获取并显示随机表情提示信息;
    在所述提取每帧目标图像中的人脸区域图像的表情特征数据之后,还包括:
    检测提取的表情特征数据中,是否存在与所述随机表情提示信息相匹配的表情特征数据;及
    当存在与所述表情提示信息相匹配的表情特征数据时,执行所述根据每两帧相邻目标图像的表情特征数据,计算得表情变化度。
  5. 根据权利要求1所述的方法,其特征在于,所述人脸变化数据包括人脸移动轨迹;
    在所述根据所述身份验证指令通过摄像装置获取多帧目标图像之前,还包括:
    获取并显示随机人脸轨迹提示信息;
    所述解析多帧目标图像中的人脸区域图像生成人脸变化数据,包括:
    提取每帧目标图像中的人脸区域图像的人脸位置数据;
    根据每帧目标图像的人脸位置数据生成人脸移动轨迹;及
    所述判断所述人脸变化数据是否与预设变化数据匹配,包括:
    当所述人脸移动轨迹与所述随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
  6. 根据权利要求1所述的方法,其特征在于,在所述根据所述身份验证指令通过摄像装置获取多帧目标图像之前,还包括:
    获取并显示随机证件轨迹提示信息;
    在所述根据多帧人脸区域图像对相应证件区域图像进行验证之前,还包括:
    提取每帧目标图像中证件区域图像的证件位置数据;
    根据每帧目标图像中的证件位置数据生成证件移动轨迹;
    判断所述证件移动轨迹是否与所述证件轨迹提示信息匹配;及
    当所述证件移动轨迹与所述随机证件轨迹提示信息匹配,且所述人脸变化数据与预设变化数据匹配时,执行所述根据多帧人脸区域图像对相应证件区域图像进行验证。
  7. 根据权利要求1所述的方法,其特征在于,所述根据多帧人脸区域图像对相应证件区域图像进行验证,包括:
    提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;
    判断是否存在所述证件照特征数据与所述人脸特征数据相匹配的目标图像;
    若是,则提取所述证件照特征数据与所述人脸特征数据相匹配的目标图像中包含的身份信息数据;
    获取证件数据库中与所述身份信息数据相对应的验证人脸数据;
    判断所述验证人脸数据与所述证件照特征数据是否匹配;及
    若是,则判定通过身份验证。
  8. 一种身份验证装置,其特征在于,所述装置包括:
    身份验证指令接收模块,用于接收身份验证指令;
    目标图像获取模块,用于根据所述身份验证指令通过摄像装置获取多帧目标图像;
    区域图像提取模块,用于提取每帧目标图像中的人脸区域图像和证件区域图像;
    人脸变化数据匹配模块,用于解析多帧目标图像中的人脸区域图像,生成人脸变化数据;及
    身份验证模块,用于判断所述人脸变化数据是否与预设变化数据匹配;当所述人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
  9. 一个或多个存储有计算机可读指令的非易失性可读存储介质,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    接收身份验证指令;
    根据所述身份验证指令通过摄像装置获取多帧目标图像;
    提取每帧目标图像中的人脸区域图像和证件区域图像;
    解析多帧目标图像中的人脸区域图像,生成人脸变化数据;
    判断所述人脸变化数据是否与预设变化数据匹配;及
    当所述人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
  10. 根据权利要求9所述的存储介质,其特征在于,所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤,包括:
    根据所述身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;
    检测是否每帧图像的分辨率都大于预设分辨率;
    若是,则将每帧图像作为目标图像;及
    若否,则返回继续执行所述在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
  11. 根据权利要求9所述的存储介质,其特征在于,所述人脸变化数据包括表情变化度;
    所述处理器所执行的解析多帧目标图像中的人脸区域图像,生成人脸变化数据的步骤,包括:
    提取每帧目标图像中的人脸区域图像的表情特征数据;
    根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;
    所述判断所述人脸变化数据是否与预设变化数据匹配,包括:
    判断每个表情变化度是否均处于预设变化度区间内;及
    当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
  12. 根据权利要求9所述的存储介质,其特征在于,所述人脸变化数据包括人脸移动轨迹;
    在所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括:
    获取并显示随机人脸轨迹提示信息;
    所述处理器所执行的解析多帧目标图像中的人脸区域图像生成人脸变化数据的步骤,包括:
    提取每帧目标图像中的人脸区域图像的人脸位置数据;
    根据每帧目标图像的人脸位置数据生成人脸移动轨迹;及
    所述处理器所执行的判断所述人脸变化数据是否与预设变化数据匹配的步骤,包括:
    当所述人脸移动轨迹与所述随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
  13. 根据权利要求9所述的存储介质,其特征在于,在所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括:
    获取并显示随机证件轨迹提示信息;
    在所述处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤之前,还包括:
    提取每帧目标图像中证件区域图像的证件位置数据;
    根据每帧目标图像中的证件位置数据生成证件移动轨迹;
    判断所述证件移动轨迹是否与所述证件轨迹提示信息匹配;及
    当所述证件移动轨迹与所述随机证件轨迹提示信息匹配,且所述人脸变化数据与预设变化数据匹配时,执行所述根据多帧人脸区域图像对相应证件区域图像进行验证。
  14. 根据权利要求9所述的存储介质,其特征在于,所述处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤,包括:
    提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;
    判断是否存在所述证件照特征数据与所述人脸特征数据相匹配的目标图像;
    若是,则提取所述证件照特征数据与所述人脸特征数据相匹配的目标图像中包含的身份信息数据;
    获取证件数据库中与所述身份信息数据相对应的验证人脸数据;
    判断所述验证人脸数据与所述证件照特征数据是否匹配;及
    若是,则判定通过身份验证。
  15. 一种计算机设备,包括存储器和一个或多个处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    接收身份验证指令;
    根据所述身份验证指令通过摄像装置获取多帧目标图像;
    提取每帧目标图像中的人脸区域图像和证件区域图像;
    解析多帧目标图像中的人脸区域图像,生成人脸变化数据;
    判断所述人脸变化数据是否与预设变化数据匹配;及
    当所述人脸变化数据与预设变化数据匹配时,则根据多帧人脸区域图像对相应证件区域图像进行验证,根据验证结果判定是否通过身份验证。
  16. 根据权利要求15所述的计算机设备,其特征在于,所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤,包括:
    根据所述身份验证指令,在预设时长内按照预设时间间隔通过摄像装置获取多帧图像;
    检测是否每帧图像的分辨率都大于预设分辨率;
    若是,则将每帧图像作为目标图像;及
    若否,则返回继续执行所述在预设时长内按照预设时间间隔通过摄像装置获取多帧图像,直至每帧图像的分辨率都大于预设分辨率。
  17. 根据权利要求15所述的计算机设备,其特征在于,所述人脸变化数据包括表情变化度;
    所述处理器所执行的解析多帧目标图像中的人脸区域图像,生成人脸变化数据的步骤,包括:
    提取每帧目标图像中的人脸区域图像的表情特征数据;
    根据每两帧相邻目标图像的表情特征数据,计算得表情变化度;
    所述判断所述人脸变化数据是否与预设变化数据匹配,包括:
    判断每个表情变化度是否均处于预设变化度区间内;及
    当每个表情变化度均处于预设变化度区间内时,则判定人脸变化数据与预设变化数据匹配。
  18. 根据权利要求15所述的计算机设备,其特征在于,所述人脸变化数据包括人脸移动轨迹;
    在所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括:
    获取并显示随机人脸轨迹提示信息;
    所述处理器所执行的解析多帧目标图像中的人脸区域图像生成人脸变化数据的步骤,包括:
    提取每帧目标图像中的人脸区域图像的人脸位置数据;
    根据每帧目标图像的人脸位置数据生成人脸移动轨迹;及
    所述处理器所执行的判断所述人脸变化数据是否与预设变化数据匹配的步骤,包括:
    当所述人脸移动轨迹与所述随机人脸轨迹提示信息匹配时,则判定人脸变化数据与预设变化数据匹配。
  19. 根据权利要求15所述的计算机设备,其特征在于,在所述处理器所执行的根据所述身份验证指令通过摄像装置获取多帧目标图像的步骤之前,还包括:
    获取并显示随机证件轨迹提示信息;
    在所述处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤之前,还包括:
    提取每帧目标图像中证件区域图像的证件位置数据;
    根据每帧目标图像中的证件位置数据生成证件移动轨迹;
    判断所述证件移动轨迹是否与所述证件轨迹提示信息匹配;及
    当所述证件移动轨迹与所述随机证件轨迹提示信息匹配,且所述人脸变化数据与预设变化数据匹配时,执行所述根据多帧人脸区域图像对相应证件区域图像进行验证。
  20. 根据权利要求15所述的计算机设备,其特征在于,所述处理器所执行的根据多帧人脸区域图像对相应证件区域图像进行验证的步骤,包括:
    提取每帧目标图像中证件区域图像的证件照特征数据和相应目标图像中人脸区域图像的人脸特征数据;
    判断是否存在所述证件照特征数据与所述人脸特征数据相匹配的目标图像;
    若是,则提取所述证件照特征数据与所述人脸特征数据相匹配的目标图像中包含的身份信息数据;
    获取证件数据库中与所述身份信息数据相对应的验证人脸数据;
    判断所述验证人脸数据与所述证件照特征数据是否匹配;及
    若是,则判定通过身份验证。
PCT/CN2017/112485 2017-10-17 2017-11-23 身份验证方法、装置、存储介质和计算机设备 WO2019075840A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710965802.X 2017-10-17
CN201710965802.XA CN107844748B (zh) 2017-10-17 2017-10-17 身份验证方法、装置、存储介质和计算机设备

Publications (1)

Publication Number Publication Date
WO2019075840A1 true WO2019075840A1 (zh) 2019-04-25

Family

ID=61661436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112485 WO2019075840A1 (zh) 2017-10-17 2017-11-23 身份验证方法、装置、存储介质和计算机设备

Country Status (2)

Country Link
CN (1) CN107844748B (zh)
WO (1) WO2019075840A1 (zh)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359502A (zh) * 2018-08-13 2019-02-19 北京市商汤科技开发有限公司 防伪检测方法和装置、电子设备、存储介质
CN109255299A (zh) * 2018-08-13 2019-01-22 北京市商汤科技开发有限公司 身份认证方法和装置、电子设备和存储介质
JP7165746B2 (ja) * 2018-08-13 2022-11-04 ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド Id認証方法および装置、電子機器並びに記憶媒体
CN110197108A (zh) * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 身份验证方法、装置、计算机设备及存储介质
CN108830512B (zh) * 2018-08-20 2020-01-21 华润守正招标有限公司 一种电子招标投标平台的用户注册审核方法、装置及设备
CN109299692A (zh) * 2018-09-26 2019-02-01 深圳壹账通智能科技有限公司 一种身份识别方法、计算机可读存储介质及终端设备
CN109410138B (zh) * 2018-10-16 2021-10-01 北京旷视科技有限公司 修饰双下巴的方法、装置和系统
CN109829362A (zh) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 安检辅助分析方法、装置、计算机设备和存储介质
CN109871845B (zh) * 2019-01-10 2023-10-31 平安科技(深圳)有限公司 证件图像提取方法及终端设备
CN111507143B (zh) * 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 表情图像效果生成方法、装置和电子设备
CN110223710A (zh) * 2019-04-18 2019-09-10 深圳壹账通智能科技有限公司 多重联合认证方法、装置、计算机装置及存储介质
CN112507889A (zh) * 2019-04-29 2021-03-16 众安信息技术服务有限公司 一种校验证件与持证人的方法及系统
CN111866589A (zh) * 2019-05-20 2020-10-30 北京嘀嘀无限科技发展有限公司 一种视频数据验证方法、装置、电子设备及存储介质
CN110176024B (zh) * 2019-05-21 2023-06-02 腾讯科技(深圳)有限公司 在视频中对目标进行检测的方法、装置、设备和存储介质
CN110414454A (zh) * 2019-07-31 2019-11-05 南充折衍智能光电科技有限公司 一种基于机器视觉的人证合一识别系统
CN110730169A (zh) * 2019-09-29 2020-01-24 北京东软望海科技有限公司 一种保障账户安全的处理方法、装置及系统
CN111079712B (zh) * 2019-12-31 2023-04-21 中国银行股份有限公司 基于人脸识别的权限管理方法及装置
KR20210087792A (ko) * 2020-01-03 2021-07-13 엘지전자 주식회사 사용자 인증
TWM614573U (zh) * 2020-10-20 2021-07-21 普匯金融科技股份有限公司 法人實名認證裝置
US11068908B1 (en) * 2020-12-22 2021-07-20 Lucas GC Limited Skill-based credential verification by a credential vault system (CVS)
CN112907206B (zh) * 2021-02-07 2024-06-04 中国工商银行股份有限公司 一种基于视频对象识别的业务审核方法、装置及设备
CN112862458A (zh) * 2021-03-02 2021-05-28 岭东核电有限公司 核电试验工序监管方法、装置、计算机设备和存储介质
TWI810548B (zh) * 2021-04-15 2023-08-01 臺灣網路認證股份有限公司 整合影像處理及深度學習之活體辨識系統及其方法
CN116959064B (zh) * 2023-06-25 2024-04-26 上海腾桥信息技术有限公司 一种证件验证方法、装置、计算机设备和存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005779A (zh) * 2015-08-25 2015-10-28 湖北文理学院 基于交互式动作的人脸验证防伪识别方法及系统
US20160162729A1 (en) * 2013-09-18 2016-06-09 IDChecker, Inc. Identity verification using biometric data
CN106488130A (zh) * 2016-11-15 2017-03-08 上海斐讯数据通信技术有限公司 一种拍摄模式切换方法及其切换系统
CN106548121A (zh) * 2015-09-23 2017-03-29 阿里巴巴集团控股有限公司 一种活体识别的测试方法及装置
CN106599772A (zh) * 2016-10-31 2017-04-26 北京旷视科技有限公司 活体验证方法和装置及身份认证方法和装置
CN106709402A (zh) * 2015-11-16 2017-05-24 优化科技(苏州)有限公司 基于音型像特征的真人活体身份验证方法
CN106778525A (zh) * 2016-11-25 2017-05-31 北京旷视科技有限公司 身份认证方法和装置
CN107181852A (zh) * 2017-07-19 2017-09-19 维沃移动通信有限公司 一种信息发送方法、信息显示方法及移动终端

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8705813B2 (en) * 2010-06-21 2014-04-22 Canon Kabushiki Kaisha Identification device, identification method, and storage medium
CN103678984A (zh) * 2013-12-20 2014-03-26 湖北微模式科技发展有限公司 一种利用摄像头实现用户身份验证的方法
CN104834905A (zh) * 2015-04-29 2015-08-12 河南城建学院 一种人脸图像识别仿真系统及方法


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298246A (zh) * 2019-05-22 2019-10-01 深圳壹账通智能科技有限公司 开锁验证方法、装置、计算机设备及存储介质
CN112395906A (zh) * 2019-08-12 2021-02-23 北京旷视科技有限公司 人脸活体检测方法和装置、人脸活体检测设备及介质
CN111898536A (zh) * 2019-08-27 2020-11-06 创新先进技术有限公司 证件识别方法及装置
CN112764824B (zh) * 2019-10-21 2023-10-10 腾讯科技(深圳)有限公司 触发应用程序中身份验证的方法、装置、设备及存储介质
CN112764824A (zh) * 2019-10-21 2021-05-07 腾讯科技(深圳)有限公司 触发应用程序中身份验证的方法、装置、设备及存储介质
CN112784661A (zh) * 2019-11-01 2021-05-11 宏碁股份有限公司 真实人脸的识别方法与真实人脸的识别装置
CN112784661B (zh) * 2019-11-01 2024-01-19 宏碁股份有限公司 真实人脸的识别方法与真实人脸的识别装置
WO2022102830A1 (ko) * 2020-11-16 2022-05-19 고큐바테크놀로지 주식회사 사용자를 인증하기 위한 기법
CN113312972A (zh) * 2021-04-26 2021-08-27 国家能源集团新能源有限责任公司 一种自助加氢方法及其装置
CN113283359A (zh) * 2021-06-02 2021-08-20 万达信息股份有限公司 一种手持证件照的认证方法、系统和电子设备
CN115174138A (zh) * 2022-05-25 2022-10-11 北京旷视科技有限公司 摄像头攻击检测方法、系统、设备、存储介质及程序产品
CN115174138B (zh) * 2022-05-25 2024-06-07 北京旷视科技有限公司 摄像头攻击检测方法、系统、设备、存储介质及程序产品
CN115083002A (zh) * 2022-08-16 2022-09-20 深圳市海清视讯科技有限公司 影像处理方法、装置和设备

Also Published As

Publication number Publication date
CN107844748A (zh) 2018-03-27
CN107844748B (zh) 2019-02-05

Similar Documents

Publication Publication Date Title
WO2019075840A1 (zh) 身份验证方法、装置、存储介质和计算机设备
US10839061B2 (en) Method and apparatus for identity authentication
US11973877B2 (en) Systems and methods for secure tokenized credentials
CN108804884B (zh) 身份认证的方法、装置及计算机存储介质
US10691929B2 (en) Method and apparatus for verifying certificates and identities
US20210064900A1 (en) Id verification with a mobile device
US10339402B2 (en) Method and apparatus for liveness detection
Fathy et al. Face-based active authentication on mobile devices
JP6403233B2 (ja) ユーザー認証方法、これを実行する装置及びこれを保存した記録媒体
CN107077589B (zh) 基于图像的生物计量中的面部假冒检测
US20140380446A1 (en) Method and apparatus for protecting browser private information
BR112015004867B1 (pt) Sistema de prevenção de mistificação de identidade
US11367310B2 (en) Method and apparatus for identity verification, electronic device, computer program, and storage medium
CN115457664A (zh) 一种活体人脸检测方法及装置
JP6505937B1 (ja) 照合システム、照合方法及び照合プログラム
Findling et al. Towards face unlock: on the difficulty of reliably detecting faces on mobile phones
US11816923B2 (en) Face image candidate determination apparatus for authentication, face image candidate determination method for authentication, program, and recording medium
JPWO2017170384A1 (ja) 生体データ処理装置、生体データ処理システム、生体データ処理方法、生体データ処理プログラム、生体データ処理プログラムを記憶する記憶媒体
JP6028453B2 (ja) 画像処理装置、画像処理方法および画像処理プログラム
Singh et al. A novel face liveness detection algorithm with multiple liveness indicators
CN106250755B (zh) 用于生成验证码的方法及装置
Choi et al. A multimodal user authentication system using faces and gestures
McQuillan Is lip-reading the secret to security?
Mishra et al. Integrating State-of-the-Art Face Recognition and Anti-Spoofing Techniques into Enterprise Information Systems
EP4293612A1 (en) Determination method, determination program, and information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17929387

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25/09/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17929387

Country of ref document: EP

Kind code of ref document: A1