CN111126229A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN111126229A
CN111126229A
Authority
CN
China
Prior art keywords
user
data
living body
image
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911303018.8A
Other languages
Chinese (zh)
Inventor
吴敏悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
China Construction Bank Corp
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp and CCB Finetech Co Ltd
Priority to CN201911303018.8A
Publication of CN111126229A
Legal status: Pending

Classifications

    • G06V 40/166: Human faces; Detection; Localisation; Normalisation using acquisition arrangements
    • G06F 18/241: Pattern recognition; Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06Q 20/40145: Payment architectures; Transaction verification; Biometric identity checks
    • G06Q 40/04: Finance; Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G06V 10/757: Image or video pattern matching; Matching configurations of points or features
    • G06V 40/168: Human faces; Feature extraction; Face representation
    • G06V 40/172: Human faces; Classification, e.g. identification
    • G06V 40/45: Spoof detection; Detection of the body part being alive

Abstract

The invention discloses a data processing method and device, and relates to the field of computer technology. In one embodiment, the method comprises: capturing an infrared image of a user's face with a terminal device, and performing living body discrimination (liveness detection) on the user according to the infrared image; when the living body discrimination passes, extracting a plurality of facial feature data from a face image of the user; comparing the facial feature data with facial feature data of a pre-stored face image to determine the user's identity data; and obtaining portrait data corresponding to the user's identity data, and using the portrait data to determine information objects that match the user for provision to the user. On the premise of ensuring information security, the method and device simplify user operations in scenarios where information objects are provided.

Description

Data processing method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method and apparatus.
Background
In existing financial product transactions, a user must verify identity with a password, an SMS code, or a security device such as a USB key (U-shield). These verification means carry latent risks: a stolen password, an intercepted verification message, or a lost security device all create safety hazards. They are also cumbersome to operate, resulting in a poor user experience.
Disclosure of Invention
In view of this, embodiments of the present invention provide a data processing method and apparatus that perform living body discrimination and face recognition on a user in order to provide the information objects the user needs, thereby simplifying user operations in information-object-provision scenarios while ensuring information security.
To achieve the above object, according to one aspect of the present invention, a data processing method is provided.
The data processing method of the embodiment of the invention comprises the following steps: capturing an infrared image of a user's face with a terminal device, and performing living body discrimination on the user according to the infrared image; when the living body discrimination passes, extracting a plurality of facial feature data from a face image of the user; comparing the facial feature data with facial feature data of a pre-stored face image to determine the user's identity data; and obtaining portrait data corresponding to the user's identity data, and using the portrait data to determine information objects that match the user for provision to the user.
Optionally, the terminal device is provided with an infrared camera and a visible light camera; the infrared image of the face of the user is collected by the infrared camera, and the face image of the user is collected by the visible light camera.
Optionally, performing living body discrimination on the user according to the infrared image comprises: determining optical flow data in the infrared image; and inputting the optical flow data into a preset first living body detection model to perform the living body discrimination.
Optionally, performing living body discrimination on the user according to the infrared image comprises: performing a two-level wavelet decomposition on the infrared image to obtain at least one second-level high-frequency subband image of the infrared image; extracting target feature data from any second-level high-frequency subband image; and inputting the extracted target feature data into a preset second living body detection model to perform the living body discrimination.
Optionally, the method further comprises: before extracting a plurality of facial feature data from the face image of the user, determining position information of a plurality of marking points from the face image of the user, and performing inclination correction on the face image of the user by using the position information.
Optionally, extracting a plurality of facial feature data from the face image of the user includes: and extracting a plurality of facial feature data from the face image of the user by using a FaceNet network.
Optionally, the identity data comprises: name data, gender data and identification number data; the first living body detection model and the second living body detection model are realized by adopting a decision tree algorithm or a support vector machine algorithm; the target feature comprises at least one of: histogram of oriented gradient HOG, gray level co-occurrence matrix GLCM, local binary pattern LBP; the marking points are at least two of the following points: left eye center, right eye center, nose center, left mouth corner, right mouth corner.
To achieve the above object, according to another aspect of the present invention, there is provided a data processing apparatus.
The data processing apparatus of an embodiment of the present invention may include: a living body discrimination unit, configured to capture an infrared image of a user's face with a terminal device and perform living body discrimination on the user according to the infrared image; a face recognition unit, configured to extract a plurality of facial feature data from a face image of the user when the living body discrimination passes, and to compare the facial feature data with facial feature data of a pre-stored face image to determine the user's identity data; and an information object providing unit, configured to obtain portrait data corresponding to the user's identity data and use the portrait data to determine an information object matching the user for provision to the user.
To achieve the above object, according to still another aspect of the present invention, there is provided an electronic apparatus.
An electronic device of the present invention includes: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors realize the data processing method provided by the invention.
To achieve the above object, according to still another aspect of the present invention, there is provided a computer-readable storage medium.
A computer-readable storage medium of the present invention has stored thereon a computer program which, when executed by a processor, implements the data processing method provided by the present invention.
According to the technical solution of the invention, one embodiment has the following advantages or beneficial effects: an infrared camera installed on the terminal device captures an infrared image of the user's face for living body discrimination; during the discrimination, the optical flow data of the infrared image, or the target feature data of a second-level high-frequency subband image obtained by two-level wavelet decomposition, is input into a preset living body detection model to obtain the discrimination result. When the living body discrimination passes, a visible light camera installed on the terminal device captures a face image of the user; after tilt correction, facial feature data is extracted from the face image and compared with the corresponding feature data of a pre-stored face image to determine the user's identity, the corresponding user portrait data is obtained, and a matching information object is provided to the user based on that portrait data. With this arrangement, information objects can be provided simply and conveniently while information security is ensured.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a data processing method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an implementation of a data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a marker of a face image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating specific steps performed by the data processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the components of a data processing apparatus according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 7 is a schematic structural diagram of an electronic device for implementing the data processing method in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments of the present invention and the technical features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of main steps of a data processing method according to an embodiment of the present invention.
As shown in fig. 1, the data processing method according to the embodiment of the present invention may specifically be executed according to the following steps:
step S101: and acquiring an infrared image of the face of the user by using the terminal equipment, and executing living body judgment on the user according to the infrared image.
In the embodiment of the invention, the server can acquire, through the terminal device, an infrared image and a visible light face image of the user's face in order to perform living body discrimination (that is, determining whether the subject is a real person rather than a photo, video, mask, or the like) and face recognition, respectively. It can be understood that the living body discrimination and face recognition themselves may be executed on the server. Fig. 2 is a schematic diagram of an execution architecture of the data processing method in an embodiment of the present invention. As shown in fig. 2, the terminal device may be equipped with an infrared camera and a visible light camera: the infrared camera collects an infrared image of the user's face for living body discrimination, and the visible light camera collects a visible light face image for face recognition. To enhance infrared imaging, one or more infrared fill lights may be fitted to the terminal device. In practical applications, the terminal device may be an intelligent terminal held by the user, such as a smartphone or laptop, a desktop computer, or a self-service terminal located on a service provider's premises. For example, in a scenario of recommending financial products, the terminal device may be the user's own smartphone or computer, or a self-service terminal at a bank or other institution used to recommend financial products to the user.
In this step, when the user performs a relevant operation (e.g., opens a relevant application), the infrared camera of the terminal device is triggered to capture one or more infrared images of the user's face. In practical applications, the infrared camera may also be triggered automatically when the user comes within a certain distance. After the infrared image is acquired, it is generally converted into a grayscale image, and living body discrimination is then performed in one of the following two ways. It should be understood that the following does not limit the living body discrimination method of the present invention; other methods based on infrared images may also be employed.
In the first method, optical flow data is extracted directly from the grayscale image converted from the infrared image. Optical flow refers to the instantaneous motion of pixels in an image; in practice, the motion pattern of a scene or object across a sequence of images is computed to obtain the optical flow data that characterizes the optical flow values. It can be understood that, to extract optical flow data, an image sequence covering the user's movement and rotation must be acquired. Since computing optical flow from images is a well-known technique, it is not detailed here. Once extracted, the optical flow data is input into a pre-trained first living body detection model to determine whether the user is a living body. The first living body detection model may be implemented with a supervised machine learning algorithm such as a decision tree or a support vector machine.
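As an illustrative sketch only (not the patent's implementation), the motion-feature-plus-classifier pipeline can be mocked up as follows. The frame-difference statistics below are a crude stand-in for real optical-flow values, and the synthetic "live"/"spoof" data is invented for the demonstration:

```python
import numpy as np
from sklearn.svm import SVC

def motion_features(frames):
    """Mean and std of inter-frame absolute differences: a crude
    stand-in for dense optical-flow magnitudes (hypothetical feature)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return np.array([diffs.mean(), diffs.std()])

# Synthetic training data: "live" sequences contain motion between frames,
# "spoof" sequences (a still photo held to the camera) do not.
rng = np.random.default_rng(0)
live = [rng.integers(0, 255, (5, 32, 32)) for _ in range(20)]
spoof = [np.repeat(rng.integers(0, 255, (1, 32, 32)), 5, axis=0) for _ in range(20)]
X = np.array([motion_features(s) for s in live + spoof])
y = np.array([1] * 20 + [0] * 20)  # 1 = living body, 0 = spoof

# Stand-in for the "first living body detection model" (here an SVM).
clf = SVC(kernel="rbf").fit(X, y)
```

A decision tree could be dropped in for the `SVC` without changing the rest, matching the patent's statement that either algorithm may be used.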
In the second method, a two-level wavelet decomposition is first performed on the grayscale image converted from the infrared image, yielding the second-level high-frequency subband images LH2, HL2 and HH2 (L denotes low frequency, H denotes high frequency). Any one of these second-level high-frequency subband images is then selected, target feature data is extracted from it (such as the histogram of oriented gradients HOG, the gray-level co-occurrence matrix GLCM, or the local binary pattern LBP), and the target feature data is input into a pre-trained second living body detection model to determine whether the user is a living body. The second living body detection model may likewise be implemented with a supervised machine learning algorithm such as a decision tree or a support vector machine. Since two-level wavelet decomposition is a well-known technique, it is not detailed here.
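A minimal sketch of obtaining the second-level high-frequency subbands, using an unnormalised Haar wavelet implemented directly in NumPy (the patent does not specify a wavelet family; the HOG/GLCM/LBP feature extraction and the classifier are omitted here):

```python
import numpy as np

def haar_dwt2(img):
    """One level of an (unnormalised) 2-D Haar wavelet transform."""
    lo_r = img[0::2, :] + img[1::2, :]   # row-wise low-pass
    hi_r = img[0::2, :] - img[1::2, :]   # row-wise high-pass
    ll = lo_r[:, 0::2] + lo_r[:, 1::2]   # low-low (approximation)
    lh = lo_r[:, 0::2] - lo_r[:, 1::2]   # low-high
    hl = hi_r[:, 0::2] + hi_r[:, 1::2]   # high-low
    hh = hi_r[:, 0::2] - hi_r[:, 1::2]   # high-high
    return ll, lh, hl, hh

def second_level_high_subbands(gray):
    """Decompose twice and return the LH2 / HL2 / HH2 subbands named in the text."""
    ll1, _, _, _ = haar_dwt2(np.asarray(gray, dtype=np.float64))
    _, lh2, hl2, hh2 = haar_dwt2(ll1)
    return lh2, hl2, hh2
```

Each level halves both image dimensions, so a 32x32 grayscale image yields 8x8 second-level subbands; a flat (featureless) image produces all-zero high-frequency subbands, as expected.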
Step S102: extracting a plurality of facial feature data from the face image of the user when the living body discrimination passes; and comparing the facial feature data with facial feature data of a pre-stored face image so as to determine the identity data of the user.
In this step, once the server determines through step S101 that the user is a living body, it triggers the visible light camera to capture a face image of the user. After the face image is collected, the position information of several marking points is determined from it, and the image is tilt-corrected using that position information. The marking points may be the left eye center, right eye center, nose center, left mouth corner and right mouth corner, or at least two of them; the marking points of a face image may be as shown in fig. 3. After tilt correction, the server may extract a plurality of facial feature data from the face image using a FaceNet network (a face feature extraction method) and compare them with the facial feature data of face images pre-stored in the database. Specifically, a similarity between facial feature data (such as cosine similarity, Jaccard similarity, Pearson correlation coefficient, or adjusted cosine similarity) may be calculated, and the comparison is considered successful when the similarity exceeds a threshold. Upon a successful comparison, the user's identity data, such as name, gender and identification number, can be determined. If the comparison fails, the user is not registered with the server, and the data processing method of the embodiment can continue only after a registration process is executed.
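The two geometric and matching steps above can be sketched as follows. This is an illustrative simplification: the roll angle is derived from the two eye-centre landmarks only, the embeddings are stand-ins for FaceNet output, and the 0.8 threshold and identity names are hypothetical:

```python
import math
import numpy as np

def roll_angle(left_eye, right_eye):
    """In-plane rotation (degrees) implied by the two eye-centre marking
    points; rotating the image by -roll_angle performs the tilt correction."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_identity(probe, gallery, threshold=0.8):
    """Return the enrolled identity whose stored embedding is most similar
    to the probe embedding, or None if nothing clears the threshold
    (i.e. the user is unregistered and must enrol first)."""
    best_id, best_sim = None, threshold
    for identity, emb in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

In the described method the `None` branch corresponds to the "comparison fails, registration required" case, after which the flow restarts.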
Step S103: portrait data corresponding to the user's identity data is obtained, and the portrait data is used to determine information objects matching the user for provision to the user.
In a scenario in which a financial product is recommended to a user, a server stores basic information (including age, occupation, income, and the like), a history transaction record, and portrait data of the user in addition to identity data of the user in advance. As a preferred scheme, the server can determine and store the portrait data through basic information and historical transaction records of the user. The user representation data may include data of dimensions such as age, gender, occupation, income, risk tolerance level, owned property status, historical transaction profit and loss status, and may also include data of other transaction related dimensions.
In this step, after the user's portrait data is acquired, an information object matching that portrait data may be selected from a plurality of information objects and provided to the user. In general, an information object is a data object or piece of virtual information in some specific form that acts as an information carrier, for example a financial product recommended to the user in electronic or paper form. Afterwards, if the user issues an instruction at the terminal device to purchase a financial product, face recognition may be performed again to secure the transaction, and the purchase procedure for the information object is executed once the face recognition succeeds.
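A toy sketch of matching information objects against portrait data; the patent does not specify a matching algorithm, and the field names, product structures and scoring rule below are entirely illustrative:

```python
def score(product, portrait):
    """Toy match score: one point for each portrait attribute
    the product's target profile agrees with (hypothetical rule)."""
    return sum(1 for k, v in product["target"].items() if portrait.get(k) == v)

def recommend(products, portrait):
    """Pick the information object whose target profile best matches the portrait."""
    return max(products, key=lambda p: score(p, portrait))

# Illustrative catalogue of information objects (financial products).
products = [
    {"name": "conservative-fund", "target": {"risk": "low"}},
    {"name": "equity-fund", "target": {"risk": "high", "income": "high"}},
]
# Illustrative user portrait built from basic info and transaction history.
portrait = {"age": 35, "risk": "high", "income": "high"}
```

A production system would weight dimensions such as risk tolerance and historical profit and loss rather than count exact matches, but the control flow is the same: portrait in, ranked information objects out.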
Fig. 4 is a schematic diagram of the specific execution steps of the data processing method in the embodiment of the present invention. As shown in fig. 4, in a scenario where a financial product is provided to a user, when the user triggers the corresponding function of the terminal device, the server first acquires an infrared image of the user through the infrared camera to perform living body discrimination. If the discrimination fails, the flow ends; if it passes, the visible light camera is invoked to collect a face image of the user and perform face recognition. If the identity cannot be determined, the user is not registered with the server and a user registration process must be executed (the data processing method of the embodiment may restart after registration completes); if the identity is determined, the portrait data corresponding to the user's identity data is acquired, and an information object matching the user is determined from it and provided to the user.
In the technical solution of the embodiment of the invention, an infrared camera installed on the terminal device captures an infrared image of the user's face for living body discrimination; during the discrimination, the optical flow data of the infrared image, or the target feature data of a second-level high-frequency subband image obtained by two-level wavelet decomposition, may be input into a preset living body detection model to obtain the discrimination result. When the living body discrimination passes, a visible light camera installed on the terminal device captures a face image of the user; after tilt correction, facial feature data is extracted from the face image and compared with the corresponding feature data of a pre-stored face image to determine the user's identity, the corresponding user portrait data is obtained, and a matching information object is provided to the user based on that portrait data. With this arrangement, information objects can be provided simply and conveniently while information security is ensured.
It should be noted that, for the convenience of description, the foregoing method embodiments are described as a series of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of acts described, and that some steps may in fact be performed in other orders or concurrently. Moreover, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required to implement the invention.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 5, a data processing apparatus 500 according to an embodiment of the present invention may include: a living body discriminating unit 501, a face recognizing unit 502, and an information object providing unit 503.
The living body discrimination unit 501 may be configured to capture an infrared image of a user's face with the terminal device and perform living body discrimination on the user according to the infrared image; the face recognition unit 502 may be configured to extract a plurality of facial feature data from a face image of the user when the living body discrimination passes, and to compare the facial feature data with facial feature data of a pre-stored face image to determine the user's identity data; the information object providing unit 503 may be configured to obtain portrait data corresponding to the user's identity data and use the portrait data to determine an information object matching the user for provision to the user.
In the embodiment of the invention, the terminal equipment is provided with an infrared camera and a visible light camera; the infrared image of the face of the user is collected by the infrared camera, and the face image of the user is collected by the visible light camera.
In a specific application, the living body discriminating unit 501 may be further configured to: determining optical flow data in the infrared image; and inputting the optical flow data into a preset first living body detection model to execute living body judgment.
In practical applications, the living body discriminating unit 501 may further be configured to: performing secondary wavelet decomposition on the infrared image to obtain at least one secondary high-frequency sub-band image of the infrared image; and extracting target characteristic data of any two-level high-frequency sub-band image, and inputting the extracted target characteristic data into a preset second living body detection model to execute living body judgment.
In some embodiments, the device may further comprise a tilt correction unit for: before extracting a plurality of facial feature data from the face image of the user, determining position information of a plurality of marking points from the face image of the user, and performing inclination correction on the face image of the user by using the position information.
In an alternative implementation, the face recognition unit 502 may be further configured to: and extracting a plurality of facial feature data from the face image of the user by using a FaceNet network.
Furthermore, in an embodiment of the present invention, the identity data includes: name data, gender data and identification number data; the first living body detection model and the second living body detection model are realized by adopting a decision tree algorithm or a support vector machine algorithm; the target feature comprises at least one of: histogram of oriented gradient HOG, gray level co-occurrence matrix GLCM, local binary pattern LBP; the marking points are at least two of the following points: left eye center, right eye center, nose center, left mouth corner, right mouth corner.
In the technical solution of the embodiment of the invention, an infrared camera installed on the terminal device captures an infrared image of the user's face for living body discrimination; during the discrimination, the optical flow data of the infrared image, or the target feature data of a second-level high-frequency subband image obtained by two-level wavelet decomposition, may be input into a preset living body detection model to obtain the discrimination result. When the living body discrimination passes, a visible light camera installed on the terminal device captures a face image of the user; after tilt correction, facial feature data is extracted from the face image and compared with the corresponding feature data of a pre-stored face image to determine the user's identity, the corresponding user portrait data is obtained, and a matching information object is provided to the user based on that portrait data. With this arrangement, information objects can be provided simply and conveniently while information security is ensured.
Fig. 6 shows an exemplary system architecture 600 of a data processing method or data processing apparatus to which embodiments of the present invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604 and a server 605 (this architecture is merely an example, and the components included in a specific architecture may be adjusted according to the specific application). The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. Various client applications, such as an information object recommendation class application (for example only), may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server providing various services, for example a server providing back-end support for the information object recommendation applications operated by users of the terminal devices 601, 602, 603 (for example only). The server may process a received information object recommendation request and feed the processing result (e.g., the computed information object to be recommended, for example only) back to the terminal devices 601, 602, 603.
It should be noted that the data processing method provided by the embodiment of the present invention is generally executed by the server 605, and accordingly, the data processing apparatus is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The invention further provides an electronic device. The electronic device of the embodiment of the invention comprises: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data processing method provided by the invention.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for implementing the electronic device of an embodiment of the present invention. The electronic device shown in fig. 7 is only an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the computer system 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, the processes described in the main step diagrams above may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the main step diagram. In the above-described embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program, when executed by the central processing unit 701, performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a living body discriminating unit, a face recognizing unit, and an information object providing unit. Where the names of these units do not in some cases constitute a limitation on the unit itself, for example, the living body discrimination unit may also be described as a "unit that provides a living body discrimination result to the face recognition unit".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to perform steps comprising: acquiring an infrared image of the face of a user by using a terminal device, and performing living body discrimination on the user according to the infrared image; extracting a plurality of facial feature data from a face image of the user when the living body discrimination passes; comparing the facial feature data with facial feature data of a pre-stored face image so as to determine identity data of the user; and acquiring portrait data corresponding to the identity data of the user, and using the portrait data to determine an information object matching the user to be provided to the user.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A data processing method, comprising:
acquiring an infrared image of the face of a user by using a terminal device, and performing living body discrimination on the user according to the infrared image;
extracting a plurality of facial feature data from the face image of the user when the living body discrimination passes; comparing the facial feature data with facial feature data of a pre-stored face image so as to determine the identity data of the user;
acquiring portrait data corresponding to the identity data of the user, and using the portrait data to determine an information object matching the user to be provided to the user.
2. The method according to claim 1, wherein:
the terminal equipment is provided with an infrared camera and a visible light camera;
the infrared image of the face of the user is collected by the infrared camera, and the face image of the user is collected by the visible light camera.
3. The method of claim 1, wherein performing living body discrimination on a user from the infrared image comprises:
determining optical flow data in the infrared image;
and inputting the optical flow data into a preset first living body detection model to perform living body discrimination.
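Claim 3 does not fix an optical-flow algorithm or a model. As a minimal sketch under stated assumptions, a single global translation between two consecutive infrared frames can be estimated with a Lucas-Kanade-style least-squares fit in NumPy; the resulting flow vector stands in for the "optical flow data" that a trained first detection model (e.g. an SVM, per claim 7) would consume.

```python
import numpy as np

def global_flow(prev, curr):
    """Least-squares estimate of a single translation between two frames."""
    Iy, Ix = np.gradient(prev.astype(float))       # spatial gradients
    It = curr.astype(float) - prev.astype(float)   # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    # Solve Ix*vx + Iy*vy = -It in the least-squares sense.
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                                       # (vx, vy)

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, 1, axis=1)  # synthetic rightward motion of 1 px
vx, vy = global_flow(frame1, frame2)
# A live face yields non-zero motion between consecutive infrared frames;
# this flow (or a dense flow field) is what would be fed into the
# "first living body detection model".
print(vx > 0.1)
```

A flat printed photograph moves rigidly with the spoofing medium, whereas a live face produces small independent motions, which is why flow statistics carry liveness information.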
4. The method of claim 3, wherein performing living body discrimination on a user from the infrared image comprises:
performing two-level wavelet decomposition on the infrared image to obtain at least one second-level high-frequency sub-band image of the infrared image;
and extracting target feature data from any one of the second-level high-frequency sub-band images, and inputting the extracted target feature data into a preset second living body detection model to perform living body discrimination.
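Claim 4 does not fix a wavelet basis; a Haar decomposition is the simplest choice and is enough to show where the second-level high-frequency sub-band images come from. The function below is an illustrative NumPy implementation, not the patent's:

```python
import numpy as np

def haar2d(x):
    """One level of a 2-D Haar decomposition: LL, LH, HL, HH sub-bands."""
    a = x[0::2, 0::2]
    b = x[0::2, 1::2]
    c = x[1::2, 0::2]
    d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in IR image
ll1, _, _, _ = haar2d(img)            # first decomposition level
ll2, lh2, hl2, hh2 = haar2d(ll1)      # second level, applied to LL1
# (lh2, hl2, hh2) are second-level high-frequency sub-band images; target
# features (HOG / GLCM / LBP) would be extracted from any one of them.
print(hh2.shape)  # → (16, 16)
```

The second level is obtained by decomposing the first-level low-frequency (LL) sub-band again; each level halves both image dimensions.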
5. The method of claim 4, further comprising:
before extracting the plurality of facial feature data from the face image of the user, determining position information of a plurality of marking points from the face image of the user, and performing tilt correction on the face image of the user by using the position information.
6. The method of claim 1, wherein extracting a plurality of facial feature data from the image of the user's face comprises:
and extracting a plurality of facial feature data from the face image of the user by using a FaceNet network.
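FaceNet itself is a trained deep network and is not reproduced here; what claim 6 relies on is that it maps face images to embedding vectors whose distance reflects identity. Below is a sketch of the comparison step only, with random unit vectors standing in for real 128-dimensional embeddings and an assumed distance threshold:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
enrolled_emb = normalize(rng.normal(size=128))  # pre-stored feature data
same_user = normalize(enrolled_emb + 0.05 * rng.normal(size=128))
other_user = normalize(rng.normal(size=128))

def same_person(a, b, threshold=1.0):
    """Squared L2 distance below an assumed threshold => same identity."""
    return float(np.sum((a - b) ** 2)) < threshold

print(same_person(enrolled_emb, same_user))    # small perturbation: match
print(same_person(enrolled_emb, other_user))   # unrelated vector: no match
```

In high dimensions two unrelated unit vectors are nearly orthogonal (squared distance close to 2), so a fixed threshold separates same-identity pairs from impostor pairs.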
7. The method according to claim 5, wherein:
the identity data comprises: name data, gender data, and identification number data;
the first living body detection model and the second living body detection model are implemented using a decision tree algorithm or a support vector machine algorithm;
the target feature comprises at least one of: a histogram of oriented gradients (HOG), a gray-level co-occurrence matrix (GLCM), and a local binary pattern (LBP);
the marking points comprise at least two of the following: a left eye center, a right eye center, a nose center, a left mouth corner, and a right mouth corner.
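Two of the target features named in claim 7 can be sketched in a few lines of NumPy. These are deliberately reduced, illustrative versions (a fixed 3x3 LBP neighbourhood and a single GLCM offset), not the patent's implementation; a production system would more likely use a library such as scikit-image:

```python
import numpy as np

def lbp(img):
    """Basic 3x3 local binary pattern code for each interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        # Neighbour plane shifted by (dy, dx) relative to the centre pixels.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def glcm(img, levels=8):
    """Gray-level co-occurrence counts for the 'one pixel right' offset."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels), dtype=int)
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m

img = np.arange(36, dtype=float).reshape(6, 6)  # tiny stand-in sub-band image
codes = lbp(img)
m = glcm(img)
print(codes.shape, int(m.sum()))  # → (4, 4) 30
```

LBP encodes local micro-texture and GLCM encodes gray-level co-occurrence statistics; both are texture descriptors, which is why they are useful on high-frequency sub-band images where print and screen spoofs differ from live skin.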
8. A data processing apparatus, comprising:
the living body discriminating unit is used for acquiring an infrared image of the face of a user by using a terminal device, and performing living body discrimination on the user according to the infrared image;
the face recognition unit is used for: extracting a plurality of facial feature data from the face image of the user when the living body discrimination passes; and comparing the facial feature data with facial feature data of a pre-stored face image so as to determine the identity data of the user;
and the information object providing unit is used for acquiring portrait data corresponding to the identity data of the user and determining an information object matched with the user to be provided for the user by using the portrait data.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201911303018.8A 2019-12-17 2019-12-17 Data processing method and device Pending CN111126229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911303018.8A CN111126229A (en) 2019-12-17 2019-12-17 Data processing method and device


Publications (1)

Publication Number Publication Date
CN111126229A (en) 2020-05-08

Family

ID=70498300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911303018.8A Pending CN111126229A (en) 2019-12-17 2019-12-17 Data processing method and device

Country Status (1)

Country Link
CN (1) CN111126229A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259872A (en) * 2021-05-19 2021-08-13 中国银行股份有限公司 Product processing method and system for unmanned network

Citations (12)

Publication number Priority date Publication date Assignee Title
CN105260731A (en) * 2015-11-25 2016-01-20 商汤集团有限公司 Human face living body detection system and method based on optical pulses
CN105718925A (en) * 2016-04-14 2016-06-29 苏州优化智能科技有限公司 Real person living body authentication terminal equipment based on near infrared and facial micro expression
CN107451575A (en) * 2017-08-08 2017-12-08 济南大学 A kind of face anti-fraud detection method in identity authorization system
CN107992794A (en) * 2016-12-30 2018-05-04 腾讯科技(深圳)有限公司 A kind of biopsy method, device and storage medium
CN108388878A (en) * 2018-03-15 2018-08-10 百度在线网络技术(北京)有限公司 The method and apparatus of face for identification
CN108460266A (en) * 2018-03-22 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for authenticating identity
CN108494778A (en) * 2018-03-27 2018-09-04 百度在线网络技术(北京)有限公司 Identity identifying method and device
CN109033940A (en) * 2018-06-04 2018-12-18 上海依图网络科技有限公司 A kind of image-recognizing method, calculates equipment and storage medium at device
CN109145817A (en) * 2018-08-21 2019-01-04 佛山市南海区广工大数控装备协同创新研究院 A kind of face In vivo detection recognition methods
CN109635760A (en) * 2018-12-18 2019-04-16 深圳市捷顺科技实业股份有限公司 A kind of face identification method and relevant device
CN110032915A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN110276301A (en) * 2019-06-24 2019-09-24 泰康保险集团股份有限公司 Face identification method, device, medium and electronic equipment


Non-Patent Citations (1)

Title
李德毅等: "《中国科协新一代信息技术系列丛书 人工智能导论》", 北京:中国科学技术出版社 *


Similar Documents

Publication Publication Date Title
CN110825765B (en) Face recognition method and device
US11443559B2 (en) Facial liveness detection with a mobile device
CN108509915B (en) Method and device for generating face recognition model
US11244435B2 (en) Method and apparatus for generating vehicle damage information
CN108491805B (en) Identity authentication method and device
WO2020006961A1 (en) Image extraction method and device
US20190050641A1 (en) Methods and apparatus for capturing, processing, training, and detecting patterns using pattern recognition classifiers
CN109086834B (en) Character recognition method, character recognition device, electronic equipment and storage medium
US11126827B2 (en) Method and system for image identification
US11367310B2 (en) Method and apparatus for identity verification, electronic device, computer program, and storage medium
CN108494778A (en) Identity identifying method and device
CN108549848B (en) Method and apparatus for outputting information
CN109697388A (en) Face identification method and device
CN112464803A (en) Image comparison method and device
WO2019178753A1 (en) Payment method, device and system
CN112418167A (en) Image clustering method, device, equipment and storage medium
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
US11348415B2 (en) Cognitive automation platform for providing enhanced automated teller machine (ATM) security
CN110852193A (en) Face recognition method and device
CN110619281A (en) Identity recognition method and device
CN111126229A (en) Data processing method and device
CN111783677A (en) Face recognition method, face recognition device, server and computer readable medium
CN112487943B (en) Key frame de-duplication method and device and electronic equipment
CN113936329A (en) Iris recognition method, iris recognition device, electronic equipment and computer readable medium
WO2021164122A1 (en) Intelligent user recognition method and device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220926

Address after: 12 / F, 15 / F, 99 Yincheng Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant after: Jianxin Financial Science and Technology Co.,Ltd.

Address before: 25 Financial Street, Xicheng District, Beijing 100033

Applicant before: CHINA CONSTRUCTION BANK Corp.

Applicant before: Jianxin Financial Science and Technology Co.,Ltd.