CN111738199A - Image information verification method, apparatus, computing device and medium - Google Patents

Image information verification method, apparatus, computing device and medium

Info

Publication number
CN111738199A
Authority
CN
China
Prior art keywords: image, face, sample, images, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010616687.7A
Other languages
Chinese (zh)
Other versions
CN111738199B (en)
Inventor
李桂锋
陈永录
张飞燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202010616687.7A
Publication of CN111738199A
Application granted
Publication of CN111738199B
Legal status: Active

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00: Pattern recognition
                    • G06F18/20: Analysing
                        • G06F18/24: Classification techniques
                            • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
            • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
                • G06Q20/00: Payment architectures, schemes or protocols
                    • G06Q20/38: Payment protocols; Details thereof
                        • G06Q20/40: Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
                            • G06Q20/401: Transaction verification
                                • G06Q20/4014: Identity check for transactions
                                    • G06Q20/40145: Biometric identity checks
                • G06Q40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
                    • G06Q40/02: Banking, e.g. interest calculation or account maintenance
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00: Arrangements for image or video recognition or understanding
                    • G06V10/40: Extraction of image or video features
                        • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
                            • G06V10/467: Encoded features or binary features, e.g. local binary patterns [LBP]
                • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
                            • G06V40/168: Feature extraction; Face representation
                                • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
                            • G06V40/172: Classification, e.g. identification
                            • G06V40/174: Facial expression recognition
                                • G06V40/176: Dynamic expression

Abstract

The present disclosure provides an image information verification method, including: acquiring a target video sequence of a target user when inputting information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces; respectively segmenting a face image from each frame image in the plurality of frame images to obtain a plurality of face images; extracting target features of a target video sequence from a plurality of face images according to a local binary pattern algorithm; and processing the target features using the trained classification model to verify the authenticity of the information to be verified. The present disclosure also provides an image information verification apparatus, a computing apparatus, and a medium.

Description

Image information verification method, apparatus, computing device and medium
Technical Field
The present disclosure relates to the field of computer vision, and more particularly, to an image information verification method, apparatus, computing apparatus, and medium.
Background
A micro-expression is an instinctive human behavior, part of the psychological stress response, and a non-verbal way of expressing one's emotional information. Micro-expressions are usually subconscious: they cannot be controlled by thought, suppressed, or faked. They are short-lived, typically lasting between 1/25 and 1/2 of a second, and the amplitude of facial muscle movement during a micro-expression is small.
Banking business has traditionally been handled offline: to transact, a user must visit a bank branch, which is time-consuming and laborious for the user. With the advent of 5G, online customer service has come to the fore. Services that once required an in-person visit to a branch can now be handled online, so users can complete the business they need quickly without going to a branch.
However, when a user transacts complex services online, such as loss reporting and revocation, information maintenance, or re-uploading expired document information, the user's personal identity information must be verified. It is difficult for the bank's online system to determine the authenticity of this information, which poses a security risk to user accounts.
Disclosure of Invention
One aspect of the present disclosure provides an image information verification method, including: acquiring a target video sequence of a target user when inputting information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces; respectively segmenting a face image from each frame image in the plurality of frame images to obtain a plurality of face images; extracting target features of the target video sequence from the plurality of face images according to a local binary pattern algorithm; and processing the target features using the trained classification model to verify the authenticity of the information to be verified.
Optionally, the method further comprises: training the classification model, wherein the training the classification model comprises: acquiring a plurality of sample video sequences and authenticity labels corresponding to the sample video sequences, wherein each sample video sequence in the plurality of sample video sequences comprises a plurality of sample images; extracting sample features from the sample images of each sample video sequence according to a local binary pattern algorithm; inputting the sample features into the classification model to obtain a classification result; and adjusting parameters of the local binary pattern algorithm and parameters of the classification model according to the classification result and the authenticity label.
Optionally, the method further comprises, before the segmenting the face image from each of the plurality of frame images respectively: determining a key point set according to each frame image; and carrying out normalization processing on each frame image according to the key point set.
Optionally, the normalizing each frame image according to the key point set includes: acquiring a template frame image, and determining a plurality of first face key points from the template frame image; determining, for each of the plurality of frame images, a plurality of second face key points from the frame image; determining the weight of the frame image according to the plurality of first face key points and the plurality of second face key points; and weighting each pixel value of the frame image by the weight to obtain the normalized frame image.
Optionally, the method further comprises, after the segmenting the face image from each of the plurality of frame images respectively: performing an interpolation operation on the plurality of face images according to a temporal interpolation algorithm so as to normalize the number of frames in the image sequence.
Optionally, wherein the segmenting the face image from each of the plurality of frame images respectively comprises: acquiring two pupil coordinates in the template frame image; determining an interested area according to the two pupil coordinates; and segmenting the face image from each frame image according to the region of interest.
Another aspect of the present disclosure provides an image information verification apparatus including: an acquisition module for acquiring a target video sequence of a target user when inputting information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces; a segmentation module for segmenting a face image from each frame image in the plurality of frame images to obtain a plurality of face images; a feature extraction module for extracting target features of the target video sequence from the plurality of face images according to a local binary pattern algorithm; and a classification module for processing the target features using the trained classification model so as to verify the authenticity of the information to be verified.
Optionally, the apparatus further comprises: a training module for training the classification model, wherein the training module comprises: a sample acquisition sub-module for acquiring a plurality of sample video sequences and authenticity labels corresponding to the sample video sequences, wherein each sample video sequence in the plurality of sample video sequences comprises a plurality of sample images; an extraction sub-module for extracting sample features from the sample images of each sample video sequence according to a local binary pattern algorithm; an input sub-module for inputting the sample features into the classification model to obtain a classification result; and an adjusting sub-module for adjusting the parameters of the local binary pattern algorithm and the parameters of the classification model according to the classification result and the authenticity label.
Another aspect of the disclosure provides a computing device comprising: one or more processors; storage means for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, the authenticity of the information to be verified is verified by acquiring a target video sequence of the target user while the information to be verified is input, extracting features from the target video sequence, and processing the target features with the trained classification model. The user can complete the information verification a service requires without visiting a designated offline branch, which expands the range of business a bank can handle online, improves the efficiency of the bank's information verification services, and reduces security risks to customer accounts.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows a system architecture of an image information verification method and an image information verification apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of an image information verification method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow diagram of training a classification model according to an embodiment of the present disclosure;
FIG. 4 schematically shows a flow chart of an image information verification method according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram of a normalization process for each frame image according to a set of keypoints, according to an embodiment of the disclosure;
FIG. 6 schematically illustrates a flow chart for segmenting a face image from each of a plurality of frame images according to an embodiment of the present disclosure;
FIG. 7 schematically shows a flow chart of an image information verification method according to another embodiment of the present disclosure;
FIG. 8 schematically shows a schematic diagram of a sequence of face images according to an embodiment of the present disclosure;
FIG. 9 schematically shows a schematic diagram of a set of face images according to an embodiment of the disclosure;
FIG. 10 schematically illustrates a diagram of dividing a face image set into feature blocks according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a schematic diagram of obtaining three-dimensional LBP features using an LBP-TOP algorithm for a set of face images, in accordance with an embodiment of the present disclosure;
FIG. 12 schematically shows a schematic diagram of a confusion matrix according to an embodiment of the present disclosure;
FIG. 13 schematically illustrates an online identity verification system operational interface diagram according to an embodiment of the present disclosure;
fig. 14 schematically shows a block diagram of an image information verification apparatus according to an embodiment of the present disclosure;
fig. 15 schematically shows a block diagram of an image information verification apparatus according to another embodiment of the present disclosure; and
FIG. 16 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
Embodiments of the present disclosure provide an image information verification method and an image information verification apparatus to which the method can be applied. The method comprises: acquiring a target video sequence of a target user while the user inputs information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces; segmenting a face image from each of the plurality of frame images to obtain a plurality of face images; extracting target features of the target video sequence from the plurality of face images according to a local binary pattern algorithm; and processing the target features using a trained classification model so as to verify the authenticity of the information to be verified.
Fig. 1 schematically shows a system architecture of an image information authentication method and an image information authentication apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 may include a terminal device 101, a server 102, and a network 103. Network 103 is the medium used to provide communication links between terminal devices 101 and server 102. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 102 over network 103 to receive or send messages and the like. Various communication client applications, such as a mobile banking client, a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, social platform software, etc. (by way of example only) may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices having an image capture device (e.g., a camera) and an input device (e.g., a keyboard, a touch screen, etc.), including but not limited to a smartphone, a tablet computer, a laptop portable computer, a desktop computer, and the like. The terminal apparatus 101 may collect a face image of a user when the user inputs information through an input device, and transmit the information input by the user and the collected face image to the server 102 through the network 103.
The server 102 may be a server that provides various services, such as a background management server that provides support for a website browsed by a user using the terminal apparatus 101. The background management server can analyze and verify the received face image of the user, obtain a verification result and determine the authenticity of the information input by the user according to the verification result.
It should be noted that the image information verification method provided by the embodiment of the present disclosure may be generally executed by the server 102. Accordingly, the image information verification apparatus provided by the embodiment of the present disclosure may be generally disposed in the server 102. The image information verification method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 102 and is capable of communicating with the terminal device 101 and/or the server 102. Accordingly, the image information verification apparatus provided in the embodiment of the present disclosure may also be provided in a server or a server cluster different from the server 102 and capable of communicating with the terminal device 101 and/or the server 102.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flowchart of an image information verification method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S240.
In operation S210, a target video sequence of a target user when inputting information to be verified is acquired.
Wherein the target video sequence comprises a plurality of frame images containing a human face.
According to an embodiment of the present disclosure, a video segment of the target user may be recorded while the user inputs the information to be verified, and the video segment is then serialized to obtain the target video sequence.
In operation S220, a face image is segmented from each of the plurality of frame images, resulting in a plurality of face images.
According to the embodiment of the disclosure, the region which does not contain the face information in the frame image can be deleted, and only the region containing the face information is segmented, so that the face image is obtained for subsequent processing.
In operation S230, a target feature of a target video sequence is extracted from a plurality of face images according to a local binary pattern algorithm.
According to an embodiment of the present disclosure, the local binary pattern algorithm may include, for example, the LBP-TOP (Local Binary Patterns on Three Orthogonal Planes) algorithm. Based on this, operation S230 may include, for example, dividing the plurality of face images in the target video sequence into A × B × C feature blocks in three-dimensional space using the LBP-TOP algorithm, where A is the number of column-wise blocks determined by the number of pixels per column of the face image, B is the number of row-wise blocks determined by the number of pixels per row of the face image, and C is the number of time-wise blocks determined by the duration of the target video sequence.
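As an illustration of this block division, the following is a minimal sketch under the assumption that the face images form a grayscale volume of shape (T, H, W); the function name is ours, not the patent's:

```python
# Sketch: split a (T, H, W) face-image volume into A x B x C cubes,
# where A counts column-wise blocks, B row-wise blocks, and C
# time-wise blocks, as described above.
import numpy as np

def split_into_blocks(volume: np.ndarray, a: int, b: int, c: int):
    t, h, w = volume.shape
    blocks = []
    for ti in np.array_split(np.arange(t), c):          # time-wise blocks
        for hi in np.array_split(np.arange(h), b):      # row-wise blocks
            for wi in np.array_split(np.arange(w), a):  # column-wise blocks
                blocks.append(volume[np.ix_(ti, hi, wi)])
    return blocks  # a * b * c sub-volumes
```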
In operation S240, the target features are processed using the trained classification model in order to verify the authenticity of the information to be verified.
When a person is lying, his or her facial expression changes subtly. Based on this principle, the classification model according to an embodiment of the present disclosure can, after training, recognize facial micro-expression features in images and thereby determine whether the user in the images is lying. The classification model may be, for example, an LSVM (Linear Support Vector Machine).
Based on this, operation S240 may include, for example, inputting the target features of the target video sequence into the trained LSVM to obtain a truthfulness score for the user, and then verifying the authenticity of the information input by the user according to that score.
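A minimal sketch of this classification step, assuming scikit-learn's LinearSVC stands in for the LSVM; the decision threshold of 0.0 is an illustrative assumption:

```python
# Sketch: score a target feature vector with a trained linear SVM and
# map the signed distance to the decision surface to a verdict.
from sklearn.svm import LinearSVC

def verify_authenticity(target_features, clf: LinearSVC) -> bool:
    score = clf.decision_function([target_features])[0]
    return score > 0.0  # True: information judged authentic
```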
According to the embodiments of the present disclosure, the authenticity of the information to be verified is verified by acquiring a target video sequence of the target user while the information to be verified is input, extracting features from the target video sequence, and processing the target features with the trained classification model. The user can complete the information verification a service requires without visiting a designated offline branch, which expands the range of business a bank can handle online, improves the efficiency of the bank's information verification services, and reduces security risks to customer accounts.
In addition to operations S210 to S240, the method may further include training a classification model in operation S310. Operation S310 may be performed, for example, before operation S210.
FIG. 3 schematically shows a flow diagram for training a classification model according to an embodiment of the disclosure.
As shown in FIG. 3, training the classification model may include operations S311-S314, for example.
In operation S311, a plurality of sample video sequences and authenticity labels corresponding to the plurality of sample video sequences are obtained. Wherein each sample video sequence of the plurality of sample video sequences comprises a plurality of sample images.
According to an embodiment of the present disclosure, a number of video segments of faces recorded while the subject is lying and while the subject is not lying can be collected, and the video segments containing face information are serialized into sample video sequences. Each sample video sequence has a corresponding authenticity label: if the face information contained in the sequence was recorded while the subject was lying, the label is false; otherwise, the label is true.
In operation S312, for each sample video sequence, sample features are extracted from sample images of the sample video sequence according to a local binary pattern algorithm.
According to an embodiment of the present disclosure, frame images containing face information may be extracted from the sample video sequence, and three-dimensional LBP features are then obtained from the set of frame images using the LBP-TOP algorithm.
In operation S313, the sample features are input into the classification model to obtain a classification result.
According to embodiments of the present disclosure, the classification model may be, for example, a linear support vector machine. Operation S313 may include, for example, creating the classification model and inputting the three-dimensional LBP features obtained in operation S312 into the linear support vector machine to obtain a classification result. The classification result may be, for example, true or false.
In operation S314, parameters of the local binary pattern algorithm and parameters of the classification model are adjusted according to the classification result and the authenticity label.
According to an embodiment of the present disclosure, the parameters of the local binary pattern algorithm may include, for example, any one or more of the number of column-wise blocks, the number of row-wise blocks, and the number of time-wise blocks.
According to an embodiment of the present disclosure, operations S312 to S314 may be repeatedly performed until the accuracy of the obtained classification result meets a preset requirement.
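A hedged sketch of this loop (operations S312 to S314), assuming scikit-learn's LinearSVC as the classifier and a small grid of block counts as the adjustable local binary pattern parameters; extract_lbp_top is a placeholder for the feature extractor:

```python
# Sketch: repeat feature extraction and classification while adjusting
# the LBP-TOP block parameters, keeping the configuration with the
# highest classification accuracy. Training accuracy is used here for
# brevity; the patent only requires accuracy to stop improving.
from itertools import product
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

def train_classifier(sequences, labels, extract_lbp_top):
    best_acc, best = -1.0, None
    for a, b, c in product([4, 8], [4, 8], [1, 2]):  # candidate A, B, C
        feats = [extract_lbp_top(seq, a, b, c) for seq in sequences]
        clf = LinearSVC().fit(feats, labels)
        acc = accuracy_score(labels, clf.predict(feats))
        if acc > best_acc:
            best_acc, best = acc, (clf, (a, b, c))
    return best  # (classifier, block parameters)
```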
In this embodiment, the precision of the classification result may be represented in the form of a confusion matrix: each column of the confusion matrix represents a predicted category, each row represents the true category of the data, and the values on the diagonal represent the model's prediction accuracy. Illustratively, a confusion matrix is shown in Table 1. Its diagonal values are 71.60 and 68.60; that is, the model predicts true with 71.60% accuracy and false with 68.60% accuracy.
                Predicted true    Predicted false
Actual true         71.60%            28.40%
Actual false        31.40%            68.60%

TABLE 1 (rows are true categories, columns are predicted categories; each row sums to 100%)
According to an embodiment of the present disclosure, if, after operations S312 to S314 have been repeated several times, the diagonal values of the confusion matrix reach a local optimum (that is, the accuracy of subsequently obtained classification results no longer increases), the accuracy of the classification result is considered to meet the preset requirement.
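A short sketch of how the diagonal of such a row-normalized confusion matrix can be computed (using scikit-learn is an assumption; any equivalent routine works):

```python
# Sketch: per-class accuracy = diagonal of the confusion matrix with
# rows normalized to percentages (rows: true classes, columns:
# predicted classes).
import numpy as np
from sklearn.metrics import confusion_matrix

def diagonal_accuracies(y_true, y_pred, labels=("true", "false")):
    cm = confusion_matrix(y_true, y_pred, labels=list(labels)).astype(float)
    cm = cm / cm.sum(axis=1, keepdims=True) * 100.0
    return np.diag(cm)  # e.g. array([71.60, 68.60])
```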
Fig. 4 schematically shows a flowchart of an image information verification method according to another embodiment of the present disclosure.
As shown in fig. 4, the method may further include operations S410 to S420 in addition to operations S210 to S240. Wherein operation S410 may be performed before the face image is segmented from each of the plurality of frame images, respectively.
In operation S410, a set of key points is determined from each frame image.
According to an embodiment of the present disclosure, a preset number of face key points can be calibrated for each frame image using an ASM (Active Shape Model) algorithm, and all face key points corresponding to a frame image are taken as the key point set of that frame image. Illustratively, in this embodiment the preset number may be 68.
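The patent names ASM for this calibration; purely as an illustrative stand-in, the following sketch produces an equivalent 68-point key point set per face using dlib's landmark predictor (a different algorithm than ASM, and the model file name is dlib's conventional one, not from the patent):

```python
# Illustrative substitute for the ASM calibration step: detect faces
# and return one 68-point key point set per detected face.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def keypoint_sets(gray_frame):
    sets = []
    for rect in detector(gray_frame):
        shape = predictor(gray_frame, rect)
        sets.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return sets  # k key point sets, one per candidate face
```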
According to an embodiment of the present disclosure, when multiple key point sets are identified in a frame (for example, when several candidate faces are detected), the sets need to be filtered. Specifically, for the k key point sets marked in a frame (k ≥ 1), the relative distance between any two key points within each set is calculated according to Formula I, and the set with the largest L^(k)(I_i) is taken as the key point set of the face to be detected:

L^(k)(I_i) = max_{p,q} || K_p − K_q ||, p, q ∈ [1, 68], p ≠ q (Formula I)

where I_i denotes the i-th frame image of a video clip (i ≥ 1), L^(k)(I_i) denotes the relative distance between the two farthest-apart key points in the k-th key point set of the i-th frame image, and K_p and K_q are any two key points in the same key point set.
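A minimal sketch of this selection rule, reading Formula I as the largest pairwise distance within a key point set:

```python
# Sketch of Formula I: keep the key point set whose two farthest-apart
# points are farthest apart, i.e. the most prominent face in the frame.
import numpy as np

def select_face_keypoints(keypoint_sets):
    def spread(points):
        pts = np.asarray(points, dtype=float)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return dists.max()  # L^(k)(I_i)
    return max(keypoint_sets, key=spread)
```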
In operation S420, each frame image is normalized according to the key point set.
FIG. 5 schematically shows a flow diagram of a normalization process for each frame image according to a set of keypoints, according to an embodiment of the disclosure.
As shown in fig. 5, operation S420 may include, for example, operations S521 to S524.
In operation S521, a template frame image is acquired, and a plurality of first face key points are determined from the template frame image.
In operation S522, for each of the plurality of frame images, a plurality of second face key points are determined from the frame image.
In operation S523, a weight of the frame image is determined according to the plurality of first face key points and the plurality of second face key points.
In operation S524, each pixel value of the frame image is weighted by the weight to obtain a normalized frame image.
According to an embodiment of the present disclosure, a frame image with a centered face position and a correct angle can be selected from all frame images and used as the template frame image. A plurality of first face key points are then determined from the key point set of the template frame image. Illustratively, in the present embodiment, all the key points in the key point set of the template frame image are taken as the first face key points.
According to an embodiment of the present disclosure, a plurality of second face key points may be determined from the key point set of each frame image other than the template frame image. Illustratively, in the present embodiment, all the key points in each such key point set are taken as the second face key points.
According to an embodiment of the present disclosure, a correspondence T between the key point set of the template frame image and the key point set of the first frame image in the video sequence may be established using an LWM (Local Weighted Mean) function:
T = LWM(S_mod, S_1) (Formula II)

where I_mod is the template frame image, I_1 is the first frame image of the video sequence, S_mod is the key point set of the template frame image, S_1 is the key point set of the first frame image, and LWM(·) is the LWM function.
According to an embodiment of the present disclosure, when I_mod is itself the first frame of the video sequence, T is identically 1.
According to an embodiment of the present disclosure, T may be taken as the weight of a frame image, and the frame images in the same video sequence may then be normalized toward the pattern of the template frame image according to the following formula (Formula III):
I′_i = T · I_i (Formula III)

where I′_i is the normalized i-th frame image.
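The prose above applies T as a pixel-level weight; geometrically, an LWM correspondence between key point sets acts as a spatial mapping. The following sketch is one interpretation only, substituting skimage's piecewise-affine transform for the LWM function (an assumption, not the patent's exact transform):

```python
# Sketch of operations S521-S524: estimate a mapping from the
# template's key points to the frame's key points and re-project the
# frame onto the template pose.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def normalize_frame(frame, frame_keypoints, template_keypoints):
    tform = PiecewiseAffineTransform()
    tform.estimate(np.asarray(template_keypoints), np.asarray(frame_keypoints))
    return warp(frame, tform)  # normalized frame I'_i
```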
Fig. 6 schematically shows a flowchart for segmenting a face image from each of a plurality of frame images according to an embodiment of the present disclosure.
As shown in fig. 6, operation S220 may include, for example, operations S610 to S630.
In operation S610, two pupil coordinates in the template frame image are acquired.
In operation S620, a region of interest is determined according to the two pupil coordinates.
In operation S630, a face image is segmented from each frame image according to the region of interest.
According to the embodiment of the present disclosure, the two pupil coordinates may be obtained through image recognition or calibrated manually.
According to the embodiment of the disclosure, the pupil distance between two pupils can be calculated according to the two pupil coordinates, and then the region where the human face is located, namely the region of interest, is calculated according to the pupil distance. Illustratively, in the present embodiment, the width and height of the region of interest are twice the interpupillary distance.
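A minimal sketch of this ROI computation (centering the ROI on the midpoint between the pupils is our assumption; the text only fixes its width and height):

```python
# Sketch of operations S610-S630: ROI width and height are twice the
# interpupillary distance; crop the ROI from a frame.
import numpy as np

def crop_face(frame, left_pupil, right_pupil):
    (x1, y1), (x2, y2) = left_pupil, right_pupil
    d = np.hypot(x2 - x1, y2 - y1)            # interpupillary distance
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    top, bottom = int(cy - d), int(cy + d)    # height = 2 * d
    left, right = int(cx - d), int(cx + d)    # width = 2 * d
    return frame[max(top, 0):bottom, max(left, 0):right]
```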
According to an embodiment of the present disclosure, segmenting the image inside the region of interest from each frame image yields the face image; the background region that contains no face information is discarded and only the region containing face information is retained, which reduces interference from environmental factors and improves the accuracy of micro-expression recognition.
Fig. 7 schematically shows a flowchart of an image information verification method according to another embodiment of the present disclosure.
As shown in fig. 7, the method may further include, in addition to operations S210 to S240 and S310, an operation S710 of performing an interpolation operation on the plurality of face images according to a temporal interpolation algorithm so as to normalize the number of frames in the image sequence.
According to an embodiment of the present disclosure, if the video sequences have different numbers of frames, an interpolation operation may be performed on each video sequence using a TIM (Temporal Interpolation Model) to unify them to the same number of frames.
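As a rough illustration only: the sketch below uses plain linear resampling along the time axis in place of TIM, which itself interpolates along a low-dimensional curve fitted to the sequence:

```python
# Simplified stand-in for TIM: linearly resample a (T, H, W)
# face-image sequence to a fixed number of frames.
import numpy as np

def resample_frames(frames: np.ndarray, target_len: int) -> np.ndarray:
    t = frames.shape[0]
    pos = np.linspace(0.0, t - 1.0, target_len)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, t - 1)
    w = (pos - lo)[:, None, None]       # blend weight per output frame
    return (1.0 - w) * frames[lo] + w * frames[hi]
```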
According to an embodiment of the present disclosure, operation S710 may be performed, for example, after segmenting a face image from each of a plurality of frame images, respectively.
The method of embodiments of the present disclosure is further described below in conjunction with fig. 8-13 and the specific embodiments. Those skilled in the art will appreciate that the following example embodiments are only for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Step 1: collecting a plurality of video segments containing face information, serializing each video segment into a video sequence, and processing each video sequence by using the steps 1.1 to 1.3 to obtain a preprocessed face image sequence.
Step 1.1: selecting a model frame image of a video clip, and calibrating a key point set of a face to be recognized by using an ASM face calibration algorithm for the model frame image and the first frame image of a video sequence respectively.
Step 1.2: and (3) establishing a corresponding relation model for the key point set of the model frame image obtained in the step (1.1) and the first frame image, and respectively inputting the rest frame images in the video sequence into the corresponding relation model to obtain the video sequence with uniform posture.
Step 1.3: and (3) performing background segmentation on the video sequence with uniform posture obtained in the step (1.2) according to the interpupillary distance of the face to be recognized to obtain a plurality of segmented face images, namely a face image sequence, as shown in fig. 8.
Step 2: unifying the frame number of the face image sequence obtained in the step 1 by using a TIM algorithm to obtain a video sequence after the difference, namely a face image set, as shown in FIG. 9.
Step 3: Apply the LBP-TOP algorithm to the face image set obtained in step 2 to obtain three-dimensional LBP features.
More specifically, as shown in fig. 10, the face image set is divided in three-dimensional space into 8 × 8 × 1 feature blocks (also called cubes), where the first parameter (8) is the number of column-wise blocks, the second parameter (8) is the number of row-wise blocks, and the third parameter (1) is the number of time-wise blocks. As shown in fig. 11, features are extracted from the face image set of fig. 10 using an LBP model with uniform coding, a radius R of 2, and a sampling number P of 8. First, LBP features are extracted in the three directions XY, XT, and YT from a single cube, and the three LBP features are concatenated to form the LBP-TOP feature of that cube (LBP-TOP_1); the complete set is then traversed to obtain the overall LBP-TOP feature (LBP-TOP_2).
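A hedged sketch of the per-cube feature extraction with these parameters (P = 8, R = 2, uniform coding); as a simplification, skimage's 2-D LBP is applied to the central slice of each orthogonal plane rather than to every slice:

```python
# Sketch: uniform LBP histograms (P=8, R=2) on the XY, XT, and YT
# planes of one cube, concatenated into that cube's feature (LBP-TOP_1).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top_cube(cube: np.ndarray, P: int = 8, R: int = 2) -> np.ndarray:
    t, h, w = cube.shape
    planes = [cube[t // 2], cube[:, h // 2, :], cube[:, :, w // 2]]  # XY, XT, YT
    n_bins = P + 2  # number of uniform LBP codes
    hists = [
        np.histogram(local_binary_pattern(p, P, R, method="uniform"),
                     bins=n_bins, range=(0, n_bins))[0]
        for p in planes
    ]
    return np.concatenate(hists)  # concatenating all cubes gives LBP-TOP_2
```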
Step 4: Input the three-dimensional LBP features into the LSVM classifier to obtain the micro-expression categories corresponding to the face information contained in the face image set.
Step 5: Determine and analyze a confusion matrix from the LSVM classification results obtained in step 4, then adjust the LBP-TOP parameters and repeat steps 3 and 4 until the regions of the confusion matrix are clearly distinguished, as shown in FIG. 12.
Step 6: Build an online information verification system from the template face key point set, the correspondence relation, the face segmentation size, the TIM algorithm parameters, the LSVM algorithm parameters, and the block parameters of the LBP-TOP feature extraction algorithm.
Step 7: When a user transacts business online, record the conversation between the customer service staff and the customer in real time; the online information verification system analyzes the micro-expression changes of the user during the conversation from the recorded video segments, obtains an analysis result, and displays it on the system interface in real time, so that the customer service staff can judge the authenticity of the information the user provides.
FIG. 13 illustrates an operation interface of the online identity verification system according to an embodiment of the present disclosure. As shown in fig. 13, the left side is the video acquisition and analysis result display area, and the right side is the audit and service handling area. The auditor can determine the authenticity of the information provided by the user when transacting business by referring to the user's video image and the analysis result shown in the display area.
Fig. 14 schematically shows a block diagram of an image information verification apparatus according to an embodiment of the present disclosure.
As shown in fig. 14, the image information verification apparatus 1400 includes an acquisition module 1410, a segmentation module 1420, a feature extraction module 1430, and a classification module 1440.
Specifically, the obtaining module 1410 is configured to obtain a target video sequence when the target user inputs information to be verified, where the target video sequence includes a plurality of frame images including a human face.
The segmentation module 1420 is configured to segment a face image from each of the plurality of frame images to obtain a plurality of face images.
The feature extraction module 1430 is configured to extract a target feature of the target video sequence from the plurality of face images according to a local binary pattern algorithm.
The classification module 1440 is configured to process the target features using the trained classification model so as to verify the authenticity of the information to be verified.
According to the embodiments of the present disclosure, the authenticity of the information to be verified is verified by acquiring a target video sequence of the target user while the information to be verified is input, extracting features from the target video sequence, and processing the target features with the trained classification model. The user can complete the information verification a service requires without visiting a designated offline branch, which expands the range of business a bank can handle online, improves the efficiency of the bank's information verification services, and reduces security risks to customer accounts.
Fig. 15 schematically shows a block diagram of an image information verification apparatus according to another embodiment of the present disclosure.
As shown in fig. 15, the image information verification apparatus 1500 may further include a training module 1510 in addition to the acquisition module 1410, the segmentation module 1420, the feature extraction module 1430, and the classification module 1440.
Specifically, the training module 1510 is configured to train the classification model, wherein the training module may include:
the sample obtaining sub-module 1511 is configured to obtain a plurality of sample video sequences and authenticity labels corresponding to the plurality of sample video sequences, where each sample video sequence in the plurality of sample video sequences includes a plurality of sample images.
The extracting sub-module 1512 is configured to, for each sample video sequence, extract sample features from sample images of the sample video sequence according to a local binary pattern algorithm.
The input sub-module 1513 is used for inputting the sample features into the classification model to obtain a classification result.
And an adjusting submodule 1514, configured to adjust parameters of the local binary pattern algorithm and parameters of the classification model according to the classification result and the authenticity label.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the acquisition module 1410, the segmentation module 1420, the feature extraction module 1430, the classification module 1440, and the training module 1510 may be combined in one module to be implemented, or any of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 1410, the segmenting module 1420, the feature extracting module 1430, the classifying module 1440, and the training module 1510 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the acquisition module 1410, segmentation module 1420, feature extraction module 1430, classification module 1440, and training module 1510 may be implemented at least in part as a computer program module that, when executed, may perform corresponding functions.
FIG. 16 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure. The computer system illustrated in FIG. 16 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 16, the computer system 1600 includes a processor 1610 and a computer-readable storage medium 1620. The computer system 1600 may perform a method according to an embodiment of the disclosure.
In particular, processor 1610 may comprise, for example, a general-purpose microprocessor, an instruction set processor and/or related chip set and/or a special-purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 1610 may also include on-board memory for caching purposes. Processor 1610 may be a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage media 1620, for example, may be non-volatile computer-readable storage media, specific examples include, but are not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 1620 may comprise a computer program 1621, which computer program 1621 may comprise code/computer-executable instructions that, when executed by the processor 1610, cause the processor 1610 to perform a method according to an embodiment of the disclosure, or any variant thereof.
The computer program 1621 may be configured with computer program code, for example comprising computer program modules. For example, in an example embodiment, the code in computer program 1621 may include one or more program modules, for example modules 1621A, 1621B, and so on. It should be noted that the division and number of modules are not fixed, and those skilled in the art may use suitable program modules or program module combinations according to the actual situation; when executed by the processor 1610, these program modules enable the processor 1610 to perform the method according to the embodiments of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining module 1410, the segmentation module 1420, the feature extraction module 1430, the classification module 1440, and the training module 1510 may be implemented as a computer program module described with reference to fig. 16, which, when executed by the processor 1610, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (10)

1. An image information verification method, comprising:
acquiring a target video sequence of a target user when inputting information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces;
respectively segmenting a face image from each frame image in the plurality of frame images to obtain a plurality of face images;
extracting target features of the target video sequence from the plurality of face images according to a local binary pattern algorithm; and
processing the target features using a trained classification model in order to verify the authenticity of the information to be verified.
2. The method of claim 1, further comprising:
training the classification model, wherein the training the classification model comprises:
acquiring a plurality of sample video sequences and authenticity labels corresponding to the sample video sequences, wherein each of the sample video sequences comprises a plurality of sample images;
extracting sample features from the sample images of each sample video sequence according to a local binary pattern algorithm;
inputting the sample features into the classification model to obtain a classification result; and
adjusting parameters of the local binary pattern algorithm and parameters of the classification model according to the classification result and the authenticity labels.
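For claim 2, a minimal training sketch that builds on video_to_feature above. "Adjusting parameters of the local binary pattern algorithm and parameters of the classification model" is rendered here as a joint grid search over the LBP radius and an SVM regularization constant, scored by cross-validated accuracy against the authenticity labels; this tuning procedure is an assumption, since the claim does not prescribe one.

    # Illustrative training step for claim 2: tune the LBP parameters and the
    # classifier parameters jointly against the authenticity labels.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def train_classification_model(sample_videos, labels, face_cascade):
        best_score, best_model, best_lbp = -1.0, None, None
        for radius in (1, 2, 3):                    # candidate LBP parameters
            n_points = 8 * radius
            X = np.stack([video_to_feature(v, face_cascade, n_points, radius)
                          for v in sample_videos])
            search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=3)
            search.fit(X, labels)                   # classify against the labels
            if search.best_score_ > best_score:     # keep the best combination
                best_score = search.best_score_
                best_model = search.best_estimator_
                best_lbp = {"n_points": n_points, "radius": radius}
        return best_model, best_lbp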
3. The method of claim 1, further comprising, prior to said segmenting the face image from each of the plurality of frame images:
determining a key point set according to each frame image; and
normalizing each frame image according to the key point set.
4. The method of claim 3, wherein said normalizing each frame image according to the key point set comprises:
acquiring a template frame image, and determining a plurality of first face key points from the template frame image;
determining, for each of the plurality of frame images, a plurality of second face key points from the frame image;
determining a weight of the frame image according to the plurality of first face key points and the plurality of second face key points; and
weighting each pixel value of the frame image by the weight to obtain the normalized frame image.
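One way to read the weighting of claim 4 in code: the weight of a frame measures how closely its face key points match those of the template frame, and every pixel of the frame is scaled by that weight. The inverse-mean-distance weight below is purely illustrative, since the claim does not give a formula, and the key point arrays are assumed to come from any face landmark detector.

    # Illustrative normalization for claim 4. The inverse-distance weight is an
    # assumption; the claim only says the weight is determined from the first
    # (template) and second (per-frame) face key points.
    import numpy as np

    def frame_weight(template_kps, frame_kps):
        # template_kps, frame_kps: arrays of shape (K, 2) with matching landmarks.
        mean_dist = np.linalg.norm(template_kps - frame_kps, axis=1).mean()
        return 1.0 / (1.0 + mean_dist)  # identical key points give weight 1.0

    def normalize_frame(frame, template_kps, frame_kps):
        w = frame_weight(template_kps, frame_kps)
        return frame.astype(np.float32) * w  # scale every pixel value by the weight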
5. The method of claim 1, further comprising, after said segmenting the face image from each of the plurality of frame images:
performing an interpolation operation on the plurality of face images according to a temporal interpolation algorithm so as to normalize the number of frames in the image sequence.
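A sketch of the frame-count normalization of claim 5: the face-image sequence is resampled along the time axis to a fixed length. Linear interpolation is an assumed choice; the claim only recites a temporal interpolation algorithm.

    # Illustrative temporal normalization for claim 5: resample a sequence of
    # face images to a fixed frame count by linear interpolation over time.
    import numpy as np
    from scipy.interpolate import interp1d

    def normalize_frame_count(face_images, target_len):
        faces = np.asarray(face_images, dtype=np.float32)  # shape (T, H, W)
        t_old = np.linspace(0.0, 1.0, num=faces.shape[0])
        t_new = np.linspace(0.0, 1.0, num=target_len)
        return interp1d(t_old, faces, axis=0)(t_new)       # shape (target_len, H, W)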
6. The method of claim 4, wherein said segmenting the face image from each of the plurality of frame images comprises:
acquiring two pupil coordinates in the template frame image;
determining a region of interest according to the two pupil coordinates; and
segmenting the face image from each frame image according to the region of interest.
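Claim 6 in sketch form: the two pupil coordinates of the template frame fix the scale and position of a face region of interest, which is then cut out of every frame. The box proportions (width 1.6 and height 2.2 times the inter-pupil distance) are illustrative assumptions; the claim does not specify them.

    # Illustrative pupil-based segmentation for claim 6. The box proportions
    # are assumptions; the claim leaves them unspecified.
    import numpy as np

    def roi_from_pupils(left_pupil, right_pupil):
        (lx, ly), (rx, ry) = left_pupil, right_pupil
        d = np.hypot(rx - lx, ry - ly)              # inter-pupil distance
        cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # mid-point between the pupils
        x0 = int(cx - 0.8 * d)                      # half-width of 0.8 * d per side
        y0 = int(cy - 0.6 * d)                      # the eyes sit in the upper face
        return x0, y0, int(1.6 * d), int(2.2 * d)

    def segment_face(frame, roi):
        x, y, w, h = roi
        return frame[max(y, 0):y + h, max(x, 0):x + w]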
7. An image information verification apparatus, comprising:
an acquisition module, configured to acquire a target video sequence of a target user captured while the target user inputs information to be verified, wherein the target video sequence comprises a plurality of frame images containing human faces;
a segmentation module, configured to segment a face image from each of the plurality of frame images to obtain a plurality of face images;
a feature extraction module, configured to extract target features of the target video sequence from the plurality of face images according to a local binary pattern algorithm; and
a classification module, configured to process the target features using a trained classification model so as to verify the authenticity of the information to be verified.
8. The apparatus of claim 7, further comprising:
a training module, configured to train the classification model, wherein the training module comprises:
a sample acquisition sub-module, configured to acquire a plurality of sample video sequences and authenticity labels corresponding to the sample video sequences, wherein each of the sample video sequences comprises a plurality of sample images;
an extraction sub-module, configured to extract sample features from the sample images of each sample video sequence according to a local binary pattern algorithm;
an input sub-module, configured to input the sample features into the classification model to obtain a classification result; and
an adjustment sub-module, configured to adjust parameters of the local binary pattern algorithm and parameters of the classification model according to the classification result and the authenticity labels.
9. A computing device, comprising:
one or more processors;
a memory for storing one or more computer programs,
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1 to 6.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
CN202010616687.7A 2020-06-30 2020-06-30 Image information verification method, device, computing device and medium Active CN111738199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010616687.7A CN111738199B (en) 2020-06-30 2020-06-30 Image information verification method, device, computing device and medium

Publications (2)

Publication Number Publication Date
CN111738199A true CN111738199A (en) 2020-10-02
CN111738199B CN111738199B (en) 2023-12-19

Family

ID=72653897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010616687.7A Active CN111738199B (en) 2020-06-30 2020-06-30 Image information verification method, device, computing device and medium

Country Status (1)

Country Link
CN (1) CN111738199B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650555A (en) * 2015-11-02 2017-05-10 苏宁云商集团股份有限公司 Real person verifying method and system based on machine learning
CN109977769A (en) * 2019-02-21 2019-07-05 西北大学 A kind of method of micro- Expression Recognition under low resolution environment
CN110427899A (en) * 2019-08-07 2019-11-08 网易(杭州)网络有限公司 Video estimation method and device, medium, electronic equipment based on face segmentation

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699236A (en) * 2020-12-22 2021-04-23 浙江工业大学 Deepfake detection method based on emotion recognition and pupil size calculation
CN112699236B (en) * 2020-12-22 2022-07-01 浙江工业大学 Deepfake detection method based on emotion recognition and pupil size calculation
CN112686232A (en) * 2021-03-18 2021-04-20 平安科技(深圳)有限公司 Teaching evaluation method and device based on micro expression recognition, electronic equipment and medium
CN113570689A (en) * 2021-07-28 2021-10-29 杭州网易云音乐科技有限公司 Portrait cartoon method, apparatus, medium and computing device
CN113570689B (en) * 2021-07-28 2024-03-01 杭州网易云音乐科技有限公司 Portrait cartoon method, device, medium and computing equipment

Also Published As

Publication number Publication date
CN111738199B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
CN109726624B (en) Identity authentication method, terminal device and computer readable storage medium
US20230081645A1 (en) Detecting forged facial images using frequency domain information and local correlation
CN106897658B (en) Method and device for identifying human face living body
CN107690657B (en) Finding merchants from images
CN108280477B (en) Method and apparatus for clustering images
Hoang Ngan Le et al. Robust hand detection and classification in vehicles and in the wild
JP2020504358A (en) Image-based vehicle damage evaluation method, apparatus, and system, and electronic device
CN108717663B (en) Facial tag fraud judging method, device, equipment and medium based on micro expression
CN111222500B (en) Label extraction method and device
CN108229376B (en) Method and device for detecting blinking
US11126827B2 (en) Method and system for image identification
CN111738199B (en) Image information verification method, device, computing device and medium
CN107545241A Neural network model training and liveness detection method, device and storage medium
CN109919244B (en) Method and apparatus for generating a scene recognition model
JP2022521038A (en) Face recognition methods, neural network training methods, devices and electronic devices
CN110503099B (en) Information identification method based on deep learning and related equipment
CN111241989A (en) Image recognition method and device and electronic equipment
CN108491823B (en) Method and device for generating human eye recognition model
CN113723288B (en) Service data processing method and device based on multi-mode hybrid model
CN108388889B (en) Method and device for analyzing face image
CN108509994B (en) Method and device for clustering character images
Chandran et al. Missing child identification system using deep learning and multiclass SVM
CN108399401B (en) Method and device for detecting face image
CN113515988A (en) Palm print recognition method, feature extraction model training method, device and medium
CN109064464B (en) Method and device for detecting burrs of battery pole piece

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant