CN114742561A - Face recognition method, apparatus, device and storage medium - Google Patents

Face recognition method, apparatus, device and storage medium

Info

Publication number
CN114742561A
CN114742561A
Authority
CN
China
Prior art keywords
face
image frame
human
features
sequence
Prior art date
Legal status
Pending
Application number
CN202110017428.7A
Other languages
Chinese (zh)
Inventor
王少鸣
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110017428.7A
Publication of CN114742561A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00 Payment architectures, schemes or protocols
    • G06Q 20/38 Payment protocols; Details thereof
    • G06Q 20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401 Transaction verification
    • G06Q 20/4014 Identity check for transactions
    • G06Q 20/40145 Biometric identity checks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Abstract

The application discloses a face recognition method, apparatus, device, and storage medium, belonging to the field of face recognition. The method comprises: acquiring an image frame sequence; in response to a face appearing in the image frame sequence, extracting face time-series features from the image frame sequence, the face time-series features being the face features contained in consecutive image frames; and in response to the face time-series features satisfying a face recognition condition, performing identity recognition on the face features contained in the image frame sequence. By continuously acquiring the image frame sequence and, when a face appears in it, performing identity recognition according to extracted face time-series features that satisfy the face recognition condition, the user does not need to manually select a recognition mode, which improves face recognition efficiency.

Description

Face recognition method, apparatus, device and storage medium
Technical Field
The present application relates to the field of face recognition, and in particular to a face recognition method, apparatus, device, and storage medium.
Background
Face payment is a payment method based on face recognition technology: the user only needs to face the camera of a cash register device, the device matches the recognized face information against the user's pre-stored personal information, and the payment transaction is completed when the match succeeds.
Taking a desktop cash register device as an example: when a customer checks out after purchasing goods, the desktop cash register generates a purchase order according to the goods the customer selected, a cashier selects a payment mode (such as face payment) on the device, and the customer then stands in front of the device's camera and pays by face scanning.
In this scheme, the desktop cash register requires the cashier to manually select the face payment mode before its camera can be started to recognize the customer's face. The operation is cumbersome and reduces the settlement efficiency of the purchase order.
Disclosure of Invention
The embodiments of the present application provide a face recognition method, apparatus, device, and storage medium. By continuously acquiring an image frame sequence and, when a face appears in the sequence, performing identity recognition according to extracted face time-series features that satisfy a face recognition condition, the user does not need to manually select a recognition mode, which improves face recognition efficiency. The technical solution includes the following:
According to one aspect of the present application, a face recognition method is provided, including the steps of:
acquiring an image frame sequence;
in response to a face appearing in the image frame sequence, extracting face time-series features from the image frame sequence, the face time-series features being the face features contained in consecutive image frames;
and in response to the face time-series features satisfying a face recognition condition, performing identity recognition on the face features contained in the image frame sequence.
According to another aspect of the present application, a face recognition apparatus is provided, the apparatus including:
an acquisition module, configured to acquire an image frame sequence;
an extraction module, configured to extract, in response to a face appearing in the image frame sequence, face time-series features from the image frame sequence, the face time-series features being the face features contained in consecutive image frames;
and a processing module, configured to perform identity recognition on the face features contained in the image frame sequence in response to the face time-series features satisfying a face recognition condition.
In an optional embodiment, the face time-series features satisfy the face recognition condition in at least one of the following cases:
the face features in the face time-series features belong to the same face;
the area of the face region in the face time-series features increases continuously;
the eyes in the face time-series features are in a continuous gazing state.
In an optional embodiment, the acquisition module is configured to acquire the 1st image frame in which a face appears in the image frame sequence, the 1st image frame containing a first face feature;
the processing module is configured to accumulate, in response to second face features in n consecutive image frames after the 1st image frame matching the first face feature, the number n of image frames whose face features belong to the same face, n being a positive integer; and to determine, in response to the number of image frames whose face features belong to the same face reaching a first count threshold, that the face features in the face time-series features belong to the same face.
In an optional embodiment, the acquisition module is configured to compute the similarity between the first face feature and the second face feature;
the processing module is configured to determine that the second face feature matches the first face feature in response to the similarity reaching a similarity threshold.
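The similarity matching and frame-count accumulation described above can be sketched as follows. The cosine similarity measure and both threshold values are illustrative assumptions: the application states only that a similarity threshold and a first count threshold exist, not how the similarity is computed or what the values are.

```python
import math

SIMILARITY_THRESHOLD = 0.8   # assumed value; the application leaves the threshold to configuration
FIRST_COUNT_THRESHOLD = 5    # assumed value of the "first count threshold"

def cosine_similarity(a, b):
    """Similarity between two face feature vectors (assumed measure)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def belongs_to_same_face(first_feature, later_features):
    """Accumulate the number of consecutive frames whose second face feature
    matches the first face feature; the condition holds once the count
    reaches the first count threshold."""
    count = 0
    for feature in later_features:
        if cosine_similarity(first_feature, feature) >= SIMILARITY_THRESHOLD:
            count += 1
            if count >= FIRST_COUNT_THRESHOLD:
                return True
        else:
            count = 0  # a mismatched frame breaks the consecutive run
    return False
```

A mismatched frame resets the run, matching the requirement that the n matching frames be consecutive.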
In an optional embodiment, the acquisition module is configured to acquire the j-th image frame and the (j+1)-th image frame in which a face appears in the image frame sequence, the j-th image frame containing a first face region with a first face region area, and the (j+1)-th image frame containing a second face region with a second face region area, j being a positive integer;
the processing module is configured to accumulate, in response to the area of the second face region being larger than the area of the first face region, the number of image frames in which the face region area increases continuously; to repeat the step of acquiring the j-th and (j+1)-th image frames and the step of accumulating; and to determine, in response to the number of image frames in which the face region area increases continuously reaching a second count threshold, that the face region area in the face time-series features increases continuously.
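A minimal sketch of this area-growth check, assuming face bounding boxes of the form (x, y, width, height) from a face detector; the second count threshold value is an assumption, as the application does not fix it:

```python
SECOND_COUNT_THRESHOLD = 4  # assumed value of the "second count threshold"

def box_area(box):
    """Area of a face bounding box given as (x, y, width, height)."""
    _, _, w, h = box
    return w * h

def area_keeps_increasing(face_boxes):
    """Compare each frame's face region area with the previous frame's;
    the condition holds once the run of strictly increasing areas
    reaches the threshold."""
    count = 0
    for j in range(len(face_boxes) - 1):
        if box_area(face_boxes[j + 1]) > box_area(face_boxes[j]):
            count += 1
            if count >= SECOND_COUNT_THRESHOLD:
                return True
        else:
            count = 0  # the face stopped approaching the camera; restart the run
    return False
```

A continuously growing face region suggests the user is walking toward the camera, which is why it serves as an intent signal for contactless recognition.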
In an optional embodiment, the acquisition module is configured to acquire the image frames in which a face appears in the image frame sequence, each such image frame containing the gaze direction of the eyes; the processing module is configured to accumulate, in response to the gaze directions contained in n consecutive image frames in the image frame sequence belonging to the front-view direction, the number n of image frames in which the eyes are in a continuous gazing state, n being a positive integer; and to determine, in response to the number of image frames in which the eyes are in the continuous gazing state reaching a third count threshold, that the eyes in the face time-series features are in the continuous gazing state.
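The gaze-persistence check can be sketched as below. Representing the per-frame gaze direction as (yaw, pitch) angles from a gaze estimator, the angular tolerance for "front view", and the third count threshold value are all assumptions not specified by the application:

```python
THIRD_COUNT_THRESHOLD = 6    # assumed value of the "third count threshold"
FRONT_VIEW_TOLERANCE = 10.0  # assumed tolerance, in degrees, around the camera axis

def is_front_view(yaw_deg, pitch_deg):
    """A gaze is treated as front-view when both angles are near zero,
    i.e. the eyes point at the camera."""
    return abs(yaw_deg) <= FRONT_VIEW_TOLERANCE and abs(pitch_deg) <= FRONT_VIEW_TOLERANCE

def eyes_keep_gazing(gaze_angles):
    """gaze_angles: per-frame (yaw, pitch) estimates. The condition holds
    once enough consecutive frames are front-view."""
    count = 0
    for yaw, pitch in gaze_angles:
        if is_front_view(yaw, pitch):
            count += 1
            if count >= THIRD_COUNT_THRESHOLD:
                return True
        else:
            count = 0  # the user looked away; restart the consecutive run
    return False
```

As with the other two conditions, requiring a consecutive run rather than a raw total filters out glances from passers-by.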
In an optional embodiment, the apparatus includes a reading module, a first sending module, and a first receiving module;
the reading module is configured to read the device identifier of the terminal;
the first sending module is configured to send an information acquisition request carrying the device identifier to a server;
the first receiving module is configured to receive configuration information sent by the server, the configuration information being obtained by the server according to the device identifier and an association relationship that represents the correspondence between device identifiers and configuration information, the configuration information including the face recognition condition.
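The request/response exchange above can be sketched as follows. The URL, JSON wire format, and field names are hypothetical: the application states only that the request carries the device identifier (SN) and that the server returns configuration information looked up through its association relationship.

```python
import json
import urllib.request

# Hypothetical endpoint; the application does not specify a wire format.
CONFIG_URL = "https://example.com/api/face-config"

def build_config_request(device_sn):
    """Build the information acquisition request carrying the device identifier."""
    payload = json.dumps({"device_sn": device_sn}).encode("utf-8")
    return urllib.request.Request(
        CONFIG_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def fetch_face_recognition_config(device_sn):
    """Send the request and decode the configuration information the server
    obtains from its SN -> configuration association relationship."""
    with urllib.request.urlopen(build_config_request(device_sn)) as response:
        return json.loads(response.read().decode("utf-8"))
```

Keying the configuration on the device SN lets the server push per-terminal face recognition conditions (for example, different count thresholds for a gate camera and a desktop cash register).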
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the face recognition method as described above.
According to another aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face recognition method as described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions to cause the computer device to perform the face recognition method as described above.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
By continuously acquiring the image frame sequence and, when a face appears in it, performing identity recognition according to extracted face time-series features that satisfy the face recognition condition, the user does not need to manually select an identity recognition mode; the user's identity information can be recognized without the user noticing, simply by positioning the user's face in front of the camera. This improves both face recognition efficiency and recognition accuracy, and enables the terminal to recognize the user's identity information accurately. For example, in an unmanned convenience store, a user may be carrying many items so that neither hand is free; the store's self-service settlement device can recognize the user's identity with the face recognition method provided in the embodiments of the present application and complete contactless payment for the goods without any manual operation, making payment more convenient for the user.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a face recognition method provided in an exemplary embodiment of the present application;
FIG. 3 is a flow chart of a face recognition method provided by another exemplary embodiment of the present application;
FIG. 4 is a flowchart of a face recognition method provided in another exemplary embodiment of the present application;
FIG. 5 is a system block diagram of face recognition provided by an exemplary embodiment of the present application;
FIG. 6 is a block diagram of a face recognition apparatus provided in an exemplary embodiment of the present application;
FIG. 7 is a block diagram of a face recognition apparatus according to another exemplary embodiment of the present application;
FIG. 8 is a block diagram of a server provided by an exemplary embodiment of the present application;
FIG. 9 is a block diagram of a computer device provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, the terms used in the embodiments of the present application are explained:
three-dimensional camera (3-Dimension, 3D camera): distance information of a shooting space is detected by a camera, and living body face detection is usually realized by adding a living body detection program. The method has the functions of face recognition, gesture recognition, human skeleton recognition, three-dimensional measurement, environment perception, three-dimensional map reconstruction and the like. The intelligent home system can be widely applied to the fields of intelligent terminals, intelligent robots, unmanned planes, unmanned driving, automatic driving, Virtual Reality technologies (VR), Augmented Reality technologies (AR), intelligent homes, intelligent medical treatment, intelligent customer service and the like.
Serial Number (SN): a concept introduced to verify the legitimate identity of a product; it uniquely identifies a device. In the embodiments of the present application, the terminal that collects the image frame sequence is uniquely identified by its serial number, so that the server can determine the configuration information corresponding to that terminal.
SQLite database: a lightweight relational database management system that complies with ACID. ACID is the abbreviation of the four basic properties required for database transactions to execute correctly: Atomicity, Consistency, Isolation, and Durability. In the embodiments of the present application, the terminal stores configuration information, including the face recognition condition, in an SQLite database.
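A minimal sketch of caching the server-issued configuration in SQLite, as the embodiments describe; the table and column names are illustrative assumptions:

```python
import sqlite3

def save_config(db_path, device_sn, config_json):
    """Store (or overwrite) the configuration received from the server,
    keyed by the terminal's serial number."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS face_config "
            "(device_sn TEXT PRIMARY KEY, config_json TEXT)"
        )
        conn.execute(
            "INSERT OR REPLACE INTO face_config VALUES (?, ?)",
            (device_sn, config_json),
        )

def load_config(db_path, device_sn):
    """Read back the cached configuration, or None if absent."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT config_json FROM face_config WHERE device_sn = ?",
            (device_sn,),
        ).fetchone()
    return row[0] if row is not None else None
```

A local cache like this lets the terminal keep applying its face recognition condition even when the server is temporarily unreachable.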
Computer Vision (CV) is a science that studies how to make machines "see": using cameras and computers, instead of human eyes, to identify, track, and measure targets, and to further process the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, Optical Character Recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping.
The face recognition method provided by the embodiment of the application can be applied to the following scenes:
firstly, a face payment scene.
In this application scenario, the face recognition method provided in the embodiments of the present application may be applied to a cash register device with a camera. The cash register device acquires an image frame sequence through the camera and, when a face appears in the sequence, extracts face time-series features from it. When the face features in the face time-series features belong to the same face, or the face region area in the face time-series features increases continuously, or the eyes in the face time-series features are in a continuous gazing state, identity recognition is performed on the face features contained in the image frame sequence, and the purchase order is paid with the recognized payment account. The user therefore does not need to manually select a payment mode, contactless face payment can be performed, and the settlement efficiency of the purchase order is improved.
And II, identity verification scene.
In this application scenario, the face recognition method provided in the embodiments of the present application may be applied to an identity verification device with a camera. The device acquires an image frame sequence through the camera and, when a face appears in the sequence, extracts face time-series features from it; when the face features belong to the same face, or the face region area increases continuously, or the eyes are in a continuous gazing state, identity recognition is performed on the face features contained in the image frame sequence. For example, at a subway ticket gate equipped with a camera, the face that appears is verified to determine the user's riding identity (for example, the user's transit account is determined by face scanning); similarly, at a railway ticket gate equipped with a camera, the face that appears is verified to determine the user's ticket information.
The above description takes only two application scenarios as examples. The face recognition method provided in the embodiments of the present application may also be applied to other face recognition scenarios (for example, employee attendance through face recognition); the embodiments of the present application do not limit the specific application scenario.
The face recognition method provided in the embodiments of the present application can be applied to computer devices with strong data processing capability. In a possible implementation, the method may be applied to a personal computer, a workstation, or a server; that is, payment of a purchase order may be realized through a personal computer, workstation, or server. Illustratively, when the method is applied to a server that serves as the backend of an application program, a terminal with the application installed realizes the face recognition function for the user's face by means of the server.
FIG. 1 illustrates a schematic diagram of a computer system provided by an exemplary embodiment of the present application. The computer system 100 includes a terminal 110 and a server 120 that communicate through a communication network; optionally, the communication network is a wired or wireless network and is at least one of a local area network, a metropolitan area network, and a wide area network.
An application program supporting face recognition is installed and runs in the terminal 110. When the application is started, it calls the camera of the terminal 110 to collect a face image frame sequence and extracts face time-series features from it, the face time-series features being the face features contained in consecutive image frames. When the face time-series features satisfy the face recognition condition, they are sent to the server 120, which performs identity recognition on them.
Optionally, the terminal 110 may be a mobile terminal such as a cash register device with a camera, a smartphone, a tablet computer, a laptop, or an intelligent robot, or a terminal such as a desktop computer or a projection computer; the type of the terminal is not limited in the embodiments of the present application.
The server 120 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), and big data and artificial intelligence platforms. In one possible implementation, the server 120 is the backend server of the application in the terminal 110.
As shown in FIG. 1, in this embodiment the terminal 110 is a terminal that has a camera or is connected to one, and an application supporting face recognition runs in it; when the application is started, the camera assembly of the terminal 110 may be called for image acquisition. The image frame sequence acquired by the terminal 110 may or may not contain a face, and the terminal 110 extracts face time-series features from the image frames that contain a face.
When the face time-series features satisfy the face recognition condition, identity recognition is performed on them.
Case 1: the camera acquires the 1st image frame in which a face appears, the 1st image frame containing a first face feature. When the first face feature matches a second face feature, the two belong to the same face, and the number of image frames whose face features belong to the same face is accumulated; when that number reaches the first count threshold, the face features in the face time-series features are determined to belong to the same face.
Case 2: the camera acquires two adjacent image frames in which a face appears, such as the j-th and (j+1)-th image frames, j being a positive integer. The j-th frame contains a first face region with a first face region area; the (j+1)-th frame contains a second face region with a second face region area. When the area of the second face region is larger than that of the first, the face region area is increasing; this process is repeated and the number of image frames in which the face region area increases continuously is accumulated. When that number reaches the second count threshold, the face region area in the face time-series features is determined to be increasing continuously.
Case 3: the camera acquires n consecutive image frames in which a face appears, the n image frames containing the gaze direction of the eyes (n being a positive integer). The number of image frames whose gaze direction is the front-view direction (i.e. frames acquired while the user faces the camera) is accumulated; when that number reaches the third count threshold, the eyes in the face time-series features are determined to be in a continuous gazing state.
Illustratively, when the terminal 110 is a cash register device with a camera, the server 120 is configured to perform the following steps: step 11, receive the face time-series features; step 12, identify the face time-series features; step 13, pay the purchase order according to the recognition result. The server 120 queries a pre-stored payment account according to the face time-series features, the payment account corresponding to the face time-series features, and pays the purchase order through that account. The server 120 then sends the payment result (14) to the terminal 110, and the terminal 110 displays the prompt message: payment successful.
It can be understood that the above embodiment only takes the case where an application program supporting face recognition is installed in the terminal as an example. In actual use, the face recognition method can also be applied to an applet (a program that runs inside a host program) or a web page; a so-called applet (Mini Program) is an application that can be used without being downloaded and installed. To provide users with more diversified services, developers can develop corresponding applets for terminal applications (such as instant messaging, shopping, or mail applications); an applet can be embedded in a terminal application as a sub-application, the terminal application being the host program of the applet, and running the sub-application (i.e. the corresponding applet) inside the application provides the corresponding service to the user. This is not limited in the embodiments of the present application.
Fig. 2 shows a flowchart of a face recognition method according to an exemplary embodiment of the present application. The embodiment is described by taking the method as an example applied to the terminal 110 in the computer system 100 shown in fig. 1, and the method includes the following steps:
step 201, an image frame sequence is acquired.
The image frame sequence is collected by a camera that is connected to, or is part of, the terminal; the terminal includes at least one of a cash register device, an intelligent (AI) camera, a smartphone, and a tablet computer. The image frame sequence is a sequence formed by a plurality of consecutive image frames collected by the camera, and it may or may not include a face.
In one example, a camera is connected to a cash register device in which a cash register application is installed; the application can call the camera for image acquisition. When the application is started, the camera starts to collect images; when the application is closed or the cash register device is shut down, the camera stops collecting images.
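The continuous acquisition in step 201 can be sketched as a rolling buffer of the most recent frames. The buffer length is an assumption; in a real deployment the frame source would be the camera read loop (for example, frames read from OpenCV's cv2.VideoCapture), but any iterable of frames works here:

```python
from collections import deque

SEQUENCE_LENGTH = 30  # assumed buffer size: about one second of frames at 30 fps

def rolling_sequence(frame_source, maxlen=SEQUENCE_LENGTH):
    """Keep only the most recent consecutive frames; this bounded buffer is
    the image frame sequence that the later steps inspect for a face.
    Yields the current sequence after each new frame arrives."""
    buffer = deque(maxlen=maxlen)
    for frame in frame_source:
        buffer.append(frame)
        yield list(buffer)
```

Because the buffer is bounded, acquisition can run continuously from application start to shutdown without unbounded memory growth.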
Step 202, in response to the occurrence of a face in the image frame sequence, extracting a face time sequence feature from the image frame sequence, wherein the face time sequence feature is a face feature contained in continuous image frames.
When a face appears in the image frame sequence, the sequence contains face time-series features. That is, while the camera is capturing images and a face appears in front of it, the camera collects a plurality of consecutive image frames containing the face, which form a face image frame sequence: a subset (partial image frame sequence) of the image frame sequence in which each image frame contains a face. The face appearing in each image frame corresponds to face features, and because the face image frames are arranged according to their acquisition time, the face features are also in time order. The faces contained in the individual image frames may be the same or different, and the face features extracted from the individual image frames may accordingly be the same, similar, or different.
Illustratively, the face time-series features are extracted by means of geometric features. A face consists of eyes, a nose, a mouth, a chin, and so on, and the face features can be represented by the geometric shapes of these parts and the structural relationships between them; for example, the triangle formed by connecting the two eyes and the nose represents the face features. The face features of each image frame in the face image frame sequence are extracted in this way to form the face time-series features.
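The eye-eye-nose triangle mentioned above can be turned into a toy geometric descriptor as follows. Normalizing by the inter-eye distance is an assumption added here so the feature is scale-invariant; the application does not specify the exact representation:

```python
import math

def geometric_face_feature(left_eye, right_eye, nose):
    """Side lengths of the eye-eye-nose triangle, normalized by the
    inter-eye distance (illustrative geometric-feature sketch)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    eye_eye = dist(left_eye, right_eye)
    return (
        1.0,                             # eye-to-eye side, the normalization reference
        dist(left_eye, nose) / eye_eye,  # left-eye-to-nose side
        dist(right_eye, nose) / eye_eye, # right-eye-to-nose side
    )
```

Because the sides are expressed as ratios, the same face photographed closer to or farther from the camera yields the same descriptor.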
Illustratively, the face time-series features are extracted by template matching. The terminal contains a face feature database obtained from the server in advance, and the database includes face feature templates. The face features extracted from the face image frame sequence are matched against the face feature templates; if an extracted face feature matches a template, the face feature is determined according to that template. The face features of each image frame in the face image frame sequence are extracted in this way to form the face time-series features.
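A minimal reading of the template-matching step is nearest-template lookup over the pre-downloaded database. The Euclidean distance measure and the acceptance distance are assumptions, since the application does not define how a feature "matches" a template:

```python
def match_template(feature, templates, max_distance=0.5):
    """Return the id of the nearest face feature template, or None when no
    template is close enough. templates: mapping of id -> feature vector."""
    best_id, best_dist = None, float("inf")
    for template_id, template in templates.items():
        dist = sum((a - b) ** 2 for a, b in zip(feature, template)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = template_id, dist
    # Only accept the template when it is close enough to the extracted feature.
    return best_id if best_dist <= max_distance else None
```

Returning None for a distant feature keeps unknown faces out of the face time-series features instead of forcing a wrong template.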
The embodiment of the present application does not limit the manner of extracting the face features.
Step 203, in response to the face time-series features satisfying the face recognition condition, perform identity recognition on the face features contained in the image frame sequence.
The condition that the face time sequence feature meets the face recognition condition comprises at least one of the following conditions:
the face features in the face time sequence features belong to the same face;
the area of a face region in the face time sequence feature is continuously increased;
the human eyes in the human face time series feature are in a continuous watching state.
The face features belonging to the same face means that the face features extracted from a plurality of consecutive image frames containing a face all represent the same face.
The continuous increase of the face region area means that the area of the face region keeps increasing over a plurality of consecutive image frames containing a face; for example, the face region area in the (j+1)-th image frame is larger than that in the j-th image frame, where j is a positive integer and the (j+1)-th image frame is acquired after the j-th image frame.
The human eyes being in a continuous gazing state means that the gaze (line-of-sight) directions of the eyes in a plurality of consecutive image frames containing a face point in the same direction. For example, if the eyes in the j-th image frame gaze straight ahead and the eyes in the (j+1)-th image frame also gaze straight ahead (j being a positive integer), the eyes are determined to be in a gazing state across these two image frames. The embodiments of the present application take the line-of-sight direction pointing at the camera as an example, that is, the user faces the camera.
The terminal sends the face time sequence characteristics to the server, and the server determines the identity information of the user according to the face time sequence characteristics.
In summary, in the method provided by this embodiment, the image frame sequence is acquired continuously, and when a face appears in the sequence, identity recognition is performed once the extracted face time-series features satisfy the face recognition condition. The user therefore does not need to manually select an identification manner: the user's identity information can be recognized imperceptibly, simply by the user's face being located in front of the camera. This improves both the efficiency and the accuracy of face recognition and enables the terminal to recognize the user's identity information accurately. For example, in an unmanned convenience store, a user carrying a large number of articles (with neither hand free) can have his or her identity recognized by the store's self-service settlement equipment using the face recognition method provided by the embodiments of the present application, completing frictionless payment for the commodities without any manual operation and improving the convenience of payment.
Fig. 3 shows a flowchart of a face recognition method according to another exemplary embodiment of the present application, and this embodiment takes the example that the method is applied to the terminal 110 in the computer system 100 shown in fig. 1 as an example. The method comprises the following steps:
step 301, reading the device identifier of the terminal.
Illustratively, the device identifier is a character-string serial number (SN code). In some embodiments, the device identifier is further accompanied by a barcode or two-dimensional code corresponding to the SN code, so the device's SN code can be determined by scanning the barcode or two-dimensional code.
Step 302, an information acquisition request is sent to a server, where the information acquisition request carries an equipment identifier.
The terminal sends an information acquisition request carrying the SN code to the server; the information acquisition request is used to acquire the configuration information of each terminal. The configuration information is the parameter information of the face recognition condition. For example, if the accumulated number of image frames whose face features belong to the same face reaches 5 (the configuration information), the image frames containing the face time-series features are determined to satisfy the face recognition condition.
Step 303, receiving configuration information sent by the server, where the configuration information is obtained by the server according to the device identifier and the association relationship, the association relationship is used to represent a corresponding relationship between the device identifier and the configuration information, and the configuration information includes a face recognition condition.
The server stores the configuration information and the association relationship in advance, and the configuration information corresponding to a terminal can be determined according to the device identifier and the association relationship. The association relationship may take the form of a lookup table or of a functional relationship; the form of the association relationship is not limited in the embodiments of the present application.
Table 1 represents the association relationship between the device identifier and the configuration information.
Table 1
[The contents of Table 1 are rendered as an image (BDA0002887464390000111) in the original publication.]
SN1 represents the terminal whose SN code is 1; the embodiment of the present application does not limit the type of the device identifier.
Illustratively, identity recognition may be performed on the face features when at least one of the three conditions above is met. Alternatively, a weight is set for each condition and the weighted results are summed; when the weighted sum is greater than a recognition-condition threshold, the acquired image frames are considered to satisfy the face recognition condition and identity recognition may be performed on the face features.
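Illustratively, the weighted-sum decision just described can be sketched as follows. The weights and threshold values here are placeholders chosen for illustration only; the embodiments leave them to the configuration information.

```python
def meets_recognition_condition(same_face, area_increasing, gazing,
                                weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Combine the three per-condition results (booleans) with configurable
    weights; face recognition starts when the weighted sum exceeds the
    recognition-condition threshold. Weights/threshold are illustrative."""
    score = (weights[0] * same_face +
             weights[1] * area_increasing +
             weights[2] * gazing)
    return score > threshold
```

With these placeholder weights, any two satisfied conditions are enough to start recognition, while a single satisfied condition is not.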
The configuration information is determined through the equipment identification, so that a user can realize face recognition on different terminals.
Step 304, a sequence of image frames is acquired.
Taking a terminal as a cash register with a camera as an example, an application program is installed in the cash register, and the application program calls the camera to acquire an image frame sequence. Illustratively, when the application program is started, the application program calls the camera to perform image acquisition, and the acquired image may or may not contain a human face.
Step 305, in response to the occurrence of a face in the image frame sequence, extracting a face time sequence feature from the image frame sequence, wherein the face time sequence feature is a face feature contained in continuous image frames.
When a human face appears in the image frame sequence, the collected image frame sequence contains the human face time sequence characteristics.
Case 1: the human face features in the human face time sequence features belong to the same human face.
Determining that the face features in the face time sequence features belong to the same face through steps 306a to 308 a:
step 306a, a 1 st frame image frame with a human face appearing in the image frame sequence is obtained, wherein the 1 st frame image frame comprises a first human face feature.
When the camera collects the 1 st frame image frame with the human face, the image frame is cached, and the image frame comprises the first human face feature.
Step 307a, in response to matching of a second face feature with a first face feature in n consecutive image frames after the 1 st image frame in the image frame sequence, accumulating the number n of image frames with face features belonging to the same face, where n is a positive integer.
Taking the acquisition of one user's face as an example: since the camera acquires the image frame sequence continuously, the 2nd, 3rd, ..., n-th image frames (n being a positive integer) follow the 1st image frame in which the face appears.
Extracting second face features from the 2 nd frame image frame containing the face, matching the first face features with the second face features, and determining that the second face features are matched with the first face features in response to the similarity reaching a similarity threshold value by acquiring the similarity between the first face features and the second face features.
Illustratively, the similarity can be calculated through the Euclidean distance. For example, if the Euclidean distance between the two eyes of the first face is 12 and the Euclidean distance between the two eyes of the second face is 13, the two measurements are close, so the first face is determined to be similar to the second face and the first face feature matches the second face feature.
Illustratively, the similarity can also be calculated by the cosine distance, which uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals: the closer the angle between the two vectors is to 0, the closer the vectors are and the smaller the difference, indicating that the two faces are more similar.
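Illustratively, the cosine-distance matching just described can be sketched as follows; the similarity threshold of 0.9 is a hypothetical value standing in for the threshold mentioned above.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two face feature vectors; values near 1
    mean the angle is near 0 and the faces are similar."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u)) *
            math.sqrt(sum(b * b for b in v)))
    return dot / norm

def features_match(first_feature, second_feature, sim_threshold=0.9):
    """Declare the second face feature matched with the first when their
    similarity reaches the (hypothetical) similarity threshold."""
    return cosine_similarity(first_feature, second_feature) >= sim_threshold
```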
It should be noted that when the face feature in any one of the n consecutive image frames does not match the first face feature, the counting of image frames containing matching face features starts over. For example, if the second face feature in the 2nd image frame matches the first face feature, the counter counts to 1; if the face feature in the 3rd image frame does not match the first face feature (for instance, the face in the first two image frames is user A while the face in the 3rd image frame is user B), the image frames whose face features belong to the same face are counted again and the counter is reset to zero. If the face feature in the 4th image frame then matches the first face feature, the counter counts to 1; if the face feature in the 5th image frame also matches, the counter counts to 2, and so on. The accumulated number of image frames whose face features belong to the same face does not include the 1st image frame.
And 308a, in response to the number of the image frames belonging to the same face reaching a first counting threshold value, determining that the face features in the face time sequence feature belong to the same face.
The first counting threshold is a threshold obtained from the configuration information, and when the number of image frames belonging to the same face reaches the counting threshold, the terminal determines that the face features in the face time series features belong to the same face.
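Illustratively, the counting logic of steps 306a to 308a, including the reset to zero on a mismatch, can be sketched as follows (function and parameter names are illustrative, not part of the embodiments):

```python
def same_face_throughout(first_feature, later_features, match,
                         first_count_threshold):
    """Count consecutive image frames (after the 1st frame) whose face
    feature matches the 1st frame's feature; any mismatch resets the
    counter to zero. Returns True once the count reaches the first
    counting threshold. `match` is a predicate such as a similarity test."""
    counter = 0
    for feature in later_features:
        if match(first_feature, feature):
            counter += 1
            if counter >= first_count_threshold:
                return True   # face features belong to the same face
        else:
            counter = 0       # a different face appeared: start over
    return False
```

For example, with a threshold of 2, the frame feature sequence A, B, A, A (after the 1st frame) succeeds only on its final two consecutive matches.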
Case 2: the face region area in the face time series feature continues to increase.
Determining that the area of the face region in the face time series feature is continuously increased through the steps 306b to 309 b:
Step 306b: acquire the j-th and (j+1)-th image frames in which a face appears in the image frame sequence, where j is a positive integer. The j-th image frame contains a first face region with a first face region area, and the (j+1)-th image frame contains a second face region with a second face region area.
Illustratively, each of the n consecutive image frames contains a face region, and each face region has an area. The face region areas in any two adjacent image frames are compared; for example, the area of the first face region in the 1st image frame is 1 and the area of the second face region in the 2nd image frame is 2.
Step 307b, in response to the second face region area being greater than the first face region area, accumulating the number of image frames having a continuously increasing face region area.
When the second face region area is larger than the first face region area, the counter counts by 1. For example, if the face region area in the 1st image frame is smaller than that in the 2nd image frame, the counter counts to 1. If the face region area in the 2nd image frame is larger than that in the 3rd image frame, the image frames with continuously increasing face region area are counted again and the counter is reset to zero. If the face region area in the 3rd image frame is smaller than that in the 4th, the counter counts to 1; if the area in the 4th is smaller than that in the 5th, the counter counts to 2, and so on.
Step 308b: move on to the next pair of image frames and repeat the steps of acquiring the j-th and (j+1)-th image frames and accumulating the number of image frames with a continuously increasing face region area.
Repeating the step 306b and the step 307b, and calculating the number of image frames with the continuously increased area of the human face region. When the area of the face region in a certain frame of image frame is in a non-increasing state, the counter is cleared, and the number of the image frames with the continuously increasing face region is accumulated again.
And 309b, in response to the number of the image frames with the continuously increased face area reaching the second counting threshold, determining that the face area in the face time sequence feature is continuously increased.
The second counting threshold is a threshold obtained from the configuration information; when the number of image frames with continuously increasing face region area reaches this threshold, the terminal determines that the face region area in the face time-series features is continuously increasing. The increase of the face region area indicates that the user is approaching the camera.
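Illustratively, the adjacent-frame comparison of steps 306b to 309b can be sketched as follows; the function name is illustrative, and `areas` is assumed to hold the face region area of each frame in acquisition order:

```python
def area_continuously_increasing(areas, second_count_threshold):
    """Compare the face region areas of each pair of adjacent image frames:
    every increase counts 1, and any non-increase resets the counter to
    zero. Returns True once the count of consecutively increasing frames
    reaches the second counting threshold."""
    counter = 0
    for prev_area, cur_area in zip(areas, areas[1:]):
        if cur_area > prev_area:
            counter += 1
            if counter >= second_count_threshold:
                return True   # the user is approaching the camera
        else:
            counter = 0       # area stopped increasing: start over
    return False
```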
Case 3: the human eyes in the human face time series feature are in a continuous watching state.
Determine that the human eyes in the face time-series features are in a continuous gazing state through steps 306c to 308c:
step 306c, acquiring an image frame with a human face in the image frame sequence, wherein the image frame with the human face comprises the sight line direction of human eyes.
When the acquired image frame sequence includes image frames containing a face, the face in an image frame either faces the camera frontally or shows its side to the camera. When the front of the face faces the camera, the line-of-sight direction belongs to the front-view direction; the image frames whose line-of-sight direction belongs to the front-view direction are determined from the image frame sequence acquired by the camera.
Step 307c, in response to that the sight line direction included in n consecutive image frames in the image frame sequence belongs to the front view direction, accumulating the number n of image frames with the human eyes in the continuous watching state, wherein n is a positive integer.
Illustratively, the line-of-sight direction in the 1 st frame image frame to the 5 th frame image frame belongs to the front-view direction, and the counter counts 5.
Illustratively, if the line-of-sight direction in the 1st and 2nd image frames belongs to the front-view direction, the counter counts to 2; if the line-of-sight direction in the 3rd image frame is a side-view direction (for example, the side of the face faces the camera and the line of sight points to one side), the image frames in which the eyes are in a continuous gazing state are counted again and the counter is reset to zero. If the line-of-sight direction in the 4th to 6th image frames belongs to the front-view direction, the counter counts to 3, and so on.
And 308c, in response to the number of the image frames of which the human eyes are in the continuous watching state reaching a third counting threshold value, determining that the human eyes in the human face time series characteristic are in the continuous watching state.
And the third counting threshold value is a threshold value obtained from the configuration information, and when the number of the image frames of the human eyes in the continuous watching state reaches the third counting threshold value, the terminal determines that the human eyes in the human face time sequence characteristics are in the continuous watching state.
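Illustratively, the gazing-state accumulation of steps 306c to 308c can be sketched as follows. The direction labels are illustrative stand-ins for whatever gaze classification the terminal produces:

```python
def continuously_gazing(gaze_directions, third_count_threshold,
                        front_view="front"):
    """Accumulate image frames whose line-of-sight direction belongs to the
    front-view direction (the user faces the camera); a side-view frame
    resets the counter to zero. Returns True once the count reaches the
    third counting threshold."""
    counter = 0
    for direction in gaze_directions:
        if direction == front_view:
            counter += 1
            if counter >= third_count_threshold:
                return True   # eyes are in a continuous gazing state
        else:
            counter = 0       # gaze left the camera: start over
    return False
```

Requiring several consecutive front-view frames in this way filters out passers-by who merely glance at the camera.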
And step 310, responding to the condition that the time sequence characteristics of the human face meet the human face recognition conditions, and performing identity recognition on the human face characteristics contained in the image frame sequence.
When the face time-series features meet the terminal's requirements, the terminal sends them to the server, and the server determines the identity information of the user according to the face time-series features. Requiring the eyes to be in a continuous gazing state avoids misrecognizing a face when other faces appear within the camera's view.
In summary, in the method of this embodiment, the image frame sequence is acquired continuously, and when a face appears in the sequence, identity recognition is performed once the extracted face time-series features satisfy the face recognition condition. The user therefore does not need to manually select an identification manner: the user's identity information can be recognized imperceptibly, simply by the user's face being located in front of the camera. This improves both the efficiency and the accuracy of face recognition and enables the terminal to recognize the user's identity information accurately. For example, in an unmanned convenience store, a user carrying a large number of articles (with neither hand free) can have his or her identity recognized by the store's self-service settlement equipment using the face recognition method provided by the embodiments of the present application, completing frictionless payment for the commodities without any manual operation and improving the convenience of payment.
In the method of this embodiment, when the face features in the face time series features satisfy the face recognition condition, the method includes at least one of the following cases: the face features belong to the same face, the area of the face region is continuously increased, and the eyes are in a continuous watching state, so that the face recognition process is started, and the terminal can realize the non-inductive recognition of the identity information according to the conditions.
In the method of this embodiment, the number of image frames with face features belonging to the same face is accumulated by matching the face features in the 1 st frame image frame in which the face appears with the face features in the consecutive n frame image frames located after the 1 st frame image frame, and when the number of image frames with face features belonging to the same face reaches a first counting threshold, it is determined that the face features belong to the same face. The face recognition accuracy is improved, and face false recognition is avoided.
In the method of this embodiment, whether the first face feature matches the second face feature is determined by whether their similarity reaches the similarity threshold, improving the accuracy of face recognition and avoiding face misrecognition.
The method of this embodiment further compares the areas of the face regions in any two consecutive image frames, accumulates the image frames with the continuously increased area of the face regions, and determines that the area of the face region is continuously increased when the number of the image frames with the continuously increased area of the face region reaches a second count threshold. The face recognition accuracy is improved, and face false recognition is avoided.
In the method of this embodiment, when the line of sight direction in the image frames of n consecutive frames belongs to the front view direction, the number of image frames in which the human eye is in the continuous watching state is accumulated, and when the number of image frames in which the human eye is in the continuous watching state reaches the third counting threshold value, it is determined that the human eye is in the continuous watching state. The face recognition accuracy is improved, and face false recognition is avoided.
In the method of this embodiment, the server acquires the device identifier and obtains the terminal's configuration information according to it, so that the terminal can recognize the user's face imperceptibly according to the configuration information. Using the character-string serial number that uniquely identifies a terminal as the device identifier distinguishes the terminals from one another, allowing the server to configure corresponding configuration information for each terminal.
It is to be understood that the above embodiments may be implemented individually or in any combination.
A face recognition method provided in an embodiment of the present application is described with reference to a server, and fig. 4 shows a flowchart of a face recognition method provided in another exemplary embodiment of the present application. The embodiment is described by taking the method as an example applied to the server 120 in the computer system 100 shown in fig. 1. The method comprises the following steps:
step 401, receiving a face time sequence feature sent by a terminal, where the face time sequence feature is a face feature contained in a continuous image frame.
And the server receives the face time sequence characteristics sent by the terminal.
And 402, identifying the face time sequence characteristics.
The server stores the identity information of users in advance and establishes a correspondence between each user's identity information and that user's face time-series features. The server looks up the user's identity information according to the correspondence satisfied by the received face time-series features. The identity information of the user includes at least one of a payment account, an identity card number and a mobile phone number. In some embodiments, the identity information of the user may also be the user's membership information at the merchant.
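Illustratively, the server-side lookup of step 402 can be sketched as a nearest-match search over the stored correspondences. The database shape, the `similarity` callable, and the threshold are all assumptions for illustration:

```python
def identify_user(query_features, identity_db, similarity, sim_threshold=0.9):
    """Resolve a user's identity information (e.g. a payment account) from
    received face features. `identity_db` maps identity information to the
    stored face features for that user; the most similar stored face wins,
    provided its similarity reaches the (hypothetical) threshold."""
    best_identity, best_score = None, sim_threshold
    for identity, stored_features in identity_db.items():
        score = similarity(query_features, stored_features)
        if score >= best_score:
            best_identity, best_score = identity, score
    return best_identity   # None when no stored face is similar enough
```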
In summary, in the method of this embodiment, the server receives the face time-series features sent by the cashier device and determines the user's identity from them, so that the terminal recognizes the user's identity information imperceptibly, improving face recognition efficiency.
In an alternative embodiment based on fig. 4, the server also sends configuration information to the terminal.
Step 411, receiving an acquisition request sent by the terminal, where the acquisition request carries the device identifier.
Illustratively, the terminal typically sends an acquisition request to the server at the time of initial use to obtain the configuration information. And when the user performs face recognition through different terminals, the server issues the configuration information to different terminals according to the equipment identification.
In step 412, an association relationship is determined, where the association relationship is used to represent a corresponding relationship between the device identifier and configuration information, and the configuration information includes face recognition conditions.
The association relationship can be implemented with reference to the implementation of table one, and is not described herein again.
And 413, acquiring configuration information according to the equipment identifier and the association relation.
Step 414, sending the configuration information to the terminal.
See the implementation of step 303 in the above embodiments for steps 413 to 414, which are not described again.
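Illustratively, the lookup-table form of the association relationship used in steps 412 and 413 can be sketched as follows. The threshold of 5 frames echoes the example given for the configuration information earlier; the other values and all key names are illustrative placeholders:

```python
# Association relationship in lookup-table form:
# device identifier (SN code) -> face recognition condition thresholds.
FACE_RECOGNITION_CONFIG = {
    "SN1": {"first_count_threshold": 5,   # same-face frame count
            "second_count_threshold": 5,  # area-increase frame count
            "third_count_threshold": 5},  # continuous-gaze frame count
    "SN2": {"first_count_threshold": 3,
            "second_count_threshold": 4,
            "third_count_threshold": 4},
}

def get_configuration(device_id):
    """Resolve a terminal's configuration information from its device
    identifier via the association relationship (steps 412-413); returns
    None for an unknown device identifier."""
    return FACE_RECOGNITION_CONFIG.get(device_id)
```

A functional-relationship form of the association would replace the table with a computation over the device identifier; the embodiments allow either.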
In summary, in the method of this embodiment, the server further obtains the device identifier and obtains the terminal's configuration information according to it, so that the terminal can recognize the user's face imperceptibly according to the configuration information. Using the character-string serial number that uniquely identifies a terminal as the device identifier distinguishes the terminals from one another, allowing the server to configure corresponding configuration information for each terminal.
In an alternative embodiment based on fig. 4, the server also pays for the purchase order based on the identification result obtained from the identification.
Step 421, obtaining the payment account according to the identification result obtained by the identity identification.
Illustratively, taking the terminal being a cash register as an example: when the cash register sends the face time-series features to the server, the server determines the payment account corresponding to the face according to the face time-series features, that is, the user's payment account in the payment application program. In some embodiments, the identification result includes the user's membership information at the merchant, which includes at least one of points, remaining balance, and coupons.
And 422, paying the purchase order according to the payment account to obtain a successful payment result.
In some embodiments, the server transfers funds from the payment account to the merchant account directly; in other embodiments, the server sends the payment account and the purchase order to a payment server, which pays the purchase order and returns the payment result to the server.
Step 423, sending the successful payment result to the terminal.
Illustratively, after the payment is successful, the server generates payment success information (payment success result) and sends the payment success information to the terminal.
In summary, in the method of this embodiment, the server sends the payment-success result to the terminal, so that the terminal completes the settlement of the purchase order and the user's frictionless payment is realized.
In some embodiments, if the terminal is a cash register of a merchant, and the cash register first identifies the face of the user a, the cash register queries whether the user a has member information at the merchant after identifying the identity information of the user a. If the user A does not have the member information, under the condition that the user A permits, one image frame is randomly selected from the collected image frame sequence containing the face of the user A to be used as a photo of the registered member information, and the mobile phone number of the user A does not need to be recorded. The next time user a consumes at the merchant, the member information for user a may be determined by facial recognition.
In an example, a terminal with a camera is taken as a cash register with a cash register function, and a face recognition method of the cash register is described in conjunction with a payment process of a user.
Fig. 5 shows a frame diagram of a system for face recognition provided in an exemplary embodiment of the present application, where the system includes a server 120 and a cash register device 111.
The cashier device 111 comprises a 3D camera 21 used to capture the image frame sequence. An application program installed in the cashier device 111 handles collection: if a customer presents a payment code, the application calls the 3D camera 21 to identify the payment account corresponding to the payment code and collects payment for the customer's purchase order; alternatively, the application calls the 3D camera 21 to capture the customer's face and recognizes the customer's identity from it, thereby collecting payment for the purchase order.
When the application program runs, it monitors the system-start broadcast of the cashier device 111. After the system starts, it invokes the application program; after starting, the application program reads the character-string serial number (SN code) of the cashier device 111. The SN code uniquely identifies the cashier device 111, that is, the SN code of every cashier device 111 is different. The cashier device 111 sends the SN code to the server 120 and also requests the face features in the face feature database from the server 120.
The server 120 stores configuration information of each cash register device 111 in advance, where the configuration information includes face recognition conditions including the number of continuous gazing times, the number of continuous increasing times, and the number of continuous same person times. The server 120 queries the corresponding configuration information according to the received SN code, and sends the configuration information corresponding to the SN code to the cashier device 111. The server 120 also sends the facial features in the facial feature database to the cashier device 111.
The cashier device 111 caches the face features and writes the configuration information into a local SQLite database (configuration database) 22. SQLite is a lightweight, ACID-compliant relational database management system; here it is used by the subsequent imperceptible-recognition service. After the configuration information is written into the SQLite database, the application program starts the imperceptible face recognition process.
The application calls the 3D camera 21 to acquire a sequence of image frames, which may or may not have a face. Illustratively, when a human face exists in the image frame sequence, three image frame sequences are output, wherein the three image frame sequences comprise a color image frame sequence, an infrared image frame sequence and a depth image frame sequence, the three image frame sequences are sent to the human face acquisition module 23, and the human face acquisition module 23 sends the three image frame sequences to the human face preference module 24.
The face preference module 24 is configured to perform quality detection on each frame of the acquired image frame sequence containing a face; the quality checks cover face occlusion, blur, illumination, pose angle, face size, and the like. Illustratively, after detecting the quality of the image frames, the face preference module 24 outputs face image frame A, face image frame B and face image frame C, and sends the output image frames to the face liveness detection module.
The face liveness detection module 25 is configured to perform liveness detection on the preferred face image frames; this embodiment takes the depth image as the input selected for liveness detection. Liveness detection is a method for confirming the real physiological characteristics of a subject in identity verification scenarios. In face recognition applications, liveness detection uses technologies such as face key point positioning and face tracking to verify, through combined actions such as blinking, opening the mouth, shaking the head, and nodding, whether the subject is a real living person. It can effectively resist common attack means such as photos, face swapping, face occlusion, and screen re-shooting, thereby helping users discriminate fraudulent behavior and safeguarding their personal information. Liveness detection is a mature technology in this field, and the embodiments of the present application do not limit the liveness detection method used.
Face features are extracted from the face image frames that pass liveness detection; that is, face time sequence features are extracted from the image frame sequence, and when the face time sequence features satisfy the face recognition condition, identity recognition is performed on the face features in the image frame sequence. Feature extraction methods include geometric feature extraction and template matching, which convert the face image information into a feature string that uniquely identifies a certain user. For example, face feature data a is obtained from image A.
After face time sequence feature extraction, the face feature data is cached in the face feature cache pool 26. The face time sequence features satisfying the face recognition condition include at least one of the following cases:
1. user identity judgment (whether the same person persists): illustratively, the face feature data a of a certain user a is not included in the face feature data, when the face feature data a is extracted, the face feature data a is stored in the face feature database, when the face feature data b in the same image frame sequence is extracted, the face feature data a is compared with the face feature data b, and when the similarity of the faces corresponding to the face feature data a and the face feature data b reaches a threshold value, the faces corresponding to the face feature data a and the face feature data b belong to the same face. And adding 1 to the counter, accumulating the number of the image frames belonging to the same face, and identifying the identity of the face features when the number of the image frames belonging to the same face reaches a first counting threshold, wherein the first counting threshold is related to the configuration information.
2. Face region area judgment (whether it continuously increases): illustratively, face feature data a is stored in the face feature database and corresponds to a first face region area. When a subsequent image frame containing a face is acquired, a second face region area in that frame is compared with the first; if the second face region area is larger than the first, the face region is continuously increasing, indicating that the user is approaching the cash register device 111. When the number of image frames with continuously increasing face region area reaches a second counting threshold, identity recognition is performed on the face features, where the second counting threshold comes from the configuration information.
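Condition 2 reduces to counting consecutive frames whose face bounding-box area grows. A minimal sketch, with the count threshold as an assumed stand-in for the configured value:

```python
def area_keeps_growing(areas, second_count_threshold=3):
    """Return True once the face region area increases for enough frames.

    `areas` holds the per-frame face bounding-box area; the threshold is an
    illustrative stand-in for the value from the configuration information.
    """
    count = 0
    for prev, curr in zip(areas, areas[1:]):
        if curr > prev:
            count += 1  # the user appears to be approaching the device
            if count >= second_count_threshold:
                return True
        else:
            count = 0  # a shrink or pause resets the streak
    return False
```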
3. Attention judgment (whether gazing persists): illustratively, the eye gazing state is detected through the face recognition framework OpenFace, and the gaze directions of the eyes in multiple consecutive image frames are compared to determine whether the eyes are in a continuous gazing state. If the gaze directions in the j-th and (j+1)-th image frames point in the same direction (in this embodiment, toward the cash register device, that is, the user faces the cash register device 111 as the frames are collected), the eyes are in a gazing state; when the gaze directions in multiple consecutive frames all point in the same direction, the eyes are in a continuous gazing state. The more consecutive image frames there are, the longer the eyes have been gazing. The number of image frames whose gaze directions point in the same direction is accumulated, and when it reaches a third counting threshold, identity recognition is performed on the face features, where the third counting threshold comes from the configuration information.
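Condition 3 can be sketched as a streak counter over per-frame gaze angles, such as the yaw/pitch estimates OpenFace produces. The angle tolerance and count threshold below are assumptions, not values from the embodiment.

```python
def is_front_gaze(yaw_deg, pitch_deg, tolerance_deg=10.0):
    # Treat small gaze angles as "looking straight at the device";
    # the 10-degree tolerance is an assumed value, not from the embodiment.
    return abs(yaw_deg) <= tolerance_deg and abs(pitch_deg) <= tolerance_deg

def sustained_gaze(gaze_angles, third_count_threshold=5):
    """Return True once the gaze stays on the device for enough frames.

    `gaze_angles` holds one (yaw, pitch) pair per frame, e.g. from a gaze
    estimator such as OpenFace; the count threshold is an illustrative
    stand-in for the value from the configuration information.
    """
    count = 0
    for yaw, pitch in gaze_angles:
        count = count + 1 if is_front_gaze(yaw, pitch) else 0
        if count >= third_count_threshold:
            return True
    return False
```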
When the face recognition condition is satisfied, the cash register device 111 transmits the face time sequence features to the server 120. The server 120 performs identity recognition according to the received features: the face payment service prestores a correspondence between each user's payment account and that user's face time sequence features, and the user identity (payment account) is determined from this correspondence and the received features, so that the purchase order is paid through that payment account.
The following are embodiments of an apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the device embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 6 shows a block diagram of a face recognition apparatus provided in an exemplary embodiment of the present application, where the apparatus includes the following components:
an acquisition module 610 for acquiring a sequence of image frames;
an extracting module 620, configured to, in response to a face appearing in the image frame sequence, extract a face time sequence feature from the image frame sequence, where the face time sequence feature is a face feature included in consecutive image frames;
and the processing module 630 is configured to perform identity recognition on the face features contained in the image frame sequence in response to that the face time sequence features satisfy the face recognition condition.
In an optional embodiment, the condition that the face time series feature satisfies the face recognition condition includes at least one of the following cases:
the face features in the face time sequence features belong to the same face;
the area of a face region in the face time sequence feature is continuously increased;
the human eyes in the human face time series feature are in a continuous watching state.
In an optional embodiment, the acquiring module 610 is configured to acquire the 1st image frame in which a human face appears in the image frame sequence, where the 1st image frame includes a first human face feature;
the processing module 630 is configured to accumulate the number n of image frames whose face features belong to the same face, in response to second face features matching the first face features in n consecutive image frames located after the 1st image frame in the image frame sequence, where n is a positive integer; and determine that the face features in the face time sequence features belong to the same face in response to the number of image frames whose face features belong to the same face reaching a first count threshold.
In an optional embodiment, the acquiring module 610 is configured to acquire a similarity between a first face feature and a second face feature;
the processing module 630 is configured to determine that the second face features match the first face features in response to the similarity reaching a similarity threshold.
In an optional embodiment, the acquiring module 610 is configured to acquire a j-th image frame and a (j+1)-th image frame in which a face appears in the image frame sequence, where the j-th image frame includes a first face region corresponding to a first face region area, the (j+1)-th image frame includes a second face region corresponding to a second face region area, and j is a positive integer;
the processing module 630 is configured to accumulate the number of image frames with continuously increasing face region area in response to the second face region area being larger than the first face region area; repeat the step of acquiring the j-th and (j+1)-th image frames and the step of accumulating the number of image frames with continuously increasing face region area; and determine that the face region area in the face time sequence features continuously increases in response to the number of image frames with continuously increasing face region area reaching a second counting threshold.
In an optional embodiment, the acquiring module 610 is configured to acquire image frames in which a human face appears in the image frame sequence, where each such image frame includes a line of sight direction of the human eyes; the processing module 630 is configured to accumulate the number n of image frames in which the human eyes are in the gazing state, in response to the line of sight directions included in n consecutive image frames in the image frame sequence belonging to the front view direction, where n is a positive integer; and determine that the human eyes in the face time sequence features are in the continuous gazing state in response to the number of image frames with the human eyes in the gazing state reaching a third counting threshold.
In an alternative embodiment, the apparatus comprises a reading module 640, a first transmitting module 650, and a first receiving module 660;
the reading module 640 is configured to read a device identifier of the terminal;
the first sending module 650 is configured to send an information obtaining request to the server, where the information obtaining request carries the device identifier;
the first receiving module 660 is configured to receive configuration information sent by the server, where the configuration information is obtained by the server according to the device identifier and the association relationship, the association relationship is used to represent a corresponding relationship between the device identifier and the configuration information, and the configuration information includes a face recognition condition.
In summary, in the apparatus of this embodiment, the image frame sequence is continuously acquired, and when a face appears in it, identity recognition is triggered once the extracted face time sequence features satisfy the face recognition condition. The user therefore does not need to manually select an identification method: identity information can be recognized frictionlessly simply by the user's face being in front of the camera, which improves both the efficiency and the accuracy of face recognition and enables the terminal to accurately identify the user's identity information. For example, in an unmanned convenience store, a user carrying many articles (with neither hand free) can be identified by the store's self-service settlement device using the face recognition method provided in the embodiments of the present application, completing a frictionless payment for the goods without any manual operation and improving payment convenience.
In the apparatus of this embodiment, the face time sequence features satisfying the face recognition condition include at least one of the following cases: the face features belong to the same face, the face region area continuously increases, or the eyes are in a continuous gazing state. Only then is the face recognition process started, so the terminal can realize frictionless recognition of identity information according to these conditions.
The apparatus of this embodiment also matches the face features in the 1st image frame against the face features in the n consecutive image frames after it, accumulates the number of image frames containing the same face, and determines that the face features belong to the same face when that number reaches the first counting threshold. This improves recognition accuracy and avoids false face recognition.
The apparatus of this embodiment also determines whether the first face feature matches the second face feature by checking whether their similarity reaches the similarity threshold, improving recognition accuracy and avoiding false face recognition.
The apparatus of this embodiment also compares the face region area in any two consecutive image frames, accumulates the image frames with continuously increasing face region area, and determines that the face region area continuously increases when that number reaches the second counting threshold. This improves recognition accuracy and avoids false face recognition.
The apparatus of this embodiment accumulates the number of image frames in which the eyes are in the gazing state when the gaze direction in n consecutive frames belongs to the front view direction, and determines that the eyes are in a continuous gazing state when that number reaches the third counting threshold. This improves recognition accuracy and avoids false face recognition.
The apparatus of this embodiment further sends the device identifier to the server, and the server obtains the terminal's configuration information according to the device identifier, so that the terminal can realize frictionless recognition of the user's face according to that configuration information. A character string serial number that uniquely identifies the terminal serves as the device identifier, distinguishing each terminal so that the server can set corresponding configuration information for each one.
Fig. 7 is a block diagram of a face recognition apparatus according to another exemplary embodiment of the present application, where the apparatus includes the following components:
a second receiving module 710, configured to receive the face time sequence features sent by the terminal, where the face time sequence features are face features included in consecutive image frames;
and an identification module 720, configured to perform identity recognition on the face time sequence features.
In an alternative embodiment, the apparatus includes a second sending module 730 and an obtaining module 740;
the second sending module 730 is configured to receive an acquisition request sent by a terminal, where the acquisition request carries an equipment identifier;
the obtaining module 740 is configured to determine an association relationship, where the association relationship is used to represent a corresponding relationship between an apparatus identifier and configuration information, and the configuration information includes a face recognition condition; acquiring configuration information according to the equipment identifier and the association relation;
the second sending module 730 is configured to send the configuration information to the terminal.
In an alternative embodiment, the apparatus includes a payment module 750;
the identification module 720 is configured to obtain a payment account according to an identification result obtained by identity identification;
the payment module 750 is configured to pay the purchase order according to the payment account to obtain a successful payment result;
the second sending module 730 is configured to send the successful payment result to the terminal.
In summary, in the apparatus provided in this embodiment, the server receives the face time sequence features sent by the cash register device and determines the user identity from them, so that the terminal realizes frictionless recognition of the user's identity information and face recognition efficiency is improved.
In the apparatus provided in this embodiment, the server also obtains the device identifier from the terminal and looks up the terminal's configuration information according to it, so that the terminal realizes frictionless recognition of the user's face according to that configuration information. A character string serial number that uniquely identifies the terminal serves as the device identifier, distinguishing each terminal so that the server can configure corresponding configuration information for each one.
In the apparatus provided in this embodiment, the server also sends the successful payment result to the terminal, so that the terminal completes the settlement process for the purchase order and frictionless payment is realized for the user.
Fig. 8 shows a schematic structural diagram of a server according to an exemplary embodiment of the present application. The server may be the server 120 in the computer system 100 shown in fig. 1. Specifically, the following sections are included.
The server 800 includes a Central Processing Unit (CPU) 801, a system Memory 804 including a Random Access Memory (RAM) 802 and a Read Only Memory (ROM) 803, and a system bus 805 connecting the system Memory 804 and the CPU 801. The server 800 also includes a basic input/output system (I/O system) 806, which facilitates transfer of information between devices within the computer, and a mass storage device 807 for storing an operating system 813, application programs 814, and other program modules 815.
The basic input/output system 806 includes a display 808 for displaying information and an input device 809 such as a mouse, keyboard, etc. for user input of information. Wherein a display 808 and an input device 809 are connected to the central processing unit 801 through an input output controller 810 connected to the system bus 805. The basic input/output system 806 may also include an input/output controller 810 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 810 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 807 is connected to the central processing unit 801 through a mass storage controller (not shown) connected to the system bus 805. The mass storage device 807 and its associated computer-readable media provide non-volatile storage for the server 800. That is, the mass storage device 807 may include a computer readable medium (not shown) such as a hard disk or Compact disk Read Only Memory (CD-ROM) drive.
Computer-readable media may include computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, Solid State Drives (SSD), magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. The random access memory may include resistive Random Access Memory (ReRAM) and Dynamic Random Access Memory (DRAM). Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 804 and mass storage device 807 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 800 may also be run by connecting to a remote computer on a network, such as the Internet. That is, the server 800 may be connected to the network 812 through the network interface unit 811 coupled to the system bus 805, or may use the network interface unit 811 to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
Fig. 9 shows a block diagram of a computer device 900 provided in an exemplary embodiment of the present application. The computer device 900 may be: a cash register device, an identity verification device, a smart camera, a smartphone, a tablet computer, a notebook computer, or a desktop computer. The computer device 900 may also be referred to by other names, such as user device, portable computer device, laptop computer device, or desktop computer device. For example, the computer device may be the terminal 110 shown in fig. 1.
Generally, computer device 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 9-core processor, an 8-core processor, and so forth. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement a face recognition method provided by method embodiments herein.
In some embodiments, computer device 900 may also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a display screen 905, a camera assembly 906, an audio circuit 907, a positioning assembly 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 904 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuitry 904 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or Wi-Fi (Wireless-Fidelity) networks. In some embodiments, the radio frequency circuit 904 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, it can also capture touch signals on or over its surface; such a touch signal may be input to the processor 901 as a control signal for processing. In this case, the display screen 905 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, providing the front panel of the computer device 900; in other embodiments, there may be at least two display screens 905, each disposed on a different surface of the computer device 900 or in a foldable design; in still other embodiments, the display screen 905 may be a flexible display disposed on a curved or folded surface of the computer device 900. The display screen 905 may even be arranged in a non-rectangular irregular shape, i.e., a shaped screen, and may be made using materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of a computer apparatus, and a rear camera is disposed on a rear surface of the computer apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for realizing voice communication. The microphones may be multiple and placed at different locations on the computer device 900 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert the electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the computer device 900 for navigation or LBS (Location Based Service). The positioning component 908 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the computer device 900. The power source 909 may be ac, dc, disposable or rechargeable. When power source 909 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 9 is not intended to be limiting of the computer device 900 and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components may be employed.
Embodiments of the present application further provide a computer device, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the face recognition method in the above embodiments.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the face recognition method in the above embodiments.
Embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the face recognition method as in the above embodiments.
It should be understood that reference herein to "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description covers only alternative embodiments of the present application and is not intended to limit the present application. Any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (15)

1. A face recognition method, comprising:
acquiring a sequence of image frames;
extracting face time-series features from the image frame sequence in response to a face appearing in the image frame sequence, wherein the face time-series features are face features contained in consecutive image frames; and
performing identity recognition on the face features contained in the image frame sequence in response to the face time-series features satisfying a face recognition condition.
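The overall flow of claim 1 can be sketched as follows. This is a minimal illustrative sketch only, not the patent's implementation: the class name, the feature representation (opaque per-frame values), and the threshold value are all assumptions introduced for illustration.

```python
# Illustrative sketch of claim 1: accumulate per-frame face features and
# trigger identity recognition only once a time-series condition holds.
from dataclasses import dataclass, field

@dataclass
class FaceTimeSeriesChecker:
    # Number of consecutive frames required (hypothetical value).
    threshold: int = 5
    features: list = field(default_factory=list)

    def push(self, face_feature):
        """Append the face feature extracted from one image frame."""
        self.features.append(face_feature)

    def condition_met(self) -> bool:
        # Placeholder condition: enough consecutive frames contained a face.
        # The patent's actual conditions are listed in claim 2.
        return len(self.features) >= self.threshold

checker = FaceTimeSeriesChecker(threshold=3)
for frame_feature in ["f1", "f2", "f3"]:   # stand-ins for extracted features
    checker.push(frame_feature)
print(checker.condition_met())  # True once 3 frames have accumulated
```

Identity recognition on the accumulated features would only run once `condition_met()` returns `True`, which is the gating behavior the claim describes.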
2. The method of claim 1, wherein the face time-series features satisfying the face recognition condition comprises at least one of:
the face features in the face time-series features belonging to the same face;
the area of the face region in the face time-series features continuously increasing; and
the human eyes in the face time-series features being in a continuous gazing state.
3. The method of claim 2, further comprising:
acquiring the 1st image frame in which a face appears in the image frame sequence, wherein the 1st image frame contains a first face feature;
accumulating the number n of image frames whose face features belong to the same face, in response to a second face feature in each of n consecutive image frames after the 1st image frame in the image frame sequence matching the first face feature, wherein n is a positive integer; and
determining that the face features in the face time-series features belong to the same face in response to the number of image frames whose face features belong to the same face reaching a first count threshold.
4. The method of claim 3, further comprising:
acquiring a similarity between the first face feature and the second face feature; and
determining that the second face feature matches the first face feature in response to the similarity reaching a similarity threshold.
5. The method of claim 2, further comprising:
acquiring a jth image frame and a (j+1)th image frame in which a face appears in the image frame sequence, wherein the jth image frame contains a first face region, the (j+1)th image frame contains a second face region, the first face region has a first face region area, the second face region has a second face region area, and j is a positive integer;
accumulating the number of image frames in which the face region area continuously increases, in response to the second face region area being greater than the first face region area;
repeating the step of acquiring the jth image frame and the (j+1)th image frame and the step of accumulating the number of image frames in which the face region area continuously increases; and
determining that the face region area in the face time-series features continuously increases in response to the number of image frames in which the face region area continuously increases reaching a second count threshold.
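The growing-area check of claim 5 reduces to comparing consecutive frame pairs and counting growth. A minimal sketch, in which the reset-on-interruption behavior and the threshold value are assumptions (the claim itself only describes accumulating and comparing against a second count threshold):

```python
def area_keeps_increasing(face_areas, second_count_threshold=3):
    """Count consecutive frame pairs in which the face-region area grows;
    return True once the count reaches the threshold (face approaching)."""
    n = 0
    for prev_area, next_area in zip(face_areas, face_areas[1:]):
        if next_area > prev_area:
            n += 1
            if n >= second_count_threshold:
                return True   # face region area continuously increases
        else:
            n = 0             # growth interrupted; restart the count (assumed)
    return False
```

For example, areas `[100, 120, 150, 200]` (in pixels) yield three consecutive growing pairs and satisfy a threshold of 3, the behavior one would expect when a user walks toward the camera to pay.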
6. The method of claim 2, further comprising:
acquiring image frames in which a face appears in the image frame sequence, wherein each image frame in which a face appears contains a gaze direction of the human eyes;
accumulating the number n of image frames in which the human eyes are in a continuous gazing state, in response to the gaze direction contained in each of n consecutive image frames in the image frame sequence belonging to a front-view direction, wherein n is a positive integer; and
determining that the human eyes in the face time-series features are in the continuous gazing state in response to the number of image frames in which the human eyes are in the continuous gazing state reaching a third count threshold.
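The gazing check of claim 6 follows the same counting pattern. Representing the gaze direction as an angular offset from the camera axis, and the tolerance and threshold values, are assumptions for illustration; the patent only requires deciding per frame whether the gaze direction belongs to the front-view direction and counting consecutive such frames.

```python
def eyes_continuously_gazing(gaze_offsets_deg,
                             max_offset_deg=10.0, third_count_threshold=3):
    """Treat a frame as front-view when the gaze deviates from the camera
    axis by at most max_offset_deg (assumed representation); count
    consecutive front-view frames up to the third count threshold."""
    n = 0
    for offset in gaze_offsets_deg:
        if abs(offset) <= max_offset_deg:
            n += 1
            if n >= third_count_threshold:
                return True   # human eyes are in a continuous gazing state
        else:
            n = 0             # gaze left the front-view direction (assumed reset)
    return False
```

A sequence of small offsets such as `[2, -3, 5]` satisfies a threshold of 3, whereas a glance away (a large offset mid-sequence) breaks the streak.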
7. The method of any one of claims 1 to 6, wherein before the acquiring the sequence of image frames, the method further comprises:
reading a device identifier of a terminal;
sending an information acquisition request to a server, the information acquisition request carrying the device identifier; and
receiving configuration information sent by the server, wherein the configuration information is obtained by the server according to the device identifier and an association relation, the association relation represents a correspondence between the device identifier and the configuration information, and the configuration information includes the face recognition condition.
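The request/response exchange of claim 7 can be sketched with the server's association relation modeled as a plain lookup table. Everything here is hypothetical: the dictionary contents, field names, and device identifiers are invented for illustration, and a real deployment would use a network call rather than a direct function call.

```python
# Hypothetical association relation held by the server:
# device identifier -> configuration information (including the
# face recognition condition and its count thresholds).
SERVER_ASSOCIATION = {
    "device-001": {"face_recognition_condition": "same_face",
                   "first_count_threshold": 5},
    "device-002": {"face_recognition_condition": "continuous_gazing",
                   "third_count_threshold": 8},
}

def handle_info_request(device_id: str) -> dict:
    """Server side: resolve configuration from the association relation."""
    return SERVER_ASSOCIATION.get(device_id, {})

def fetch_configuration(device_id: str) -> dict:
    """Terminal side: send the information acquisition request carrying
    the device identifier, and receive the configuration information."""
    request = {"device_id": device_id}          # information acquisition request
    return handle_info_request(request["device_id"])

config = fetch_configuration("device-001")
print(config["face_recognition_condition"])     # same_face
```

Keying the configuration on the device identifier lets the server tune the recognition condition per terminal (for example, stricter thresholds for payment devices) without updating the terminal software.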
8. A face recognition apparatus, comprising:
an acquisition module, configured to acquire an image frame sequence;
an extraction module, configured to extract face time-series features from the image frame sequence in response to a face appearing in the image frame sequence, wherein the face time-series features are face features contained in consecutive image frames; and
a processing module, configured to perform identity recognition on the face features contained in the image frame sequence in response to the face time-series features satisfying a face recognition condition.
9. The apparatus of claim 8, wherein the face time-series features satisfying the face recognition condition comprises at least one of:
the face features in the face time-series features belonging to the same face;
the area of the face region in the face time-series features continuously increasing; and
the human eyes in the face time-series features being in a continuous gazing state.
10. The apparatus of claim 9, wherein:
the acquisition module is configured to acquire the 1st image frame in which a face appears in the image frame sequence, wherein the 1st image frame contains a first face feature;
the processing module is configured to accumulate the number n of image frames whose face features belong to the same face, in response to a second face feature in each of n consecutive image frames after the 1st image frame in the image frame sequence matching the first face feature, wherein n is a positive integer; and
the processing module is configured to determine that the face features in the face time-series features belong to the same face in response to the number of image frames whose face features belong to the same face reaching a first count threshold.
11. The apparatus of claim 10, wherein:
the acquisition module is configured to acquire a similarity between the first face feature and the second face feature; and
the processing module is configured to determine that the second face feature matches the first face feature in response to the similarity reaching a similarity threshold.
12. The apparatus of claim 9, wherein:
the acquisition module is configured to acquire a jth image frame and a (j+1)th image frame in which a face appears in the image frame sequence, wherein the jth image frame contains a first face region, the (j+1)th image frame contains a second face region, the first face region has a first face region area, the second face region has a second face region area, and j is a positive integer;
the processing module is configured to accumulate the number of image frames in which the face region area continuously increases, in response to the second face region area being greater than the first face region area;
the processing module is configured to repeat the step of acquiring the jth image frame and the (j+1)th image frame and the step of accumulating the number of image frames in which the face region area continuously increases; and
the processing module is configured to determine that the face region area in the face time-series features continuously increases in response to the number of image frames in which the face region area continuously increases reaching a second count threshold.
13. The apparatus of claim 9, wherein:
the acquisition module is configured to acquire image frames in which a face appears in the image frame sequence, wherein each image frame in which a face appears contains a gaze direction of the human eyes;
the processing module is configured to accumulate the number n of image frames in which the human eyes are in a continuous gazing state, in response to the gaze direction contained in each of n consecutive image frames in the image frame sequence belonging to a front-view direction, wherein n is a positive integer; and
the processing module is configured to determine that the human eyes in the face time-series features are in the continuous gazing state in response to the number of image frames in which the human eyes are in the continuous gazing state reaching a third count threshold.
14. A computer device, characterized in that the computer device comprises: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the face recognition method according to any one of claims 1 to 7.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, implements the face recognition method according to any one of claims 1 to 7.
CN202110017428.7A 2021-01-07 2021-01-07 Face recognition method, device, equipment and storage medium Pending CN114742561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110017428.7A CN114742561A (en) 2021-01-07 2021-01-07 Face recognition method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114742561A true CN114742561A (en) 2022-07-12

Family

ID=82274153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110017428.7A Pending CN114742561A (en) 2021-01-07 2021-01-07 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114742561A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI822261B (en) * 2022-08-17 2023-11-11 第一商業銀行股份有限公司 Product checkout system and method

Similar Documents

Publication Publication Date Title
CN110705983B (en) Method, device, equipment and storage medium for code scanning payment processing
CN111368811B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN112036331A (en) Training method, device and equipment of living body detection model and storage medium
CN112257552B (en) Image processing method, device, equipment and storage medium
AU2020309094B2 (en) Image processing method and apparatus, electronic device, and storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
JP7103229B2 (en) Suspiciousness estimation model generator
CN110929159B (en) Resource release method, device, equipment and medium
CN113035196A (en) Non-contact control method and device for self-service all-in-one machine
CN110659895A (en) Payment method, payment device, electronic equipment and medium
CN110503416B (en) Numerical value transfer method, device, computer equipment and storage medium
CN114742561A (en) Face recognition method, device, equipment and storage medium
CN111754272A (en) Advertisement recommendation method, recommended advertisement display method, device and equipment
CN115206305B (en) Semantic text generation method and device, electronic equipment and storage medium
CN110782602A (en) Resource transfer method, device, system, equipment and storage medium
CN113987326B (en) Resource recommendation method and device, computer equipment and medium
CN111654717B (en) Data processing method, device, equipment and storage medium
CN109872470A (en) A kind of self-help teller machine working method, system and device
CN113378705B (en) Lane line detection method, device, equipment and storage medium
CN114078582A (en) Method, device, terminal and storage medium for associating service information
CN112001442A (en) Feature detection method and device, computer equipment and storage medium
CN111080630A (en) Fundus image detection apparatus, method, device, and storage medium
CN111243605A (en) Service processing method, device, equipment and storage medium
CN111325083A (en) Method and device for recording attendance information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40070428

Country of ref document: HK