CN114529959A - Application method of face recognition - Google Patents

Application method of face recognition

Info

Publication number
CN114529959A
Authority
CN
China
Prior art keywords
computing device
face recognition
picture
value information
face
Prior art date
Legal status
Pending
Application number
CN202011210990.3A
Other languages
Chinese (zh)
Inventor
陈普
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Cloud Computing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co Ltd filed Critical Huawei Cloud Computing Technologies Co Ltd
Priority to CN202011210990.3A priority Critical patent/CN114529959A/en
Priority to PCT/CN2021/128348 priority patent/WO2022095884A1/en
Publication of CN114529959A publication Critical patent/CN114529959A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a face recognition application method, system, apparatus, device, and non-volatile memory. The method of the embodiments comprises the following steps: acquiring a picture that includes a human face; performing face recognition on the picture to obtain value information bound to the face recognition result; and the computing device inputting the value information into an application through the operating system of the computing device.

Description

Application method of face recognition
Technical Field
The present application relates to the field of face recognition services (Face Recognition Service, FRS), and in particular to a face recognition application method, system, apparatus, device, and non-volatile memory.
Background
A Face Recognition Service (FRS) is an intelligent service that performs identity recognition by using a computer to process, analyze, and understand face images based on a person's facial feature information. Face recognition is provided to users as an open API (Application Programming Interface): by accessing and calling the API in real time, a user obtains face processing results, which helps automate face recognition, face comparison, similarity queries, and so on, builds intelligent business systems, and improves service efficiency. Current face recognition offerings provide sub-services such as face detection, face comparison, face search, and liveness detection, and are applicable to many scenarios, including enterprise and security applications, electronic identity, residential security management, and criminal investigation. In the prior art, the face recognition function is implemented by integrating a face recognition cloud service into the application, which requires modifying the client of the application software that integrates the cloud service and may introduce security risks to the application.
Disclosure of Invention
The embodiments of the present application provide a face recognition application method, system, apparatus, device, and non-volatile memory, described in detail below with reference to the accompanying drawings, so as to solve the security problems of implementing face recognition in the client of an application.
In a first aspect of the present application, a face recognition application method is provided. The method comprises the following steps: acquiring a picture that includes a human face; performing face recognition on the picture to obtain value information bound to the face recognition result; and the computing device inputting the value information into an application through its own operating system.
With this face recognition application method, the face recognition system communicates with the application through the operating system, so that a face recognition client or face recognition system can be integrated on the computing device without modifying the application.
In a possible design of the first aspect, the process of obtaining the face picture may be automatic, and the computing device may continuously read in a video stream. In one possible implementation, when the computing device detects a face in the video stream, the action of acquiring a picture of the face is automatically triggered.
In a possible design of the first aspect, the process of obtaining the face picture may be triggered by a user, and a triggering manner may be set in advance.
In one possible design of the first aspect, the value information is input into the application through a simulated input event of the operating system of the computing device.
The operating system of the computing device may provide a program interface for implementing the simulated input event. After receiving the simulated input event, the operating system notifies the application to process it. In this process, the application on the computing device does not need to communicate directly with the face recognition client or the face recognition system.
In one possible design of the first aspect, when one or more applications are running on the computing device, the computing device detects the application whose window is at the top of the desktop of the computing device's operating system and notifies that application of the simulated input event.
In one possible design of the first aspect, the computing device may trigger different applications to execute the simulated input event according to different triggering manners.
In one possible design of the first aspect, the computing device notifies a specific, pre-configured application to process the input event.
In one possible design of the first aspect, the picture of the face may be obtained by the server or obtained by the computing device.
In one possible design of the first aspect, the computing device performs face recognition on the picture and obtains value information bound to a face recognition result.
In a possible design of the first aspect, the performing face recognition on the picture and acquiring value information bound to a face recognition result includes: and the server performs face recognition on the picture and acquires value information bound with a face recognition result.
In a possible design of the first aspect, the performing face recognition on the picture and acquiring value information bound to a face recognition result includes: the server carries out face recognition on the picture; the computing device obtains value information bound with the face recognition result.
In one possible design of the first aspect, the value information includes any one or a combination of the following: an identification number, a mobile phone number, a member number, a bank card number, an order number, a reservation number, a pick-up number, a payment account, a logistics number, a license plate number, an address, an account number, a service number, and the like.
In a second aspect, the present application provides a system comprising a computing device and a server: the computing device is used for acquiring a picture comprising a human face; the server carries out face recognition on the picture to acquire value information bound with a face recognition result; the computing device inputs the value information into an application through an operating system of the computing device.
In a third aspect, the present application provides a system comprising a computing device and a server: the computing device obtains a picture including a face; the server carries out face recognition on the picture; the computing equipment acquires value information bound with a face recognition result; the computing device inputs the value information into an application through an operating system of the computing device.
In a fourth aspect, the present application provides a system comprising a computing device and a server: the server acquires a picture comprising a human face; the server carries out face recognition on the picture; the computing equipment acquires value information bound with a face recognition result; the computing device inputs the value information into an application through an operating system of the computing device.
In a fifth aspect, the present application provides a system comprising a computing device and a server: the server acquires a picture comprising a human face; the server carries out face recognition on the picture to acquire value information bound with a face recognition result; the computing device inputs the value information into an application through an operating system of the computing device.
In a sixth aspect, the present application provides a system comprising a computing device and a server: the computing device obtains a picture including a face; the server carries out face recognition on the picture to acquire value information bound with a face recognition result; the computing device inputs the value information into an application through an operating system of the computing device.
In a seventh aspect, the present application provides an application apparatus for face recognition, where the apparatus includes a face processing unit and an operating system. The face processing unit is used for acquiring a picture comprising a face, carrying out face recognition on the picture and acquiring value information bound with a face recognition result; the operating system is used for inputting the value information into an application.
In one possible design of the seventh aspect, the operating system is configured to input the value information to an application by simulating an input event.
In an eighth aspect, the present application provides a computing device comprising a processor and a memory. The processor executes the instructions in the memory to cause the computing device to implement the method implemented by the computing device in the first aspect of the present application or any of the possible designs of the first aspect.
In a ninth aspect, the present application provides a server comprising a processor and a memory. The processor executes the instructions in the memory to cause the server to implement the method implemented by the server in the first aspect of the present application or in any possible design of the first aspect.
In a tenth aspect, the present application provides a non-volatile memory comprising instructions that instruct a computer device to perform the method of the first aspect or any of the possible designs of the first aspect.
Drawings
Fig. 1 is a schematic diagram of an example of an application system architecture of face recognition provided in an embodiment of the present application.
Fig. 2 is a schematic diagram illustrating an example of a face recognition system architecture provided in an embodiment of the present application.
Fig. 3 is a flowchart illustrating an example of a face recognition method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an example application scenario provided in an embodiment of the present application.
Fig. 5 is a schematic diagram of an example of a structure of a face recognition system according to an embodiment of the present application.
Fig. 6 is a schematic diagram illustrating an example of a structure of a face recognition apparatus 300 according to an embodiment of the present application.
Fig. 7 is a flowchart illustrating an example of a face recognition method according to an embodiment of the present application.
Fig. 8 is a block diagram of an example of a computing device 500 according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and principles of the present application clearer, embodiments of the present application will be described in further detail below with reference to Fig. 1 to Fig. 8.
Fig. 1 is a schematic diagram illustrating an example architecture of a face recognition system according to an embodiment of the present disclosure. As shown in Fig. 1, the system may include a computing device 101 and a camera 102. In one embodiment, the computing device 101 may be a computing device running an operating system and having a memory, a processor, and a display. In one embodiment, the camera 102 is a camera-equipped device that may be separate from the computing device 101 or integrated into the computing device 101. For example, data may be transmitted between the camera 102 and the computing device 101 over a serial port, a Universal Serial Bus (USB), the Transmission Control Protocol/Internet Protocol (TCP/IP), Bluetooth, or other protocols. Optionally, in another possible embodiment, the camera 102 may also have an automatic face acquisition capability; for example, after the camera 102 detects a face in the video stream, it automatically captures a picture containing the face.
Fig. 2 is a schematic diagram illustrating an example architecture of a face recognition system according to an embodiment of the present application, and the face recognition system includes a computing device 101, a camera 102, and a server 103, where the server 103 is a device capable of providing a face recognition service.
In some possible embodiments, the server 103 may also be a server of a cluster or data center. A server (e.g., server 103) in the cluster or data center deploys an instance that is used to provide a face recognition cloud service.
The server 103 may deploy an instance for implementing face recognition, and the instance may be a virtual machine or a container. Alternatively, the server 103 may not deploy an instance and instead provide the face recognition function directly. Whether the server 103 provides the face recognition capability through an instance or directly, the present application collectively refers to this as the server 103 providing the face recognition capability, for example, providing a face recognition service to the computing device 101.
Fig. 3 is a flowchart illustrating a face recognition method according to an embodiment of the present application; the method may be applied to the system shown in Fig. 1 or Fig. 2 and illustrates integrating face recognition on a computing device. The steps of one embodiment of the method may include:
Step 110, obtaining a picture including a human face.
In one possible implementation, the computing device 101 obtains a video stream from the camera 102 and processes the video stream to obtain a picture containing a human face. For example, the processing may include detecting a face and capturing a face picture.
Optionally, the above method may further include: the server 103 acquires a video stream from the camera 102, processes the video stream, and obtains a picture including a human face.
In a possible implementation, the local camera 102 may support the face detection and face capture functions, and directly send the captured face picture to the computing device 101, so that the computing device 101 does not need to process a video stream to obtain a face picture.
In a possible implementation, the above manner may also be: the local camera 102 directly sends the captured face picture to the server 103.
In a possible implementation scenario, the above process of obtaining the face picture may be automatic, and the camera 102 may continuously read in a video stream to obtain the face picture. In one possible implementation, face recognition is performed after the camera 102 acquires a face picture.
Optionally, the process of acquiring the face picture may also be triggered by the user. For example, the acquisition and recognition of the face picture can be triggered by setting a shortcut key or other methods.
Step 120, performing face recognition on the picture to obtain value information bound to the face recognition result.
In one possible implementation, the computing device 101 sends the face picture to a server 103 that can provide a face recognition service, and the server 103 obtains the value information bound to the face. Alternatively, the computing device 101 obtains the value information bound to the face based on the face recognition result fed back by the server 103. The server 103 may be deployed in a public cloud cluster or a data center, or may be any other device that can provide a remote face recognition service.
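As a minimal illustrative sketch only, the following Python fragment shows how a computing device might submit a captured face picture to a remote face recognition service over HTTP. The endpoint URL, authentication header, request fields, and response layout are assumptions made for illustration; they are not an API defined in this application.

```python
# Hypothetical sketch: send a face picture to a remote recognition service and
# return the value information bound to the recognized face.
import requests

FRS_ENDPOINT = "https://frs.example.com/v2/face-search"  # assumed URL
AUTH_TOKEN = "replace-with-real-token"                    # assumed auth scheme

def recognize_face(picture_path: str) -> dict:
    """Upload a captured face picture and return its bound value information."""
    with open(picture_path, "rb") as f:
        resp = requests.post(
            FRS_ENDPOINT,
            headers={"X-Auth-Token": AUTH_TOKEN},
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    result = resp.json()
    # Assumed response layout: {"faces": [{"value_info": {"member_no": "..."}}]}
    return result["faces"][0]["value_info"]
```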
In one possible implementation, the computing device 101 locally implements face recognition and obtains value information bound to the face recognition result.
For example, the value information may be one or more items of valuable information such as a user name, an identification number, a mobile phone number, a member number, a bank card number, an order number, a reservation number, a pick-up number, a payment account, a logistics number, a license plate number, an address, an account number, or a service number.
Step 130, the computing device 101 inputs the value information into an application through an operating system of the computing device 101.
After the computing device 101 obtains the value information, its operating system inputs the content of the value information into an inputtable user interface (UI) of an application by simulating an input event. For example, the Windows operating system provides the API function keybd_event(), which simulates keyboard operation and can reproduce the corresponding keyboard action. The keybd_event() function can trigger a key event, i.e., generate a WM_KEYDOWN or WM_KEYUP message.
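As an illustration of this mechanism (and not of the claimed implementation), the following sketch calls the Win32 keybd_event() API from Python via ctypes to simulate a single key press; the use of Python/ctypes is an assumption made for the example, and a Windows environment is required.

```python
# Minimal sketch (Windows only): simulate one key press with keybd_event(),
# producing the WM_KEYDOWN / WM_KEYUP pair described above.
import ctypes

user32 = ctypes.windll.user32
KEYEVENTF_KEYUP = 0x0002

def press_key(vk_code: int) -> None:
    """Generate a key-down followed by a key-up event for one virtual key."""
    user32.keybd_event(vk_code, 0, 0, 0)                # key down -> WM_KEYDOWN
    user32.keybd_event(vk_code, 0, KEYEVENTF_KEYUP, 0)  # key up   -> WM_KEYUP

press_key(0x41)  # 0x41 is the virtual-key code of the 'A' key
```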
In one possible implementation, the event message sent by the operating system may be received by the foreground application, i.e., the application whose window is at the top layer of the system desktop. For example, when one or more applications are running on the computing device, the computing device detects the application whose window is at the top of the desktop of its operating system and notifies that application of the simulated input event.
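One way such foreground-window detection could be realized on Windows is sketched below using the Win32 GetForegroundWindow() API; this is an illustrative assumption rather than the detection method defined by this application.

```python
# Sketch (Windows only): find the title of the window currently at the top of
# the desktop, i.e. the application that would receive the simulated input.
import ctypes

user32 = ctypes.windll.user32

def foreground_window_title() -> str:
    hwnd = user32.GetForegroundWindow()            # handle of the foreground window
    length = user32.GetWindowTextLengthW(hwnd)     # title length in characters
    buf = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(hwnd, buf, length + 1)   # copy the window title
    return buf.value

print(foreground_window_title())
```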
Alternatively, the computing device may trigger different applications to process the simulated input event according to different trigger modes. In an example application scenario, different trigger modes correspond to different applications; for example, a shortcut key "K" corresponds to an application A, and a shortcut key "M" corresponds to an application B.
Optionally, the computing device notifies a specific, pre-configured application to process the input event. For example, the user sets in advance that the simulated input event is delivered to the "XX gym member management system" application for processing.
An example application scenario is shown in Fig. 4. The camera 102 may be an external camera device with face capture capability, and the computing device 101 is configured to process the face pictures and present the results in an application. For example, as shown in Fig. 4, the camera 102 sends a captured face picture to the computing device 101, and the computing device 101 performs recognition processing on the face image and acquires the bound value information according to the face recognition result.
For example, the value information may be one or more items of valuable information such as a user name, an identification number, a mobile phone number, a member number, a bank card number, an order number, a reservation number, a pick-up number, a payment account, a logistics number, a license plate number, an address, an account number, or a service number.
The computing device 101 generates a simulated input event to the operating system, with the event carrying the content of the value information. For example, a Windows system may implement simulated input through keybd_event(); the first argument of the keybd_event() function is a virtual-key code, and the virtual key corresponding to a character in the value information can be passed as that first argument. After an input event of the operating system is received by an application on the computing device, the event is forwarded for processing to the UI component of the inputtable user interface that currently has input focus, for example, the focused text input box in Fig. 4, which displays the entered value information content in the component.
In other possible embodiments, additional inputs may be added to the simulated input sequence through related configuration. For example, after the mobile phone number has been entered, an "Enter" keystroke may be simulated so as to better match the application's input behavior. A configured pre-input shortcut key (typically a key combination) may also be executed before entering the phone number, in order to bring a particular input box into focus. A sketch of such an input sequence follows.
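The following sketch types a recognized value string character by character and then simulates an "Enter" keystroke, as in the example above. VkKeyScanA() and keybd_event() are Win32 APIs; the Python/ctypes wrapper, the overall sequence, and the placeholder phone number are illustrative assumptions (shift states are ignored for simplicity).

```python
# Sketch (Windows only): type a value string (e.g. a mobile phone number) via
# simulated keystrokes and finish with "Enter".
import ctypes

user32 = ctypes.windll.user32
KEYEVENTF_KEYUP = 0x0002
VK_RETURN = 0x0D

def type_value_info(text: str, press_enter: bool = True) -> None:
    for ch in text:
        vk = user32.VkKeyScanA(ord(ch)) & 0xFF      # low byte is the virtual-key code
        user32.keybd_event(vk, 0, 0, 0)
        user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0)
    if press_enter:
        user32.keybd_event(VK_RETURN, 0, 0, 0)
        user32.keybd_event(VK_RETURN, 0, KEYEVENTF_KEYUP, 0)

type_value_info("13800000000")  # placeholder phone number, followed by Enter
```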
Fig. 5 is a schematic diagram illustrating an example structure of the face recognition system of a computing device 101 according to an embodiment of the present application. Inside the computing device 101, the face processing unit 202 is configured to obtain a picture including a face, perform face recognition on the picture, and obtain value information bound to the face recognition result. The operating system 207 is used to input the value information into the application 201. The application 201 is an application installed and running on the computing device 101 and, in the embodiments of the present application, is configured to receive the value information input by the operating system and present the input result.
By way of example, the face processing unit 202 may be a local application on the computing device 101, a Web browser plug-in, a Web-based JavaScript program, or other similar means. The application 201 may be an application requiring face recognition, such as a library management system or a gymnasium member management system, or may be a Web browser-based application. The camera 102 is an external device with image or video stream acquisition capability, or a device integrated on the computing device 101. Optionally, the camera 102 may have face detection and acquisition functions.
In some possible implementations, as shown in Fig. 5, the face processing unit 202 may further include several sub-units: a face acquisition unit 203, a face recognition unit 204, a simulated input unit 205, and a face import unit 206. The four units are described below.
The face acquisition unit 203 is configured to acquire a video stream from the camera 102, process the video stream to obtain a picture including a face, and send the acquired face picture to the face recognition unit 204 for face recognition. Alternatively, the face acquisition unit 203 may also acquire a captured face picture directly from the camera 102.
The face recognition unit 204 is configured to recognize a face picture and acquire the value information corresponding to the face. In one application scenario, for example, the face recognition unit 204 accesses the server 103 providing the face recognition cloud service and obtains the value information corresponding to the face returned by the server 103. In this example scenario, the face recognition unit 204 may be configured to package and send data, call the cloud face recognition function, obtain the identification information returned by cloud face recognition, and so on. Optionally, the face recognition cloud service may be a face recognition cloud service provided by a public cloud or a private cloud, or a remote face recognition service provided by a set of physical servers. Optionally, the service may also be a local computing module integrated into the face recognition unit. In some possible embodiments, after acquiring the identification information, the face recognition unit 204 triggers the simulated input of the simulated input unit 205.
The simulated input unit 205 is used to generate simulated input events for the operating system 207, after which the operating system 207 inputs the value information corresponding to the face into an application. For example, the simulated input unit 205 may use the obtained value information to trigger a simulated input event through the interface capability of the operating system 207; for instance, the interface provided by the operating system may be used to implement simulated keyboard input.
The face import unit 206 is an optional unit, configured to bind the face picture to the identification information corresponding to the face and upload them to a local or cloud face database. Optionally, the face picture may be acquired by the face acquisition unit 203 or uploaded locally from the computing device.
As an example of a possible implementation scenario, a face recognition cloud service cluster establishes a face database based on the cloud service and imports one or more pieces of face information into the face database. Optionally, the face database may be deployed locally or on another remote device. The face information includes a face picture and other information bound to it, such as the face's identification information. The number of faces to import into the face database depends on the actual application scenario: for a small store, a few hundred members may be enough, while a large chain may have hundreds of thousands of members in one city. In practice, alongside the face import function, the module should also provide additional functions such as face query, face update, face deletion, and batch face deletion.
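A minimal in-memory sketch of such a face database is given below, binding a face picture to its associated information and exposing the import, query, update, and (batch) delete operations mentioned above; the class and field names are assumptions for illustration, and a real deployment would use a local or cloud database instead.

```python
# Minimal in-memory sketch of a face database: each record binds a face picture
# to the information associated with the face (e.g. value information).
class FaceDatabase:
    def __init__(self):
        self._records = {}  # face_id -> {"picture": bytes, "value_info": dict}

    def import_face(self, face_id, picture, value_info):
        self._records[face_id] = {"picture": picture, "value_info": value_info}

    def query(self, face_id):
        return self._records.get(face_id)

    def update(self, face_id, value_info):
        self._records[face_id]["value_info"] = value_info

    def delete(self, *face_ids):          # supports batch deletion
        for fid in face_ids:
            self._records.pop(fid, None)

db = FaceDatabase()
db.import_face("member-0001", b"<jpeg bytes>", {"member_no": "0001", "phone": "13800000000"})
```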
Fig. 6 is a face recognition apparatus 300 provided in the present application, which includes a face processing unit 202 and an operating system 207. The description of this section in fig. 5 is referred to for the face processing unit 202 and the operating system 207.
Fig. 7 illustrates an example flow of a face recognition application method according to an embodiment of the present application. For example, before face recognition is started, the face processing unit 202 is configured with, for example, cloud access information (cloud address, port), authentication information (authentication token, AK/SK, username and password, etc.), face recognition configuration information (recognition trigger mode, trigger shortcut key, whether to perform liveness detection, etc.), and so on. Then, optionally, uploading the face image and identification information or other related operations is implemented by the face import unit 206 in Fig. 5; refer to the foregoing description. Finally, one round of face recognition, as shown in steps 410-470 in Fig. 7, may be performed on the computing device.
Step 410, triggering face recognition.
For example, the user may actively trigger, by a shortcut key or other means, the face acquisition unit 203 of the face processing unit 202 to start face acquisition. A possible shortcut-key trigger is sketched below.
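As one illustrative way in which such a shortcut-key trigger could be wired up (assuming a Windows environment and an arbitrarily chosen Ctrl+Alt+K combination), the following sketch registers a global hotkey with the Win32 RegisterHotKey() API and starts face acquisition when it fires; the callback is a placeholder standing in for the face acquisition unit 203.

```python
# Sketch (Windows only): register a global hotkey and trigger face acquisition
# when it is pressed.
import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
MOD_ALT, MOD_CONTROL = 0x0001, 0x0002
WM_HOTKEY = 0x0312
HOTKEY_ID = 1

def start_face_acquisition():
    print("shortcut pressed: start acquiring a face picture")  # placeholder for unit 203

user32.RegisterHotKey(None, HOTKEY_ID, MOD_CONTROL | MOD_ALT, ord("K"))  # Ctrl+Alt+K
msg = wintypes.MSG()
while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) != 0:
    if msg.message == WM_HOTKEY and msg.wParam == HOTKEY_ID:
        start_face_acquisition()
```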
Optionally, recognition may also be triggered automatically. For example, the face acquisition unit 203 continuously reads in the video stream, and after obtaining a face picture from the camera, the face recognition unit 204 is triggered to perform recognition and the subsequent operations.
By adding a front-end detection module, the face recognition unit 204 can identify the current application, so that the face recognition unit 204 can cooperate with multiple applications.
Step 411, reading the camera video stream.
For example, when the trigger mode is active triggering, in this step the face acquisition unit 203 turns on the shooting function of the camera 102 and starts to read in the video stream.
Step 420, detecting and capturing the face.
For example, when the camera 102 does not have the face detection and capture function, the face acquisition unit 203 performs this function in this step: it performs face detection on the video stream and captures a face picture containing a clearly identifiable face.
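Purely as an illustration of this step (the application does not prescribe a particular detector), the sketch below reads the video stream and returns a frame once a face is detected, using OpenCV's stock Haar cascade as an assumed, commonly available detector.

```python
# Sketch of step 420: read frames from the camera until one contains a
# detectable face, then return that frame as the captured face picture.
import cv2

def capture_face_picture(camera_index=0):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(camera_index)          # continuously read in the video stream
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                return None                       # stream ended or camera unavailable
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:                    # a face was detected in this frame
                return frame                      # grab the picture containing the face
    finally:
        cap.release()
```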
Step 430, face recognition.
For example, the face recognition unit 204 accesses the face recognition cloud service, for example by calling the service interface of the face recognition cloud service. Optionally, if the face recognition computation is performed locally, a local computation module is called to recognize the current face picture.
Step 440, returning the bound value information.
For example, in this scenario, after face recognition succeeds, the server providing the face recognition cloud service returns the value information bound to the face. Face recognition may also fail; for example, if the face database contains no bound information for the face in the current picture, error information is returned. Optionally, after a recognition failure, face pictures may continue to be acquired and recognized, or acquisition and recognition may be stopped.
Step 450, triggering simulated input.
For example, after detecting a return message from the cloud server, the face recognition unit 204 sends the returned message to the simulated input unit and immediately triggers the simulated input unit 205 to generate simulated input.
Step 460, generating simulated input through the operating system.
For example, the simulated input unit may enter the value information or other related information into the application's user interface through the operating system's simulated keyboard input; for the specific implementation, refer to the foregoing example description.
The embodiments of the present application further provide a computing device 500, as shown in Fig. 8, where the computing device 500 may be the computing device 101 or the server 103. The computing device 500 includes a bus 501, a processor 502, and a memory 503, and, when the computing device 500 represents the computing device 101, may also include an optional component: a display 504. The processor 502, the memory 503, and the display 504 communicate via the bus 501.
The bus 501 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 8, but this is not intended to represent only one bus or type of bus.
For the computing device 101, the processor 502 may be a Central Processing Unit (CPU), and may also include other processor chips such as a Graphics Processing Unit (GPU). The memory 503 may be a Random Access Memory (RAM), a solid-state drive (SSD), or another device or memory instance with storage capability. In some possible implementations, the processor 502 may also control other interfaces to receive data, where the other interfaces may be, for example, camera interfaces.
For the server 103, the processor 502 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Neural Network Processing Unit (NPU), a Field-Programmable Gate Array (FPGA), or the like.
The memory 503 may be a Random Access Memory (RAM), a solid-state drive (SSD), or another device or instance with storage capability. The memory 503 stores executable program code, and the processor 502 executes the executable program code to implement the functions of the aforementioned computing device 101 or server 103, or to perform the steps performed by the computing device 101 or the server 103 in the methods described in the foregoing embodiments. Optionally, for the computing device 101, the processor 502 controls the display 504 to present the relevant results to the user.
The display 504 is an input/output (I/O) device. The device can display electronic documents such as images and characters on a screen for a user to view. The display 504 may be classified into a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, and the like, according to a manufacturing material.
The descriptions of the flows or structures corresponding to the above drawings each have their own emphasis; for a part not described in detail in one flow or structure, refer to the related descriptions of the other flows or structures.

Claims (17)

1. An application method of face recognition, the method comprising:
acquiring a picture comprising a human face;
carrying out face recognition on the picture to obtain value information bound with a face recognition result;
the computing device inputs the value information to an application through an operating system of the computing device.
2. The method of claim 1, wherein the inputting the value information into an application through an operating system of the computing device comprises:
inputting the value information to an application through a simulated input event of an operating system of the computing device.
3. The method of claim 1 or 2, wherein obtaining a picture comprising a human face comprises:
the server acquires a picture comprising a human face; alternatively, the first and second electrodes may be,
the computing device obtains a picture including a human face.
4. The method according to any one of claims 1 to 3, wherein the performing face recognition on the picture and obtaining value information bound with a face recognition result comprises:
and the computing equipment performs face recognition on the picture and acquires value information bound with a face recognition result.
5. The method according to any one of claims 1 to 3, wherein the performing face recognition on the picture and obtaining value information bound with a face recognition result comprises:
and the server performs face recognition on the picture and acquires value information bound with a face recognition result.
6. The method according to any one of claims 1 to 3, wherein the performing face recognition on the picture and obtaining value information bound with a face recognition result comprises:
the server carries out face recognition on the picture;
the computing device obtains value information bound with the face recognition result.
7. The method according to any one of claims 1 to 6, wherein the value information comprises any one or a combination of:
user name, identification number, mobile phone number, member number, bank card number, order number, reservation number, pick-up number, payment account number, logistics number, license plate number, address, account number, and service number.
8. A system comprising a computing device and a server;
the computing device obtaining a picture including a face;
the server carries out face recognition on the picture to acquire value information bound with a face recognition result;
the computing device inputs the value information into an application through an operating system of the computing device.
9. A system comprising a computing device and a server;
the computing device obtaining a picture including a face;
the server carries out face recognition on the picture;
the computing equipment acquires value information bound with a face recognition result;
the computing device inputs the value information into an application through an operating system of the computing device.
10. A system comprising a computing device and a server;
the server acquires a picture comprising a human face;
the server carries out face recognition on the picture;
the computing equipment acquires value information bound with a face recognition result;
the computing device inputs the value information into an application through an operating system of the computing device.
11. A system comprising a computing device and a server;
the server acquires a picture comprising a human face;
the server carries out face recognition on the picture to acquire value information bound with a face recognition result;
the computing device inputs the value information into an application through an operating system of the computing device.
12. A system comprising a computing device and a server;
the computing device obtaining a picture comprising a human face;
the server carries out face recognition on the picture to acquire value information bound with a face recognition result;
the computing device inputs the value information into an application through an operating system of the computing device.
13. An apparatus for applying face recognition, the apparatus comprising:
the face processing unit is used for acquiring a picture comprising a face, carrying out face recognition on the picture and acquiring value information bound with a face recognition result;
an operating system for inputting the value information to an application.
14. The apparatus of claim 13, wherein the operating system is configured to input the value information to an application by simulating an input event.
15. A computing device, wherein the computing device comprises a processor and a memory; the processor executes the instructions in the memory to cause the computing device to implement the method of any of claims 1 to 7 as implemented by the computing device.
16. A server, wherein the server comprises a processor and a memory; the processor executing the instructions in the memory causes the server to implement the server-implemented method of any of claims 1 to 7.
17. A non-volatile memory, characterized in that the non-volatile memory comprises instructions for implementing the method of any of claims 1 to 7.
CN202011210990.3A 2020-11-03 2020-11-03 Application method of face recognition Pending CN114529959A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011210990.3A CN114529959A (en) 2020-11-03 2020-11-03 Application method of face recognition
PCT/CN2021/128348 WO2022095884A1 (en) 2020-11-03 2021-11-03 Facial recognition application method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011210990.3A CN114529959A (en) 2020-11-03 2020-11-03 Application method of face recognition

Publications (1)

Publication Number Publication Date
CN114529959A true CN114529959A (en) 2022-05-24

Family

ID=81456956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011210990.3A Pending CN114529959A (en) 2020-11-03 2020-11-03 Application method of face recognition

Country Status (2)

Country Link
CN (1) CN114529959A (en)
WO (1) WO2022095884A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107360119A (en) * 2016-05-09 2017-11-17 中兴通讯股份有限公司 A kind of cloud desktop Sign-On authentication method, cloud desktop control system and client
CN107798307A (en) * 2017-10-31 2018-03-13 努比亚技术有限公司 A kind of public transport expense quick payment method, apparatus and computer-readable recording medium
CN110378090A (en) * 2019-06-19 2019-10-25 深圳壹账通智能科技有限公司 Account logon method, device, computer readable storage medium and computer equipment
CN110784628B (en) * 2019-08-14 2022-04-05 腾讯科技(深圳)有限公司 Image data acquisition processing method and system, intelligent camera and server
CN110647728A (en) * 2019-08-27 2020-01-03 武汉烽火众智数字技术有限责任公司 Convenient login method and device

Also Published As

Publication number Publication date
WO2022095884A1 (en) 2022-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination