WO2020135096A1 - Operation determination method and device based on expression groups, and electronic device - Google Patents
Operation determination method and device based on expression groups, and electronic device
- Publication number
- WO2020135096A1 (PCT/CN2019/125062)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face image
- current
- expression
- instruction
- facial
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/38—Payment protocols; Details thereof
- G06Q20/40—Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
- G06Q20/401—Transaction verification
- G06Q20/4014—Identity check for transactions
- G06Q20/40145—Biometric identity checks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/50—Maintenance of biometric data or enrolment thereof
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present application relates to the field of image processing technology, and in particular, to an operation determination method, device, and electronic device based on expression groups.
- mobile phone transfer services and bank ATMs can provide users with services such as transfers and cash deposits and withdrawals according to user instructions.
- a mobile phone transfer service or a bank ATM determines the legitimacy of the user's identity based on the user ID and the password entered by the user, and then performs the operations corresponding to the various instructions issued by the user.
- most of them need to confirm the user's identity and ensure that the user is a legitimate user before performing the operation corresponding to the instruction.
- the existing operation determination methods are very simple: most of them only use numeric/text passwords, fingerprints, or faces to verify the user's identity, and then perform the operation corresponding to the instruction issued by the user.
- the security and reliability of such simple password-based approaches are still low, and the credentials are easily stolen by criminals, in which case the operation corresponding to the instruction issued by the thief brings losses to the legitimate user.
- the purpose of the present application is to provide an operation determination method, device, and electronic device based on face recognition and expression groups, which can effectively improve the security and reliability of operation determination by electronic devices.
- an embodiment of the present application provides an operation determination method based on expression groups.
- the method includes: acquiring a current face image of a target object; performing live face recognition on the target object based on the current face image, and judging whether the identity of the target object is legitimate based on the recognition result, where live face recognition includes liveness recognition and face recognition; if the identity is legitimate, acquiring the current expression group of the current face image; determining the instruction to be executed corresponding to the current expression group; and executing the operation corresponding to the instruction to be executed.
- performing live face recognition on the target object based on the current face image includes: performing liveness recognition on the current face image to determine whether the current face image information comes directly from a real living body; when it does, performing face recognition on the current face image to determine whether the current face image matches a pre-stored face image in the pre-stored face image list; and if so, confirming that the identity of the target object is legitimate.
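The liveness-then-face-recognition flow described above can be sketched in Python as follows. This is an illustrative sketch only, not an implementation from the patent; every function and parameter name is hypothetical, and the recognition steps are supplied by the caller.

```python
# Illustrative sketch of the claimed flow; all names are hypothetical.

def determine_operation(current_image, prestored_images, instruction_db,
                        is_live, matches, expression_group_of):
    """Run liveness check, face match, expression lookup, then return
    the instruction to be executed (or None if any stage fails)."""
    # Step 1: liveness recognition - reject photos/videos of a face.
    if not is_live(current_image):
        return None
    # Step 2: face recognition against the pre-stored face image list.
    if not any(matches(current_image, p) for p in prestored_images):
        return None
    # Steps 3-4: resolve the current expression group, then look up
    # the instruction to be executed in the instruction database.
    group = expression_group_of(current_image)
    return instruction_db.get(group)
```

The key property shown here is that the expression-group lookup only runs after both the liveness and identity checks have passed.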
- the embodiments of the present application provide a second possible implementation manner of the first aspect, wherein the step of acquiring the current expression group of the face image includes: determining the current expression group of the face image based on the current face image and the pre-stored face image list.
- the embodiments of the present application provide a third possible implementation manner of the first aspect, wherein the step of determining the current expression group of the face image based on the current face image and the pre-stored face image list includes: acquiring a first expression feature model corresponding to the current face image; acquiring a second expression feature model corresponding to each pre-stored face image in the pre-stored face image list; comparing the first expression feature model with each second expression feature model to determine a similarity value between the current face image and each pre-stored face image; determining, based on the similarity values, the target face image corresponding to the current face image; acquiring the user account corresponding to the target face image; and determining the current expression group corresponding to the current face image according to the user account.
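One way to realize the similarity comparison and target-face selection described above is sketched below. The patent does not fix a similarity metric, so the inverse-mean-distance measure and the threshold value here are assumptions for illustration.

```python
import math

# Hypothetical similarity over expression feature models represented as
# lists of (x, y) key-point coordinates; the metric is an assumption.

def similarity(model_a, model_b):
    """Similarity in (0, 1]: 1 / (1 + mean Euclidean key-point distance)."""
    dist = sum(math.dist(a, b) for a, b in zip(model_a, model_b)) / len(model_a)
    return 1.0 / (1.0 + dist)

def find_target_face(first_model, prestored_models, threshold=0.8):
    """Return the index of the best-matching pre-stored face image,
    or None when no similarity value exceeds the preset threshold."""
    scored = [(similarity(first_model, m), i)
              for i, m in enumerate(prestored_models)]
    best_score, best_idx = max(scored)
    return best_idx if best_score > threshold else None
```

A preset threshold, as the embodiment later describes, keeps a poor best match from being accepted as the target face image.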
- the embodiments of the present application provide a fourth possible implementation manner of the first aspect, wherein the step of acquiring the first expression feature model corresponding to the current face image and acquiring the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list includes: determining, from the current face image, a first position coordinate set of multiple key facial feature points on the current face image, and using the first position coordinate set as the first expression feature model corresponding to the current face image; and determining a second position coordinate set of multiple key facial feature points for each pre-stored face image in the pre-stored face image list, and using each second position coordinate set as the second expression feature model corresponding to that pre-stored face image.
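A minimal sketch of using key-point position coordinates as the expression feature model, as this implementation manner describes; the landmark layout and coordinate values are invented for illustration.

```python
# Sketch of a position coordinate set used as an expression feature model;
# the landmark names and values are hypothetical.

def build_feature_model(landmarks, parts=("eyes", "mouth")):
    """Concatenate the (x, y) coordinates of the selected facial parts
    into one position coordinate set used as the expression feature model."""
    coord_set = []
    for part in parts:
        coord_set.extend(landmarks[part])
    return coord_set

# Hypothetical detector output for one face image.
current_landmarks = {
    "eyes":  [(30, 40), (70, 40)],                    # e.g. eye corners
    "mouth": [(50, 80), (40, 85), (60, 85), (50, 90)],  # lip top/sides/bottom
}
first_model = build_feature_model(current_landmarks)
```

The same construction would be applied to each pre-stored face image to obtain the second position coordinate sets.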
- the embodiments of the present application provide a fifth possible implementation manner of the first aspect, wherein the step of acquiring the first expression feature model corresponding to the current face image and acquiring the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list further includes: inputting the current face image into an expression recognition neural network so that the network determines the first expression feature model corresponding to the current face image; and inputting each pre-stored face image in the pre-stored face image list into the expression recognition neural network so that the network determines the second expression feature model corresponding to each pre-stored face image.
- the embodiments of the present application provide a sixth possible implementation manner of the first aspect, wherein the step of determining the current expression group corresponding to the current face image according to the user account includes: searching a pre-established group database for the multiple expression groups corresponding to the user account; obtaining the expression group matching the current face image; and determining that expression group as the current expression group.
- the embodiments of the present application provide a seventh possible implementation manner of the first aspect, wherein the step of determining the instruction to be executed corresponding to the current expression group includes: looking up, in a pre-established instruction database, the instruction to be executed corresponding to the current expression group; wherein the instruction database stores the correspondence between expression groups and instructions to be executed, and each instruction to be executed corresponds to at least one expression group.
- the embodiments of the present application provide an eighth possible implementation manner of the first aspect, wherein the instruction database includes at least a pass instruction, a payment instruction, and/or an alarm instruction; wherein the alarm instruction includes at least one alarm instruction, each alarm instruction corresponds to one alarm mode, and different alarm instructions correspond to different expression groups; and the payment instruction includes at least one payment instruction, each payment instruction corresponds to one payment amount, and different payment instructions correspond to different expression groups.
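The instruction database described in this implementation manner could be laid out in memory as a simple mapping; the expression-group names, instruction kinds, and tiers below are illustrative, not taken from the patent.

```python
# One possible in-memory layout for the instruction database; the group
# names, amounts, and alarm modes are invented for illustration.

INSTRUCTION_DB = {
    "left_eye_open_right_closed": ("PASS", None),       # authentication pass
    "eyes_closed_frown":          ("PAY", "small"),     # small-amount payment
    "big_smile":                  ("PAY", "large"),     # large-amount payment
    "mouth_open_left_eye_closed": ("ALARM", "sms"),     # SMS alarm
    "brow_raise_head_tilt":       ("ALARM", "freeze"),  # freeze account + alarm
}

def instruction_for(expression_group):
    """Look up the instruction to be executed for an expression group."""
    return INSTRUCTION_DB.get(expression_group)
```

Note that several expression groups may map to the same instruction kind while carrying different parameters (payment tier, alarm mode), matching the one-instruction-to-at-least-one-group correspondence above.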
- the embodiments of the present application provide a ninth possible implementation manner of the first aspect, wherein the method further includes: when the user registers, acquiring the user's user account and collecting the user's pre-stored face images; determining the second expression feature model of each pre-stored face image; storing the correspondence between the user account and the second expression feature models, and the correspondence between the user account and the pre-stored face images; determining the expression group of each face image based on each second expression feature model; and storing the correspondence between the expression groups set by the user and the instructions to be executed.
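A sketch of the registration-time bookkeeping this implementation manner describes, assuming a plain dict stands in for the databases and the feature-model and grouping functions are supplied by the caller; all names are hypothetical.

```python
# Hypothetical enrollment bookkeeping; `db` stands in for real storage.

def register_user(db, account, prestored_images, model_of, group_of,
                  group_to_instruction):
    """Store the per-user feature models, images, expression groups, and
    the user-chosen expression-group -> instruction correspondence."""
    models = [model_of(img) for img in prestored_images]
    db[account] = {
        "images": list(prestored_images),          # account <-> images
        "models": models,                          # account <-> feature models
        "groups": [group_of(m) for m in models],   # group per face image
        "instructions": dict(group_to_instruction),  # group -> instruction
    }
    return db[account]
```

Each stored correspondence mirrors one storage step listed in the embodiment above.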
- an embodiment of the present application further provides an operation determination device based on expression groups, which is executed by an electronic device, and the device includes: a facial image acquisition module configured to acquire a current facial image of a target object;
- the living body recognition module is configured to determine whether the current face image is directly derived from a real living body;
- the face recognition module is configured to perform face recognition on the target object based on the current face image, and determine whether the target object's identity is legal based on the recognition result;
- the feature acquisition module is configured to acquire the current expression group of the current facial image when the recognition result of the face recognition module is legal;
- the instruction determination module is configured to determine the instruction to be executed corresponding to the current expression group;
- the operation execution module is configured to perform the operation corresponding to the instruction to be executed.
- an embodiment of the present application provides an electronic device, including: an image acquisition device, a processor, and a storage device; the image acquisition device is configured to acquire image information; a computer program is stored on the storage device, and when the computer program is executed by the processor, the method according to any one of the first aspect to the ninth possible implementation manner of the first aspect is performed.
- an embodiment of the present application provides a chip storing a program; when the program is executed by a processor, the method steps of any one of the foregoing first aspect to the ninth possible implementation manner of the first aspect are performed.
- Embodiments of the present application provide an operation determination method, device, and electronic device based on expression groups, capable of acquiring a face image of a target object, performing live face recognition on the target object based on the face image, and determining whether the identity of the target object is legitimate; if it is, the instruction to be executed corresponding to the acquired current expression characteristic of the target object is determined, and the operation corresponding to that instruction is then executed.
- this method of determining instructions to be executed based on expression groups and performing the corresponding operations is more secure and reliable, and can effectively prevent criminals from stealing passwords and causing economic losses to legitimate users.
- using face recognition technology preserves the identity authentication function of face recognition, and the addition of user-defined expression actions ensures that the user will not display these actions in unaware states such as work, sleep, or coma, which greatly protects the security of the user's face information.
- FIG. 1 shows a flowchart of a method for determining an operation based on an expression group provided by an embodiment of the present application
- FIG. 2 shows a flowchart of another method for determining an operation based on an expression group provided by an embodiment of the present application
- FIG. 3 shows a schematic structural diagram of a terminal device provided by an embodiment of the present application
- FIG. 4 is a schematic structural diagram of an operation device based on expression groups provided by an embodiment of the present application.
- FIG. 5 shows a schematic structural diagram of another operation device based on expression groups provided by an embodiment of the present application
- FIG. 6 shows a schematic structural diagram of another operation device based on expression groups provided by an embodiment of the present application.
- FIG. 7 shows a schematic structural diagram of another operation determination device based on expression groups provided by an embodiment of the present application.
- FIG. 8 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- current face payment technology uses face recognition as the payment method, so a user's identity can be impersonated with photos or videos to perform payment transfers or some authentication behavior, harming the user's interests; in addition, because a normal frontal face pose is used as the payment credential, a user's face information can easily be stolen without the user's knowledge and used for payment transfers or authentication, greatly harming the user's interests; considering this, the instruction confirmation methods of existing electronic devices have low security and reliability, and are easily exploited by criminals.
- the operation determination method, device, and electronic device based on expression groups provided in the embodiments of the present application can confirm that the user is a real person, and confirm the different operation instructions preset by the user through the user's different expressions, thereby greatly improving the security and reliability of operation determination by the electronic device.
- because live face technology is used, the user must perform the operation in person to pass authentication, which greatly protects the user's interests; and because a specified instruction can only be triggered by completing the corresponding expression action, which a user will rarely display at work, during entertainment, while asleep, in a coma, while drunk, in daily life, or otherwise unknowingly, it can effectively prevent misappropriation of face information.
- the method may be performed by an electronic device, where the electronic device may be a camera, a live face camera, a bank ATM, a self-service terminal, a USB shield with a camera, a bank USB shield with a camera, a tax control panel with a camera, a mobile phone, a smart TV, a personal computer, a laptop, a tablet PC, a PC connected to a camera device, an IPC connected to a camera device, a PDA, a handheld device, a smart watch, smart glasses, a smart POS machine, a smart scanner, a smart robot, a smart car, a smart home device, a smart payment terminal, a smart TV with a camera, etc.
- the method includes the following steps:
- Step S102: Acquire the current face image of the target object.
- image acquisition devices include camera devices such as cameras and live face cameras, as well as camera-equipped devices such as mobile phones, U-shields with cameras, and tax control panels with cameras.
- Step S104: Perform live face recognition on the target object based on the current face image, and determine whether the identity of the target object is legal according to the recognition result.
- to judge whether the identity of the target object is legitimate, live face recognition must be performed on the current face image; combining liveness recognition and face recognition further improves the accuracy and security of the legitimacy judgment. In a specific application, liveness recognition is first used to determine whether the current face image comes directly from a real living body, and face recognition technology is then applied to the collected face image. Specifically, the current face image can be compared one by one with the pre-stored face images to determine whether the current face image matches at least one pre-stored face image, and thereby whether the identity information of the target object is legal.
- the pre-stored face image may be a face image or a face image set of a specified user, a face image set of several users, or a face image set of all users.
- living body recognition may be performed to prevent others from misusing the user's face information through photographs and other items.
- Step S106: If the identity is legitimate, obtain the current expression group of the current face image.
- when the identity of the target object is legitimate, it is necessary to further obtain the current expression group of the current face image so that the corresponding operation can be completed based on the current expression group.
- the target face image corresponding to the current face image can be determined, and the current expression group corresponding to the current face image can then be obtained through the target face image.
- a similarity threshold can be preset, and when the similarity value is greater than the preset similarity threshold, the target face image can be determined.
- Step S108: Determine the instruction to be executed corresponding to the current expression group.
- the instruction to be executed corresponding to the expression group may be looked up in the pre-established instruction database; the instruction database stores the correspondence between expression groups and instructions to be executed; the instructions to be executed include at least an authentication pass instruction, a payment instruction, and/or an alarm instruction.
- the authentication pass instruction may be an identity authentication completion instruction, or an authority opening instruction of an electronic device, etc.
- the payment instruction may include multiple payment instructions, each corresponding to a payment amount, with different payment instructions corresponding to different expression groups; the payment limits can be divided into small, large, extra-large, etc.
- the alarm instructions include multiple alarm instructions, each corresponding to an alarm method, with different types of alarm instructions corresponding to different expression groups; the alarm methods can include freezing the fund account and alarming, performing a fake transfer and alarming, performing a real transfer and alarming, etc.
- the expression group of the target object can be determined based on the correspondence defined over key-point position differences, and the expression group information can then be looked up in the pre-established instruction database to find the instruction to be executed corresponding to that expression group.
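A toy illustration of classifying the expression group from key-point position differences relative to the user's neutral enrollment image; the tolerance value and the classification rules are invented for illustration, not specified by the patent.

```python
import math

# Hypothetical rule: classify the expression group from how far named key
# points moved relative to the user's neutral enrollment image.

def expression_group(neutral, current, tol=3.0):
    """Return a group label based on which key points moved more than
    `tol` pixels (rules and labels are illustrative)."""
    moved = {name for name in neutral
             if math.dist(neutral[name], current[name]) > tol}
    if {"mouth_top", "mouth_bottom"} <= moved:
        return "mouth_open"
    if "left_eye_upper" in moved and "right_eye_upper" not in moved:
        return "left_eye_closed"
    return "neutral"
```

The resulting group label is what would then be looked up in the instruction database.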
- Step S110: Perform the operation corresponding to the instruction to be executed.
- the operation corresponding to the authentication-passed instruction is an authority opening operation.
- the authority opening operation may include allowing the user to specify an interface and allowing the user to use specific functions of the electronic device, etc.;
- if the instruction to be executed is a small-amount payment instruction, the corresponding operation can be a transaction operation that allows small-value transfers or small-value deposits and withdrawals;
- if the instruction to be executed is a text message alarm instruction, the corresponding operation can be that the electronic device sends a text message warning to the associated terminal.
- Embodiments of the present application provide an operation determination method based on expression groups.
- the method can obtain a face image of a target object and perform face recognition on the target object based on the face image to determine whether the identity of the target object is legitimate; if it is, the current expression group corresponding to the current face image is acquired, the instruction to be executed corresponding to that expression group is determined, and the operation corresponding to the instruction to be executed is then performed.
- This method of determining the corresponding instruction to be executed according to expression characteristics and executing the corresponding operation improves the security and reliability of operation determination by the electronic device, and effectively prevents criminals from stealing passwords and causing economic losses to legitimate users.
- using face recognition technology preserves the identity authentication function of face recognition, and the addition of user-defined expression actions ensures that the user will not display these actions in unaware states such as work, sleep, or coma, which greatly protects the security of the user's face information.
- the electronic device can instruct user A to make different custom expressions, so as to collect, through the camera, face images of the different custom expressions presented by user A; user A can set the correspondence between expression characteristics and instructions to be executed, for example: the expression of opening the left eye and closing the right eye corresponds to the account login instruction; the expression of closing both eyes and frowning corresponds to the small-amount transfer instruction; and the expression of opening the mouth and closing the left eye corresponds to the SMS alarm instruction.
- the electronic device sets key points on the outline of the face, eyebrows, eyes, nose or mouth and other face parts.
- the number and positions of the key points should preferably reflect the expression characteristics of the user's face, for example: the eye feature points include at least the inner and outer corners of the eye, the upper and lower ends, and the center of the eyeball; the eyebrow feature points include at least three marked points, at both ends and the middle of the eyebrow; the nose feature points include at least the upper end, the left and right ends of the lower part, and the tip of the nose; and the mouth feature points include at least four points, at the top, bottom, left, and right of the lips. Through these expression features, the user's expression group can be determined.
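The key points enumerated above can be captured in a small schema; the point names below are paraphrased from the text and the layout is only a sketch, not a specification from the patent.

```python
# Minimal schema for the key points listed above; counts follow the text
# (five eye points, three eyebrow points, four nose points, four mouth points).

KEY_POINT_SCHEMA = {
    "eye":     ["inner_corner", "outer_corner", "upper", "lower",
                "eyeball_center"],
    "eyebrow": ["left_end", "middle", "right_end"],
    "nose":    ["top", "lower_left", "lower_right", "tip"],
    "mouth":   ["top", "bottom", "left", "right"],
}

def total_key_points(schema):
    """Count how many key points one face annotation would contain."""
    return sum(len(points) for points in schema.values())
```

A landmark detector's output would be validated against such a schema before building the position coordinate sets.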
- the electronic device can record the instruction to be executed corresponding to the expression group set by the user, thereby establishing an instruction database and storing the correspondence between the expression group and the instruction to be executed.
- the electronic device collects the current face image of user A through the camera, compares the current face image with each pre-stored face image in the pre-stored face image list, determines the target face image corresponding to user A, and determines the expression group of the face image based on the target face image.
- the instruction database can then be used to determine that user A has issued an SMS alarm instruction to be executed, so that the corresponding operation can be performed and the SMS alarm is sent to the associated terminal preset by user A.
- grouping expressions and determining the instruction to be executed through the expression group reduces the influence of collection-angle and expression-strength variations on the determination of the instruction to be executed.
- the position of the collection device may be too high, too low, or too far to the left or right, which can result in collection effects such as the head appearing lowered, raised, or turned to the right or left.
- the degree of mouth opening also differs depending on how strongly the mouth is opened.
- At least one facial image is collected and included in the same facial expression group to improve the accuracy of determining the instruction to be executed.
- the method of determining the instruction to be executed through the expression group can prevent the criminals from stealing the account password of the legitimate user and manipulating the electronic device, thereby causing losses to the legitimate user.
- Step S202: Acquire the current face image of the target object.
- the face image of the target object is collected by the camera; the camera of the image collection device should be within a preset distance interval from the target face, within which the camera's image collection effect is better and more conducive to image acquisition.
- Step S204: Perform liveness recognition on the target object based on the current face image, and determine whether the current face image information comes directly from a real living body. If yes, go to step S206; if no, end.
- Step S206: When the current face image information comes directly from a real living body, perform face recognition on the current face image to determine whether the current face image matches a pre-stored face image in the pre-stored face image list. If yes, go to step S208; if no, end.
- reference face images may be stored in advance; after the face image of the target object is acquired, it is matched against each reference face image, and if a reference face image corresponding to the target object is matched, the identity of the target object can be determined to be legal.
- Step S208: Confirm that the identity of the target object is legitimate.
- Step S210: Compare the current face image with the pre-stored face image list to determine the current expression group of the current face image.
- the first expression feature model of the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list can be obtained; the first expression feature model is then compared with each second expression feature model to obtain the similarity value between the current face image and each pre-stored face image; the target face image corresponding to the current face image is determined based on the similarity values; and the user account corresponding to the target face image is obtained to determine the current expression group corresponding to the current face image. Through the current expression group, the instruction to be executed can be determined.
- in this way, the problem of failing to confirm the instruction to be executed due to differences in the angle at which the current face image is collected can be effectively alleviated.
- for example, when the target object needs to perform a payment operation, the current face images collected at different angles are all determined as the current expression group corresponding to the payment operation.
- an embodiment of the present application further provides a method for acquiring the first expression feature model corresponding to the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list, including the following steps:
- The preset key points are preferably representative feature points of the face, for example: eye feature points, lip feature points, nose feature points, and brow feature points. The number of feature points for each part can be set flexibly, as long as the chosen feature points can reflect the overall characteristics of the face.
- Based on these key points, the first expression feature model of the current face image can be determined. For example, the position coordinate information of the lip feature points and of the eye feature points is read from the current face image, and the combined coordinate information of the lip and eye feature points is determined as the first position coordinate set.
- The above is only an example using lip and eye feature points; in practical applications, all preset key points of the face can be analyzed one by one.
- The method described above for determining the first position coordinate set may likewise be used to determine each second position coordinate set.
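Building a position coordinate set from preset key points, as described above, might look like the following sketch. The landmark-dictionary layout and part names are assumptions for illustration; a real system would obtain landmarks from a face-alignment detector.

```python
def position_coordinate_set(landmarks, preset_parts=("eye", "lip")):
    """Build a position coordinate set (used here as an expression
    feature model) by reading the coordinates of the preset key points
    from a landmark dict of the form {"part_name": (x, y), ...}.
    Only points belonging to the preset parts are kept."""
    coords = []
    for name in sorted(landmarks):  # deterministic ordering of points
        if name.split("_")[0] in preset_parts:
            coords.append(landmarks[name])
    return coords
```

The same function would be applied both to the current face image (yielding the first position coordinate set) and to each pre-stored face image (yielding the second position coordinate sets).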
- Expression features can also be trained and recognized through a deep-learning neural network.
- Specifically, the current face image is input into a pre-trained expression recognition neural network, which recognizes the first expression feature model corresponding to the current face image.
- Correspondingly, each pre-stored face image in the pre-stored face image list needs to be input into the expression recognition neural network to obtain the second expression feature model corresponding to each pre-stored face image.
- The neural network model is trained on training data to obtain the expression recognition neural network capable of recognizing the expression feature models described above. Recognizing expression feature models through deep learning can further improve the accuracy of determining them.
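The inference step of such an expression recognition network can be illustrated with a deliberately tiny stand-in: a single linear layer with softmax over a few expression classes. The weights, input features, and class names below are invented for the sketch; a real system would load a trained deep model rather than hand-written weights.

```python
import math

# Toy stand-in for a pre-trained expression recognition network.
CLASSES = ["smile", "frown", "neutral"]
WEIGHTS = [[0.9, -0.2],   # one weight row per expression class
           [-0.7, 0.8],
           [0.1, 0.1]]

def recognize_expression(features):
    """Run the toy 'network' on a 2-value feature vector (e.g. mouth
    curvature and brow distance) and return the most probable class."""
    logits = [sum(w * x for w, x in zip(row, features)) for row in WEIGHTS]
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return CLASSES[probs.index(max(probs))]
```

In the scheme described above, the class (or feature vector) produced by the network would serve as the first or second expression feature model for the subsequent comparison step.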
- Step S212: determine the instruction to be executed corresponding to the current expression group.
- the instruction database stores the correspondence between expression groups and instructions to be executed.
- Step S214: perform the operation corresponding to the instruction to be executed.
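Steps S212 and S214 — looking up the instruction for the current expression group in an instruction database and executing the corresponding operation — can be sketched as a simple mapping plus dispatch. The group names, instruction names, and parameters below are hypothetical examples, not entries from the patent.

```python
# Hypothetical instruction database: the correspondence between
# expression groups and instructions to be executed.
INSTRUCTION_DB = {
    "group_pay_small": ("payment", {"limit": 100}),
    "group_pay_large": ("payment", {"limit": 5000}),
    "group_alarm":     ("alarm",   {"mode": "silent"}),
}

def determine_and_execute(current_group, handlers):
    """Look up the instruction for the current expression group and run
    the matching handler (step S214); returns the handler result, or
    None when the group has no registered instruction."""
    entry = INSTRUCTION_DB.get(current_group)
    if entry is None:
        return None
    name, params = entry
    return handlers[name](**params)
```

A design like this keeps the group-to-instruction correspondence in data, so different expression groups can map to different payment amounts or alarm modes without code changes.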
- In this way, the electronic device performs face recognition on the collected user face image, confirms in various ways the expression feature models corresponding to the current face image and the pre-stored face images, compares the expression feature models to determine the target face image corresponding to the current face image, and then determines the current expression group of the target object and its corresponding instruction to be executed, thereby performing the operation corresponding to that instruction.
- The terminal device may complete the above expression-group-based operation method on its own.
- an embodiment of the present application further provides a schematic structural diagram of the terminal device.
- The terminal device may be a personal device such as a mobile phone or a computer, or a chip.
- The terminal device includes a camera, a face recognition module, a living-body recognition module, and an expression recognition module, as well as a database configured to store the user's reference face image and specific face image list.
- The current face image of the user is collected through the camera; the face recognition module performs face recognition on it; the living-body recognition module determines whether the current face image comes directly from a real living body; and the expression recognition module recognizes the expression features of the user's current face image.
- The order of the above face recognition, living-body recognition, and expression recognition is not limited; multiple orderings are possible, such as face recognition, living-body recognition, and expression recognition in sequence, or living-body recognition, expression recognition, and face recognition in sequence.
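The flexible ordering of the three recognition stages can be sketched as a small driver that runs configurable checks in any requested sequence, stopping at the first failure. Stage names and the image representation are illustrative assumptions.

```python
def run_checks(image, checks, order):
    """Run the recognition stages in a configurable order; `checks`
    maps a stage name to a predicate on the image. Returns (ok, stage),
    where `stage` names the first failed check, or None on success."""
    for stage in order:
        if not checks[stage](image):
            return False, stage
    return True, None
```

For instance, the same `checks` dictionary can be driven with `["liveness", "face", "expression"]` or `["liveness", "expression", "face"]`, matching the alternative orderings described above.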
- the terminal device and the server can interact to complete the operation method based on the expression group.
- the interaction process between the terminal device and the server is not specifically limited.
- The embodiments of the present application provide an example interaction process between the terminal device and the server. As shown in FIG. 4, a schematic structural diagram of an operation device based on expression groups, the terminal device collects the user's current face image through the camera and sends it to the server, and the server completes face recognition, expression recognition, or living-body recognition based on the database.
- FIG. 5 is a schematic structural diagram of another operation device based on expression groups, in which the terminal device collects the user's current face image through the camera and performs living-body recognition on it. When the recognition result is that the current face image comes directly from a real living body, the current face image is sent to the server, and the server completes face recognition and expression recognition based on the database.
- FIG. 6 is a schematic structural diagram of another operation device based on expression groups, in which the terminal device collects the user's current face image through the camera, performs living-body recognition on it, and identifies the initial expression features; the expression recognition result and the current face image are then sent to the server, and the server completes face recognition based on the database and further determines the expression features corresponding to the current face image.
- the terminal device may be a mobile phone, a computer, a self-service terminal or an ATM machine.
- An embodiment of the present application provides an operation determination device based on expression groups.
- the device includes the following parts:
- the facial image acquisition module 702 is configured to acquire the current facial image of the target object.
- the judgment module 704 is configured to perform live face recognition on the target object based on the current face image, and determine whether the identity of the target object is legal according to the recognition result.
- the expression obtaining module 706 is configured to obtain the current expression characteristics of the target object when the judgment result of the judgment module is yes.
- the instruction determining module 708 is configured to determine the instruction to be executed corresponding to the current facial expression feature.
- the operation execution module 710 is configured to perform an operation corresponding to the instruction to be executed.
- With this apparatus, the face image acquisition module acquires the face image of the target object, face recognition is performed on the target object based on that image, and the judgment module determines whether the identity of the target object is legal. If it is legal, the expression obtaining module acquires the current expression group corresponding to the current face image, the instruction determining module determines the instruction to be executed corresponding to that expression group, and the operation execution module performs the operation corresponding to the instruction to be executed.
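The five modules (702-710) and the flow just described can be modeled as a small pipeline class. This is a minimal sketch under stated assumptions: each module is a callable injected at construction, and all names and signatures are illustrative rather than the patented interfaces.

```python
class ExpressionOperationDevice:
    """Wires the five modules of the embodiment into one pipeline:
    acquisition (702), judgment (704), expression obtaining (706),
    instruction determining (708), and operation execution (710)."""

    def __init__(self, acquire, verify_identity, get_group,
                 get_instruction, execute):
        self.acquire = acquire                  # face image acquisition
        self.verify_identity = verify_identity  # legality judgment
        self.get_group = get_group              # current expression group
        self.get_instruction = get_instruction  # instruction lookup
        self.execute = execute                  # operation execution

    def run(self, target):
        image = self.acquire(target)
        if not self.verify_identity(image):
            return None                         # illegal identity: stop
        group = self.get_group(image)
        return self.execute(self.get_instruction(group))
```

The early return on a failed identity check mirrors the requirement that the expression group is only acquired when the judgment result is yes.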
- This method of determining the instruction to be executed according to expression features and performing the corresponding operation can improve the safety and reliability of operation determination on the electronic device, and effectively prevents an illegal person from stealing the password and causing economic loss to the legal user.
- Using face recognition technology for identity authentication, combined with user-defined expression actions, ensures that the user will not display these actions in unconscious states such as work, sleep, or coma, which greatly protects the safety of the user's face data.
- an embodiment of the present application provides an electronic device.
- The electronic device includes an image acquisition device 80, a processor 81, a storage device 82, and a bus 83; the image acquisition device 80 includes a camera. A computer program is stored on the storage device 82, and when run by the processor, the computer program performs the method according to any one of the foregoing embodiments.
- The storage device 82 may include high-speed random access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
- The bus 83 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one bidirectional arrow is used in FIG. 8, but this does not mean that there is only one bus or one type of bus.
- the storage device 82 is configured to store a program, and the processor 81 executes the program after receiving the execution instruction.
- The method executed by the device defined by the flow disclosed in any of the embodiments of the present application may be applied to the processor 81, or implemented by the processor 81.
- the processor 81 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 81 or instructions in the form of software.
- The aforementioned processor 81 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or executed.
- the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware decoding processor, or may be executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module may be located in a mature storage medium in the art, such as random access memory, flash memory or read-only memory, programmable read-only memory or electrically erasable programmable memory, and registers.
- the storage medium is located in the storage device 82, and the processor 81 reads the information in the storage device 82 and completes the steps of the above method in combination with its hardware.
- An embodiment of the present application further provides a chip that stores a program, where the program executes the steps of the method according to any one of the foregoing embodiments when the program is executed by a processor.
- the disclosed system, device, and method may be implemented in other ways.
- the device embodiments described above are only schematic.
- The division of units is only a division of logical functions; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical, or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage media include: a USB flash drive, a mobile hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
Abstract
Description
Claims (13)
- An operation determination method based on expression groups, wherein the method is executed by an electronic device and comprises: acquiring a current face image of a target object; performing living-body face recognition on the target object based on the current face image, and judging whether the identity of the target object is legal according to the recognition result, the living-body face recognition comprising living-body recognition and face recognition; if legal, acquiring the current expression group of the current face image; determining the instruction to be executed corresponding to the current expression group; and performing the operation corresponding to the instruction to be executed.
- The method according to claim 1, wherein performing living-body face recognition on the target object based on the current face image comprises: performing living-body recognition on the current face image to judge whether the current face image information comes directly from a real living body; when the current face image information comes directly from a real living body, performing face recognition on the current face image to judge whether the current face image matches each pre-stored face image in a pre-stored face image list; and if yes, confirming that the identity of the target object is legal.
- The method according to claim 2, wherein the step of acquiring the current expression group of the current face image comprises: determining the current expression group of the current face image based on the current face image and the pre-stored face image list.
- The method according to claim 3, wherein the step of determining the current expression group of the current face image based on the current face image and the pre-stored face image list comprises: obtaining a first expression feature model corresponding to the current face image, and obtaining a second expression feature model corresponding to each pre-stored face image in the pre-stored face image list; comparing the first expression feature model with each second expression feature model to determine a similarity value between the current face image and each pre-stored face image; determining, according to the similarity values, a target face image corresponding to the current face image; obtaining a user account corresponding to the target face image; and determining, according to the user account, the current expression group corresponding to the current face image.
- The method according to claim 4, wherein the step of obtaining the first expression feature model corresponding to the current face image and obtaining the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list comprises: determining, according to the current face image, a first position coordinate set of a plurality of facial key feature points on the current face image; taking the first position coordinate set as the first expression feature model corresponding to the current face image; determining each second position coordinate set of the plurality of facial key feature points of each pre-stored face image in the pre-stored face image list; and taking each second position coordinate set as the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
- The method according to claim 4, wherein the step of obtaining the first expression feature model corresponding to the current face image and obtaining the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list further comprises: inputting the current face image into an expression recognition neural network, so that the expression recognition neural network determines the first expression feature model corresponding to the current face image; and inputting each face image in the face image list into the expression recognition neural network, so that the expression recognition neural network determines the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
- The method according to claim 4, wherein the step of determining, according to the user account, the current expression group corresponding to the current face image comprises: searching a pre-established group database for a plurality of expression groups corresponding to the user account; obtaining the expression group corresponding to the current face image; and determining the expression group corresponding to the current face image as the current expression group.
- The method according to claim 1, wherein the step of determining the instruction to be executed corresponding to the current expression group comprises: searching a pre-established instruction database for the instruction to be executed corresponding to the current expression group, wherein the instruction database stores the correspondence between expression groups and instructions to be executed, and each instruction to be executed corresponds to at least one expression group.
- The method according to claim 8, wherein the instruction database comprises at least a pass instruction, a payment instruction and/or an alarm instruction; the alarm instruction comprises at least one kind of alarm instruction, each kind of alarm instruction corresponds to one alarm mode, and different kinds of alarm instructions correspond to different expression groups; the payment instruction comprises at least one kind of payment instruction, each kind of payment instruction corresponds to one payment amount, and different kinds of payment instructions correspond to different expression groups.
- The method according to claim 1, further comprising: when a user registers, obtaining the user account of the user and collecting pre-stored face images of the user; determining the second expression feature model of each pre-stored face image, storing the correspondence between the user account and the second expression feature models, and storing the correspondence between the user account and the pre-stored face images; determining the expression group of each face image based on each second expression feature model; and storing the correspondence, set by the user, between the expression groups and the instructions to be executed.
- An operation determination apparatus based on expression groups, wherein the apparatus is executed by an electronic device and comprises: a face image acquisition module, configured to acquire a current face image of a target object; a living-body recognition module, configured to judge whether the current face image comes directly from a real living body; a face recognition module, configured to perform face recognition on the target object based on the current face image and judge, according to the recognition result, whether the identity of the target object is legal; an expression feature acquisition module, configured to acquire the current expression group of the current face image when the recognition result of the face recognition module is that the identity is legal; an instruction determination module, configured to determine the instruction to be executed corresponding to the current expression group; and an operation execution module, configured to perform the operation corresponding to the instruction to be executed.
- An electronic device, comprising an image acquisition apparatus, a processor, and a storage apparatus, wherein the image acquisition apparatus is configured to acquire image information, a computer program is stored on the storage apparatus, and the computer program, when run by the processor, performs the method according to any one of claims 1 to 10.
- A chip on which a program is stored, wherein the steps of the method according to any one of claims 1 to 10 are performed when the program is run by a processor.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/418,775 US20220075996A1 (en) | 2018-12-26 | 2019-12-13 | Method and device for determining operation based on facial expression groups, and electronic device |
CN201980086703.1A CN113366487A (zh) | 2018-12-26 | 2019-12-13 | 基于表情组别的操作确定方法、装置及电子设备 |
KR1020217022196A KR20210101307A (ko) | 2018-12-26 | 2019-12-13 | 표정군별 기반의 작업 확정 방법, 장치 및 전자 디바이스 |
EP19903861.3A EP3905102A4 (en) | 2018-12-26 | 2019-12-13 | METHOD AND DEVICE FOR DETERMINING OPERATION BASED ON FACE EXPRESSION GROUPS AND ELECTRONIC DEVICE |
CA3125055A CA3125055A1 (en) | 2018-12-26 | 2019-12-13 | An operation determination method based on expression groups, apparatus and electronic device therefor |
JP2021534727A JP2022513978A (ja) | 2018-12-26 | 2019-12-13 | 表情グループに基づく操作決定方法、装置及び電子機器 |
AU2019414473A AU2019414473A1 (en) | 2018-12-26 | 2019-12-13 | Method and device for determining operation based on facial expression groups, and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811617580.3A CN109886697B (zh) | 2018-12-26 | 2018-12-26 | 基于表情组别的操作确定方法、装置及电子设备 |
CN201811617580.3 | 2018-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020135096A1 true WO2020135096A1 (zh) | 2020-07-02 |
Family
ID=66925260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/125062 WO2020135096A1 (zh) | 2018-12-26 | 2019-12-13 | 基于表情组别的操作确定方法、装置及电子设备 |
Country Status (8)
Country | Link |
---|---|
US (1) | US20220075996A1 (zh) |
EP (1) | EP3905102A4 (zh) |
JP (1) | JP2022513978A (zh) |
KR (1) | KR20210101307A (zh) |
CN (2) | CN109886697B (zh) |
AU (1) | AU2019414473A1 (zh) |
CA (1) | CA3125055A1 (zh) |
WO (1) | WO2020135096A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114697686A (zh) * | 2020-12-25 | 2022-07-01 | 北京达佳互联信息技术有限公司 | 一种线上互动方法、装置、服务器及存储介质 |
CN116453196A (zh) * | 2023-04-22 | 2023-07-18 | 北京易知环宇文化传媒有限公司 | 一种人脸识别方法及系统 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10860841B2 (en) * | 2016-12-29 | 2020-12-08 | Samsung Electronics Co., Ltd. | Facial expression image processing method and apparatus |
CN109886697B (zh) * | 2018-12-26 | 2023-09-08 | 巽腾(广东)科技有限公司 | 基于表情组别的操作确定方法、装置及电子设备 |
CN110795981A (zh) * | 2019-07-01 | 2020-02-14 | 烟台宏远氧业股份有限公司 | 一种高压氧舱人脸识别交互方法及系统 |
CN110472488B (zh) * | 2019-07-03 | 2024-05-10 | 平安科技(深圳)有限公司 | 基于数据处理的图片显示方法、装置和计算机设备 |
CN112242982A (zh) * | 2019-07-19 | 2021-01-19 | 腾讯科技(深圳)有限公司 | 基于图像的验证方法、设备、装置和计算机可读存储介质 |
WO2021177183A1 (ja) * | 2020-03-05 | 2021-09-10 | 日本電気株式会社 | 監視装置、監視システム、監視方法およびプログラム記録媒体 |
CN111753750B (zh) * | 2020-06-28 | 2024-03-08 | 中国银行股份有限公司 | 活体检测方法及装置、存储介质及电子设备 |
CN111931675B (zh) * | 2020-08-18 | 2024-10-01 | 熵基科技股份有限公司 | 基于人脸识别的胁迫报警方法、装置、设备和存储介质 |
CN113536262A (zh) * | 2020-09-03 | 2021-10-22 | 腾讯科技(深圳)有限公司 | 基于面部表情的解锁方法、装置、计算机设备和存储介质 |
CN112906571B (zh) * | 2021-02-20 | 2023-09-05 | 成都新希望金融信息有限公司 | 活体识别方法、装置及电子设备 |
JPWO2023105586A1 (zh) * | 2021-12-06 | 2023-06-15 | ||
CN114724256A (zh) * | 2022-04-19 | 2022-07-08 | 盐城鸿石智能科技有限公司 | 一种具有图像分析的人体感应控制系统及方法 |
CN115514893B (zh) * | 2022-09-20 | 2023-10-27 | 北京有竹居网络技术有限公司 | 图像上传方法、图像上传装置、可读存储介质和电子设备 |
WO2024123218A1 (en) * | 2022-12-05 | 2024-06-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Two-factor facial recognition authentication |
CN116109318B (zh) * | 2023-03-28 | 2024-01-26 | 北京海上升科技有限公司 | 基于区块链的交互金融支付和大数据压缩存储方法及系统 |
CN117746477B (zh) * | 2023-12-19 | 2024-06-21 | 景色智慧(北京)信息科技有限公司 | 一种户外人脸识别方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528703A (zh) * | 2015-12-26 | 2016-04-27 | 上海孩子国科教设备有限公司 | 通过表情实现支付确认操作的方法及系统 |
CN108052811A (zh) * | 2017-11-27 | 2018-05-18 | 北京传嘉科技有限公司 | 基于面部纹理识别的终端控制方法及系统 |
CN108363999A (zh) * | 2018-03-22 | 2018-08-03 | 百度在线网络技术(北京)有限公司 | 基于人脸识别的操作执行方法和装置 |
CN108804884A (zh) * | 2017-05-02 | 2018-11-13 | 北京旷视科技有限公司 | 身份认证的方法、装置及计算机存储介质 |
CN109886697A (zh) * | 2018-12-26 | 2019-06-14 | 广州市巽腾信息科技有限公司 | 基于表情组别的操作确定方法、装置及电子设备 |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007213378A (ja) * | 2006-02-10 | 2007-08-23 | Fujifilm Corp | 特定表情顔検出方法、撮像制御方法および装置並びにプログラム |
JP4914398B2 (ja) * | 2008-04-09 | 2012-04-11 | キヤノン株式会社 | 表情認識装置、撮像装置、方法及びプログラム |
EP2437213A1 (en) * | 2009-06-16 | 2012-04-04 | Intel Corporation | Camera applications in a handheld device |
JP5655491B2 (ja) * | 2010-10-18 | 2015-01-21 | トヨタ自動車株式会社 | 開眼状態検出装置 |
WO2013008305A1 (ja) * | 2011-07-11 | 2013-01-17 | トヨタ自動車株式会社 | 瞼検出装置 |
US9082235B2 (en) * | 2011-07-12 | 2015-07-14 | Microsoft Technology Licensing, Llc | Using facial data for device authentication or subject identification |
US9032510B2 (en) * | 2012-09-11 | 2015-05-12 | Sony Corporation | Gesture- and expression-based authentication |
US9892413B2 (en) * | 2013-09-05 | 2018-02-13 | International Business Machines Corporation | Multi factor authentication rule-based intelligent bank cards |
JP6467965B2 (ja) * | 2015-02-13 | 2019-02-13 | オムロン株式会社 | 感情推定装置及び感情推定方法 |
CN104636734A (zh) * | 2015-02-28 | 2015-05-20 | 深圳市中兴移动通信有限公司 | 终端人脸识别方法和装置 |
US9619723B1 (en) * | 2016-02-17 | 2017-04-11 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and system of identification and authentication using facial expression |
JP6747112B2 (ja) * | 2016-07-08 | 2020-08-26 | 株式会社リコー | 情報処理システム、画像処理装置、情報処理装置、及びプログラム |
CN206271123U (zh) * | 2016-12-22 | 2017-06-20 | 河南牧业经济学院 | 基于面部识别的支付装置 |
CN107038413A (zh) * | 2017-03-08 | 2017-08-11 | 合肥华凌股份有限公司 | 食谱推荐方法、装置及冰箱 |
KR102324468B1 (ko) * | 2017-03-28 | 2021-11-10 | 삼성전자주식회사 | 얼굴 인증을 위한 장치 및 방법 |
CN107554483A (zh) * | 2017-08-29 | 2018-01-09 | 湖北科技学院 | 一种基于人脸表情动作识别的车辆防盗系统 |
CN107665334A (zh) * | 2017-09-11 | 2018-02-06 | 广东欧珀移动通信有限公司 | 基于表情的智能控制方法和装置 |
CN108875633B (zh) * | 2018-06-19 | 2022-02-08 | 北京旷视科技有限公司 | 表情检测与表情驱动方法、装置和系统及存储介质 |
-
2018
- 2018-12-26 CN CN201811617580.3A patent/CN109886697B/zh active Active
-
2019
- 2019-12-13 CA CA3125055A patent/CA3125055A1/en active Pending
- 2019-12-13 KR KR1020217022196A patent/KR20210101307A/ko not_active Application Discontinuation
- 2019-12-13 US US17/418,775 patent/US20220075996A1/en not_active Abandoned
- 2019-12-13 AU AU2019414473A patent/AU2019414473A1/en not_active Abandoned
- 2019-12-13 JP JP2021534727A patent/JP2022513978A/ja active Pending
- 2019-12-13 EP EP19903861.3A patent/EP3905102A4/en active Pending
- 2019-12-13 WO PCT/CN2019/125062 patent/WO2020135096A1/zh unknown
- 2019-12-13 CN CN201980086703.1A patent/CN113366487A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105528703A (zh) * | 2015-12-26 | 2016-04-27 | 上海孩子国科教设备有限公司 | 通过表情实现支付确认操作的方法及系统 |
CN108804884A (zh) * | 2017-05-02 | 2018-11-13 | 北京旷视科技有限公司 | 身份认证的方法、装置及计算机存储介质 |
CN108052811A (zh) * | 2017-11-27 | 2018-05-18 | 北京传嘉科技有限公司 | 基于面部纹理识别的终端控制方法及系统 |
CN108363999A (zh) * | 2018-03-22 | 2018-08-03 | 百度在线网络技术(北京)有限公司 | 基于人脸识别的操作执行方法和装置 |
CN109886697A (zh) * | 2018-12-26 | 2019-06-14 | 广州市巽腾信息科技有限公司 | 基于表情组别的操作确定方法、装置及电子设备 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114697686A (zh) * | 2020-12-25 | 2022-07-01 | 北京达佳互联信息技术有限公司 | 一种线上互动方法、装置、服务器及存储介质 |
CN114697686B (zh) * | 2020-12-25 | 2023-11-21 | 北京达佳互联信息技术有限公司 | 一种线上互动方法、装置、服务器及存储介质 |
CN116453196A (zh) * | 2023-04-22 | 2023-07-18 | 北京易知环宇文化传媒有限公司 | 一种人脸识别方法及系统 |
CN116453196B (zh) * | 2023-04-22 | 2023-11-17 | 深圳市中惠伟业科技有限公司 | 一种人脸识别方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
EP3905102A4 (en) | 2022-09-14 |
CN109886697A (zh) | 2019-06-14 |
CN113366487A (zh) | 2021-09-07 |
KR20210101307A (ko) | 2021-08-18 |
CN109886697B (zh) | 2023-09-08 |
US20220075996A1 (en) | 2022-03-10 |
EP3905102A1 (en) | 2021-11-03 |
JP2022513978A (ja) | 2022-02-09 |
CA3125055A1 (en) | 2020-07-02 |
AU2019414473A1 (en) | 2021-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020135096A1 (zh) | 基于表情组别的操作确定方法、装置及电子设备 | |
KR102350507B1 (ko) | 출입 제어 방법, 출입 제어 장치, 시스템 및 저장매체 | |
JP6911154B2 (ja) | アクセス制御方法及び装置、システム、電子デバイス、プログラムならびに媒体 | |
JP7279973B2 (ja) | 指定ポイント承認における身元識別方法、装置及びサーバ | |
KR101997371B1 (ko) | 신원 인증 방법 및 장치, 단말기 및 서버 | |
US9985963B2 (en) | Method and system for authenticating liveness face, and computer program product thereof | |
WO2020135115A1 (zh) | 近场信息认证的方法、装置、电子设备和计算机存储介质 | |
WO2020135081A1 (zh) | 基于动态栅格化管理的身份识别方法、装置及服务器 | |
CN103324909A (zh) | 面部特征检测 | |
CN206162736U (zh) | 一种基于人脸识别的门禁系统 | |
KR20190122206A (ko) | 신분 인증 방법 및 장치, 전자 기기, 컴퓨터 프로그램 및 저장 매체 | |
Rilvan et al. | Capacitive swipe gesture based smartphone user authentication and identification | |
WO2023019927A1 (zh) | 一种人脸识别方法、装置、存储介质及电子设备 | |
TWM566865U (zh) | 基於臉部辨識進行驗證的交易系統 | |
TWI687872B (zh) | 基於臉部辨識進行驗證的交易系統及其方法 | |
TW201942879A (zh) | 基於臉部辨識進行驗證的交易系統及其方法 | |
CN112560683A (zh) | 一种翻拍图像识别方法、装置、计算机设备及存储介质 | |
Priya et al. | A novel algorithm for secure Internet Banking with finger print recognition | |
CN113254910B (zh) | 用于无人车认证系统的用户便捷认证方法及装置 | |
TWI771819B (zh) | 認證系統、認證裝置、認證方法、及程式產品 | |
RU2791846C2 (ru) | Способ и устройство для принятия решения о выполнении операции на основе групп выражений лица и электронное устройство | |
US20220027866A1 (en) | Digital virtual currency issued by being matched with biometric authentication signal, and transaction method therefor | |
US11416594B2 (en) | Methods and systems for ensuring a user is permitted to use an object to conduct an activity | |
RU2815689C1 (ru) | Способ, терминал и система для биометрической идентификации | |
US20240086921A1 (en) | Payment terminal providing biometric authentication for certain credit card transactions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19903861 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021534727 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3125055 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217022196 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019903861 Country of ref document: EP Effective date: 20210726 |
|
ENP | Entry into the national phase |
Ref document number: 2019414473 Country of ref document: AU Date of ref document: 20191213 Kind code of ref document: A |