WO2020135096A1 - Operation determination method and device based on expression groups, and electronic device - Google Patents

Operation determination method and device based on expression groups, and electronic device

Info

Publication number
WO2020135096A1
WO2020135096A1 · PCT/CN2019/125062 · CN2019125062W
Authority
WO
WIPO (PCT)
Prior art keywords
face image
current
expression
instruction
facial
Prior art date
Application number
PCT/CN2019/125062
Other languages
English (en)
French (fr)
Inventor
简伟明
皮爱平
梁华贵
黄飞鹰
陈秋榕
Original Assignee
巽腾(广东)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 巽腾(广东)科技有限公司
Priority to US17/418,775 (US20220075996A1)
Priority to CN201980086703.1A (CN113366487A)
Priority to KR1020217022196 (KR20210101307A)
Priority to EP19903861.3A (EP3905102A4)
Priority to CA3125055 (CA3125055A1)
Priority to JP2021534727 (JP2022513978A)
Priority to AU2019414473 (AU2019414473A1)
Publication of WO2020135096A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q20/00 Payment architectures, schemes or protocols
            • G06Q20/38 Payment protocols; Details thereof
              • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
                • G06Q20/401 Transaction verification
                  • G06Q20/4014 Identity check for transactions
                    • G06Q20/40145 Biometric identity checks
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V40/168 Feature extraction; Face representation
                  • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
                • G06V40/172 Classification, e.g. identification
                • G06V40/174 Facial expression recognition
            • G06V40/40 Spoof detection, e.g. liveness detection
              • G06V40/45 Detection of the body part being alive
            • G06V40/50 Maintenance of biometric data or enrolment thereof
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of image processing technology, and in particular, to an operation determination method, device, and electronic device based on expression groups.
  • mobile phone transfer services and bank ATMs can provide users with services such as transfers and cash deposits and withdrawals according to user instructions.
  • a mobile phone transfer service or bank ATM determines the legitimacy of the user's identity based on the user ID and the password entered by the user, and then performs the operations corresponding to the various instructions issued by the user.
  • most of them need to confirm the user's identity and ensure that the user is a legitimate user before performing the operation corresponding to the instruction.
  • the existing operation determination methods are very simple: most of them use only digital/text passwords, passcodes, fingerprints, or faces to determine the user's identity before performing the operation corresponding to the instruction issued by the user.
  • the security and reliability of such simple password- or passcode-based approaches is low, and they are easily stolen by criminals.
  • as a result, the operation corresponding to an instruction issued by an impostor can bring losses to the legitimate user.
  • the purpose of the present application is to provide an operation determination method, device, and electronic device based on face recognition and expression groups, which can effectively improve the security and reliability of operations determined by an electronic device.
  • an embodiment of the present application provides an operation determination method based on expression groups.
  • the method includes: acquiring a current face image of a target object; performing live face recognition on the target object based on the current face image, and judging whether the identity of the target object is legal based on the recognition result, where live face recognition includes living-body recognition and face recognition; if the identity is legal, acquiring the current expression group of the current face image; determining the instruction to be executed corresponding to the current expression group; and executing the operation corresponding to the instruction to be executed.
  • performing live face recognition on the target object based on the current face image includes: performing living-body recognition on the current face image to determine whether the current face image information comes directly from a real living body; when it does, performing face recognition on the current face image to determine whether the current face image matches a pre-stored face image in the pre-stored face image list; and if so, confirming that the identity of the target object is legal.
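  • The two-stage check described above can be sketched as follows. This is a minimal illustration, not the publication's implementation: `detect_liveness` and `match_face` are hypothetical placeholders for the unspecified liveness and face-matching components, and images are represented as plain dictionaries.

```python
def detect_liveness(image) -> bool:
    # Placeholder for living-body recognition: reject photos/videos of a face.
    return bool(image.get("is_live", False))

def match_face(image, prestored_images) -> bool:
    # Placeholder for face recognition against the pre-stored face image list.
    return image.get("face_id") in {p["face_id"] for p in prestored_images}

def identity_is_legal(image, prestored_images) -> bool:
    # Living-body recognition first; only then compare against pre-stored faces.
    if not detect_liveness(image):
        return False
    return match_face(image, prestored_images)

prestored = [{"face_id": "user_a"}]
print(identity_is_legal({"face_id": "user_a", "is_live": True}, prestored))   # True
print(identity_is_legal({"face_id": "user_a", "is_live": False}, prestored))  # False
```

The ordering matters: liveness is checked first so that a stolen photo never reaches the matching stage.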
  • the embodiments of the present application provide a second possible implementation manner of the first aspect, wherein the step of acquiring the current expression group of the face image includes: determining the current expression group of the face image based on the current face image and the pre-stored face image list.
  • the embodiments of the present application provide a third possible implementation manner of the first aspect, wherein the step of determining the current expression group of the face image based on the current face image and the pre-stored face image list includes: acquiring a first expression feature model corresponding to the current face image; acquiring a second expression feature model corresponding to each pre-stored face image in the pre-stored face image list; comparing the first expression feature model with each second expression feature model to determine the similarity value between the current face image and each pre-stored face image; determining, based on the similarity values, the target face image corresponding to the current face image; acquiring the user account corresponding to the target face image; and determining the current expression group corresponding to the current face image according to the user account.
  • the embodiments of the present application provide a fourth possible implementation manner of the first aspect, wherein the steps of acquiring the first expression feature model corresponding to the current face image and acquiring the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list include: determining, from the current face image, a first position-coordinate set of multiple key facial feature points and using that coordinate set as the first expression feature model corresponding to the current face image; and determining a second position-coordinate set of the same key facial feature points for each pre-stored face image in the pre-stored face image list and using each second coordinate set as the second expression feature model corresponding to that pre-stored face image.
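  • A minimal sketch of this coordinate-set representation, assuming hypothetical key-point names and toy coordinates (the publication does not fix a specific key-point set):

```python
def expression_feature_model(keypoints):
    # Build an expression feature model as a {name: (x, y)} coordinate set,
    # which is all the "model" consists of in this representation.
    return {name: (float(x), float(y)) for name, (x, y) in keypoints.items()}

# First model: key points detected on the current face image (assumed values).
first_model = expression_feature_model({
    "left_eye_inner": (120, 88),
    "left_eye_outer": (98, 90),
    "mouth_upper_lip_top": (130, 148),
    "mouth_upper_lip_bottom": (130, 160),
})
print(len(first_model))  # 4 key points in this toy model
```

The same function would be applied to every pre-stored face image to obtain the second expression feature models.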
  • the embodiments of the present application provide a fifth possible implementation manner of the first aspect, wherein the steps of acquiring the first expression feature model corresponding to the current face image and acquiring the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list may instead include: inputting the current face image into an expression recognition neural network, so that the network determines the first expression feature model corresponding to the current face image; and inputting each face image in the pre-stored face image list into the expression recognition neural network, so that the network determines the second expression feature model corresponding to each pre-stored face image.
  • the embodiments of the present application provide a sixth possible implementation manner of the first aspect, wherein the step of determining the current expression group corresponding to the current face image according to the user account includes: searching a pre-established group database for the multiple expression groups corresponding to the user account; obtaining the expression group corresponding to the current face image; and determining that expression group as the current expression group.
  • the embodiments of the present application provide a seventh possible implementation manner of the first aspect, wherein the step of determining the instruction to be executed corresponding to the current expression group includes: looking up, in a pre-established instruction database, the instruction to be executed corresponding to the current expression group, where the instruction database stores the correspondence between expression groups and instructions to be executed, and each instruction to be executed corresponds to at least one expression group.
  • the embodiments of the present application provide an eighth possible implementation manner of the first aspect, wherein the instruction database includes at least a pass instruction, a payment instruction, and/or an alarm instruction. The alarm instruction includes at least one alarm instruction, each alarm instruction corresponds to one alarm mode, and different alarm instructions correspond to different expression groups; the payment instruction includes at least one payment instruction, each payment instruction corresponds to one payment amount, and different payment instructions correspond to different expression groups.
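  • As an illustration of such an instruction database, the mapping below pairs expression groups with pass, tiered payment, and alarm instructions. Every expression-group name, amount tier, and alarm mode here is invented for the example; the publication does not specify concrete entries.

```python
# Hypothetical instruction database: expression group -> (type, detail).
INSTRUCTION_DB = {
    "left_eye_open_right_eye_closed": ("pass", "authentication passed"),
    "eyes_closed_frown":              ("payment", "small amount"),
    "smile_wide":                     ("payment", "large amount"),
    "mouth_open_left_eye_closed":     ("alarm", "SMS alarm"),
    "raised_eyebrows":                ("alarm", "freeze fund account and alarm"),
}

def instruction_for(expression_group):
    # Look up the instruction to be executed for a given expression group.
    return INSTRUCTION_DB.get(expression_group)

print(instruction_for("eyes_closed_frown"))  # ('payment', 'small amount')
```

Note that each instruction type can appear several times with different details, matching the "one instruction per expression group" correspondence described above.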
  • the embodiments of the present application provide a ninth possible implementation manner of the first aspect, wherein the method further includes: when a user registers, acquiring the user's user account and collecting the user's pre-stored face images; determining the second expression feature model of each pre-stored face image, and storing the correspondence between the user account and the second expression feature models as well as the correspondence between the user account and the pre-stored face images; determining the expression group of each face image based on each second expression feature model; and storing the correspondence between the expression groups set by the user and the instructions to be executed.
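  • The registration flow above might be sketched as follows. The storage layout, function names, and the sample group/instruction names are assumptions made for illustration, not taken from the publication.

```python
def register_user(db, account, face_images, group_to_instruction):
    # face_images maps image id -> (expression_group, expression feature model).
    # Store, per account: the models, the group of each image, and the
    # group-to-instruction correspondences set by the user.
    db[account] = {
        "models": {img: model for img, (_, model) in face_images.items()},
        "groups": {img: group for img, (group, _) in face_images.items()},
        "instructions": dict(group_to_instruction),
    }

db = {}
register_user(
    db,
    "user_a",
    {"img1": ("eyes_closed_frown", {"left_eye_inner": (120, 88)})},
    {"eyes_closed_frown": "small-amount transfer"},
)
print(db["user_a"]["instructions"]["eyes_closed_frown"])  # small-amount transfer
```

Keeping all three correspondences under the account key mirrors the account-first lookup described in the sixth implementation manner.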
  • an embodiment of the present application further provides an operation determination device based on expression groups, which is executed by an electronic device, and the device includes: a facial image acquisition module configured to acquire a current facial image of a target object;
  • the living body recognition module is configured to determine whether the current face image is directly derived from a real living body;
  • the face recognition module is configured to perform face recognition on the target object based on the current face image, and determine whether the target object's identity is legal based on the recognition result;
  • the feature acquisition module is configured to acquire the current expression group of the current facial image when the recognition result of the face recognition module is legal;
  • the instruction determination module is configured to determine the instruction to be executed corresponding to the current expression group;
  • the operation execution module is configured to perform the operation corresponding to the instruction to be executed.
  • an embodiment of the present application provides an electronic device, including: an image acquisition device, a processor, and a storage device; the image acquisition device is configured to acquire image information; a computer program is stored on the storage device, and when the computer program is run by the processor, the method according to any one of the first aspect to the ninth possible implementation manner of the first aspect is executed.
  • an embodiment of the present application provides a chip on which a program is stored; when the program is executed by a processor, the method steps of any one of the foregoing first aspect to the ninth possible implementation manner of the first aspect are performed.
  • Embodiments of the present application provide an operation determination method, device, and electronic device based on expression groups, capable of acquiring a face image of a target object, performing live face recognition on the target object based on the face image, and then determining whether the identity of the target object is legal; if it is legal, the instruction to be executed corresponding to the acquired current expression characteristics of the target object is determined, and the operation corresponding to the instruction to be executed is then performed.
  • this method of determining instructions to be executed based on expression groups and performing the corresponding operations is more secure and reliable, and can effectively prevent criminals who steal a password from causing economic losses to legitimate users.
  • the use of living-face recognition technology continues the identity authentication function of face recognition, and the addition of user-defined expression actions ensures that the user will not display these actions in unconscious states such as work, sleep, or coma, which greatly protects the safety of the user's face information.
  • FIG. 1 shows a flowchart of a method for determining an operation based on an expression group provided by an embodiment of the present application
  • FIG. 2 shows a flowchart of another method for determining an operation based on an expression group provided by an embodiment of the present application
  • FIG. 3 shows a schematic structural diagram of a terminal device provided by an embodiment of the present application
  • FIG. 4 is a schematic structural diagram of an operation device based on expression groups provided by an embodiment of the present application.
  • FIG. 5 shows a schematic structural diagram of another operation device based on expression groups provided by an embodiment of the present application
  • FIG. 6 shows a schematic structural diagram of another operation device based on expression groups provided by an embodiment of the present application.
  • FIG. 7 shows a schematic structural diagram of another operation determination device based on expression groups provided by an embodiment of the present application.
  • FIG. 8 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • current face payment technology uses face recognition alone as the payment method, so a user's identity can be impersonated with photos or videos to perform payment transfers or some kind of authentication, which harms the user's interests. In addition, because an ordinary frontal facial pose is used as the means of payment, a user's face information can easily be stolen without the user's knowledge and used for payment transfers or some kind of authentication, which greatly harms the user's interests. Considering this, the operation confirmation methods of existing electronic devices have low security and reliability and are easily exploited by criminals.
  • the operation determination method, device, and electronic device based on expression groups provided in the embodiments of the present application can confirm that the user is a real person, and can confirm the different operation instructions preset by the user through the user's different expressions, thereby greatly improving the security and reliability of operations determined by the electronic device.
  • because living-face technology is used, the user must operate in person to be authenticated, which greatly protects the user's interests; and because a specified instruction can only be completed with the corresponding expression action, which users rarely display at work, during entertainment, in sleep, in a coma, while drunk, in daily life, or otherwise without their knowledge, misappropriation of face information can be effectively prevented.
  • the method may be performed by an electronic device, where the electronic device may be a camera, a live face camera, a bank ATM, a self-service terminal, a USB shield with a camera, a bank USB shield with a camera, a tax control panel with a camera, a mobile phone, a smart TV, a personal computer, a laptop, a tablet PC, a PC or IPC connected to a camera device, a PDA, a handheld device, a smart watch, smart glasses, a smart POS machine, a smart scanner, a smart robot, a smart car, a smart home device, a smart payment terminal, a smart TV with a camera, etc.
  • the method includes the following steps:
  • Step S102 Acquire the current face image of the target object.
  • image acquisition devices include camera devices such as cameras and live face cameras, as well as camera-equipped devices such as mobile phones, USB shields with cameras, and tax control panels with cameras.
  • Step S104 Perform live face recognition on the target object based on the current face image, and determine whether the identity of the target object is legal according to the recognition result.
  • in order to judge whether the identity of the target object is legal, live face recognition must be performed on the current face image; combining living-body recognition with face recognition further improves the accuracy and safety of this judgment. In a specific application, living-body recognition is first used to determine whether the current face image comes directly from a real living body, and face recognition technology is then used on the collected face image. Specifically, the current face image can be compared one by one with the pre-stored face images to determine whether it matches at least one pre-stored face image, thereby determining whether the identity information of the target object is legal.
  • the pre-stored face images may be the face image or face image set of a specified user, the face image sets of several users, or the face image sets of all users.
  • living body recognition may be performed to prevent others from misusing the user's face information through photographs and other items.
  • Step S106 if it is legal, obtain the current expression group of the current face image.
  • if the identity of the target object is legal, the current expression group of the current face image is further acquired so that the corresponding operation can be completed based on it.
  • the target face image corresponding to the current face image can be determined, and the current expression group corresponding to the current face image can then be obtained through the target face image.
  • a similarity threshold can be preset, and when the similarity value is greater than the preset similarity threshold, the target face image can be determined.
  • Step S108 Determine the instruction to be executed corresponding to the current expression group.
  • a pre-established instruction database may be searched for the instruction to be executed corresponding to the expression group; the instruction database stores the correspondence between expression groups and instructions to be executed, where the instructions to be executed include at least an authentication pass instruction, payment instructions, and/or alarm instructions.
  • the authentication pass instruction may be an identity authentication completion instruction, or an authority opening instruction of an electronic device, etc.
  • the payment instructions may include multiple payment instructions, each corresponding to a payment amount, with different payment instructions corresponding to different expression groups; payment limits can be divided into small, large, and super-large amounts, etc.
  • the alarm instructions include multiple alarm instructions, each corresponding to an alarm method, with different alarm instructions corresponding to different expression groups; alarm methods can include freezing the fund account and alarming, making a false transfer and alarming, and making a real transfer and alarming, etc.
  • the expression group of the target object can be determined based on the differences in key-point positions, and the expression group information can then be looked up in the pre-established instruction database to find the corresponding instruction to be executed.
  • Step S110 Perform an operation corresponding to the instruction to be executed.
  • the operation corresponding to the authentication-passed instruction is an authority opening operation.
  • the authority opening operation may include allowing the user to specify an interface and allowing the user to use specific functions of the electronic device, etc.;
  • when the instruction to be executed is a payment instruction, the corresponding operation can be a transaction operation that allows small-value transfers or small-value deposits and withdrawals;
  • if the pending instruction is a text-message alarm instruction, the corresponding operation can be that the electronic device sends a text-message warning to the associated terminal.
  • Embodiments of the present application provide an operation determination method based on expression groups.
  • the method can obtain a face image of a target object and perform face recognition on the target object based on the face image, thereby determining whether the identity of the target object is legal; if it is legal, the current expression group corresponding to the current face image is acquired, the instruction to be executed corresponding to that expression group is determined, and the operation corresponding to the instruction to be executed is performed.
  • this method of determining the instruction to be executed according to expression characteristics and executing the corresponding operation can better improve the security and reliability of operations determined by the electronic device, and effectively prevents criminals who steal a password from causing economic losses to legitimate users.
  • the electronic device can instruct user A to make different custom expressions, so as to collect, through the camera, face images of the different custom expressions presented by user A. User A can set the correspondence between expression characteristics and instructions to be executed: for example, the expression of opening the left eye and closing the right eye corresponds to the account-login instruction; the expression of closing both eyes and frowning corresponds to the small-amount transfer instruction; and the expression of opening the mouth and closing the left eye corresponds to the SMS alarm instruction.
  • the electronic device sets key points on face parts such as the outline of the face, the eyebrows, the eyes, the nose, and the mouth.
  • the number and positions of the key points should preferably reflect the expression characteristics of the user's face. For example, the eye feature points include at least the inner and outer corners of the eye, the upper and lower ends, and the center of the eyeball; the eyebrow feature points include at least the two ends and the middle of the eyebrow (three marked points); the nose feature points include at least the upper end, the left and right ends of the lower part, and the tip of the nose; and the mouth feature points include at least four points on the upper lip (upper, lower, left, and right). Through these expression features, the user's expression group can be determined.
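  • The key-point layout described above can be written down as a small table. The identifier names below are assumptions, but the point counts follow the description: five eye points, three eyebrow points, four nose points, and four upper-lip points.

```python
# Illustrative key-point layout; names are assumed, counts follow the text.
FACE_KEYPOINTS = {
    "eye":   ["inner_corner", "outer_corner", "upper_end", "lower_end",
              "eyeball_center"],
    "brow":  ["inner_end", "middle", "outer_end"],
    "nose":  ["upper_end", "lower_left_end", "lower_right_end", "tip"],
    "mouth": ["upper_lip_top", "upper_lip_bottom", "upper_lip_left",
              "upper_lip_right"],
}

total_points = sum(len(points) for points in FACE_KEYPOINTS.values())
print(total_points)  # 16
```

A richer layout (e.g. face-outline points) would simply extend this table; the coordinate sets of these named points form the expression feature models.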
  • the electronic device can record the instruction to be executed corresponding to the expression group set by the user, thereby establishing an instruction database and storing the correspondence between the expression group and the instruction to be executed.
  • the electronic device collects user A's current face image through the camera, compares the current face image with each pre-stored face image in the pre-stored face image list, determines the target face image corresponding to user A, and determines the expression group of the face image based on the target face image.
  • the instruction database can then be used to determine that user A has issued a text-message alarm instruction to be executed, so that the corresponding operation can be performed and the text-message alarm is sent to the associated terminal preset by user A.
  • determining the instruction to be executed through an expression group, rather than a single fixed expression image, reduces the influence of collection-condition factors on the determination of the instruction to be executed.
  • for example, the position of the collection device may be too high, too low, or offset to the left or right, which can result in images captured with the head down, head up, or turned to the right or left.
  • similarly, the mouth may open to different extents depending on how strongly the user opens it.
  • At least one facial image is collected and included in the same facial expression group to improve the accuracy of determining the instruction to be executed.
  • the method of determining the instruction to be executed through the expression group can prevent the criminals from stealing the account password of the legitimate user and manipulating the electronic device, thereby causing losses to the legitimate user.
  • Step S202 Acquire the current face image of the target object.
  • when the face image of the target object is collected by the camera, the camera of the image collection device should be within a preset distance interval from the target face; within this interval, the camera's image collection effect is better, which is more conducive to image acquisition.
  • Step S204 perform living body recognition on the target object based on the current face image, and determine whether the current face image information is directly derived from a real living body. If yes, go to step S206; if no, end.
  • Step S206 When the current face image information comes directly from a real living body, perform face recognition on the current face image to determine whether the current face image matches any pre-stored face image in the pre-stored face image list. If yes, go to step S208; if no, end.
  • reference face images may be stored in advance; after the face image of the target object is acquired, it is matched against each reference face image, and if a reference face image corresponding to the target object is matched, the identity of the target object can be determined to be legal.
  • Step S208 confirming that the identity of the target object is legal.
  • Step S210: compare the current facial image with the pre-stored facial image list to determine the current expression group of the current facial image.
  • Specifically, the first expression feature model of the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list can be obtained; the first expression feature model is then compared with each second expression feature model to obtain similarity values between the current face image and each pre-stored face image; the target face image corresponding to the current face image is determined based on the similarity values; and the user account corresponding to the target face image is obtained to determine the current expression group corresponding to the current face image. Through the current expression group, the instruction to be executed can be determined.
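The comparison flow just described can be sketched minimally as follows. The cosine-similarity measure, the 0.9 threshold, and the tuple layout of the pre-stored list are illustrative assumptions, not the implementation prescribed by the application:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two flattened expression feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_expression_group(current_model, prestored, threshold=0.9):
    """prestored: list of (feature_model, user_account, expression_group).
    Returns (user_account, expression_group) of the most similar pre-stored
    image, or None when no pre-stored image is similar enough."""
    best = max(prestored, key=lambda e: cosine_similarity(current_model, e[0]))
    if cosine_similarity(current_model, best[0]) < threshold:
        return None
    return best[1], best[2]
```

A preset similarity threshold, as described above, separates a genuine match from the best of a set of poor matches.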
  • Determining the expression group of the current face image effectively alleviates failures to confirm the instruction to be executed caused by differing collection angles of the current face image.
  • Suppose the target object needs to perform a payment operation: because the collection angle differs slightly each time, current face images of the same payment expression captured from various angles are all assigned to the current expression group corresponding to the payment operation. Increasing the number of pre-stored face images at different angles used to determine the current expression group can effectively improve its accuracy.
  • To facilitate understanding of the method provided by the above embodiment, an embodiment of the present application further provides a method for obtaining the first expression feature model corresponding to the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list, which includes the following steps:
  • The preset key points are preferably representative feature points of the face, for example: eye feature points, lip feature points, nose feature points, and eyebrow feature points; the number of feature points selected for each part can be set flexibly, as long as the selected points ultimately reflect the overall features of the face.
  • Through the first position coordinate set, the first expression feature model of the current face image can be determined. For example, the position coordinate information of the lip feature points in the current face image is read, the coordinate position information of the eye feature points is read, and the two together are determined as the first position coordinate set.
  • The above uses only lip feature points and eye feature points as an example; in practical applications, all preset key points of the face can be analyzed one by one.
  • The method above for determining the first position coordinate set may also be used to determine each second position coordinate set.
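Building a position coordinate set from key-point coordinates can be sketched as below. The dictionary layout of the landmarks and the part names are hypothetical; the application only requires that the chosen points' coordinates be collected into one set:

```python
def position_coordinate_set(landmarks, parts=("eye", "lip")):
    """Flatten the (x, y) coordinates of the chosen facial parts into one
    position coordinate set, used here as the expression feature model."""
    coords = []
    for part in parts:
        for x, y in landmarks[part]:
            coords.extend((x, y))
    return tuple(coords)
```

The same routine serves for both the first position coordinate set (current face image) and each second position coordinate set (pre-stored face images).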
  • expression features can also be trained and recognized through deep learning neural networks.
  • Specifically, the current face image is input into a pre-trained expression recognition neural network, which recognizes the first expression feature model corresponding to the current face image.
  • Likewise, each pre-stored face image in the pre-stored face image list needs to be input into the expression recognition neural network to obtain the second expression feature model corresponding to each pre-stored face image.
  • The neural network model is trained on training data to obtain the expression recognition neural network capable of recognizing the expression feature models described above. Recognizing expression feature models through deep learning can further improve the accuracy of determining them.
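At inference time, applying such a trained network amounts to scoring each expression group and taking the most probable one. The sketch below stands in for a real deep network with a single linear layer plus softmax; the weights and group names are toy assumptions:

```python
import math

def softmax(scores):
    # Convert raw scores into probabilities over the expression groups.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def recognize_expression(features, weights, groups):
    """One linear layer plus softmax standing in for a trained deep network:
    scores each expression group and returns the most probable one."""
    scores = [sum(w * x for w, x in zip(row, features)) for row in weights]
    probs = softmax(scores)
    return groups[probs.index(max(probs))]
```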
  • Step S212 Determine the instruction to be executed corresponding to the current expression group.
  • the instruction database stores the correspondence between expression groups and instructions to be executed.
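A minimal sketch of such an instruction database is a mapping from expression group to instruction. The group names and instruction codes here are illustrative, not values prescribed by the application:

```python
# Hypothetical expression-group -> instruction-to-be-executed mapping.
INSTRUCTION_DB = {
    "left_eye_open_right_eye_closed": "LOGIN",
    "eyes_closed_frown": "SMALL_PAYMENT",
    "mouth_open_left_eye_closed": "SMS_ALARM",
}

def instruction_for(expression_group):
    # Returns None when no instruction is registered for the group.
    return INSTRUCTION_DB.get(expression_group)
```

At registration time, the user-defined correspondences would be written into this mapping; at operation time, the determined current expression group is used as the lookup key.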
  • Step S214 an operation corresponding to the instruction to be executed is performed.
  • In the method above, the electronic device performs face recognition on the collected user face image, confirms in various ways the expression feature models corresponding to the current face image and the pre-stored face images, compares these feature models to determine the target face image corresponding to the current face image, then determines the current expression group of the target object and its corresponding instruction to be executed, and performs the operation corresponding to that instruction.
  • the terminal device may complete the above operation method based on the expression group.
  • an embodiment of the present application further provides a schematic structural diagram of the terminal device.
  • The terminal device may be a personal device such as a mobile phone or computer, or a chip.
  • The terminal device includes a camera, a face recognition module, a liveness recognition module, and an expression recognition module, as well as a database configured to store the user's reference face images and a specific face image list.
  • The user's current face image is collected through the camera; the face recognition module performs face recognition on it; the liveness recognition module performs liveness recognition on whether it comes directly from a real living body;
  • and the expression recognition module recognizes the expression features of the user's current face image.
  • The order of the above face recognition, liveness recognition and expression recognition is not limited; there can be multiple orderings, such as face recognition, liveness recognition and then expression recognition, or liveness recognition, expression recognition and then face recognition.
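Because the three checks are order-independent but each must pass, the pipeline can be sketched as an ordered list of named predicates. The names and string results are illustrative assumptions:

```python
def run_checks(image, checks):
    """checks: ordered (name, predicate) pairs; face, liveness and expression
    recognition may be arranged in any order. Stops at the first failure."""
    for name, predicate in checks:
        if not predicate(image):
            return "failed:" + name
    return "ok"
```

Reordering the `checks` list realizes any of the orderings described above without changing the pipeline code.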
  • the terminal device and the server can interact to complete the operation method based on the expression group.
  • the interaction process between the terminal device and the server is not specifically limited.
  • The embodiments of the present application provide examples of the interaction between the terminal device and the server. For example, in the schematic structural diagram of an operation apparatus based on expression groups shown in FIG. 4, the terminal device collects the user's current face image through the camera and sends it to the server, and the server completes face recognition, expression recognition or liveness recognition based on the database.
  • FIG. 5 is a schematic structural diagram of another operation apparatus based on expression groups, in which the terminal device collects the user's current face image through the camera and performs liveness recognition on it; when the recognition result is that the current face image comes directly from a real living body, the current face image is sent to the server, and the server completes face recognition and expression recognition based on the database.
  • FIG. 6 is a schematic structural diagram of another operation apparatus based on expression groups, in which the terminal device collects the user's current face image through the camera, performs liveness recognition on it, and performs initial recognition of the expression features; the expression recognition result and the current face image are then sent to the server, and the server completes face recognition based on the database and further determines, through expression recognition, the expression features corresponding to the current face image.
  • the terminal device may be a mobile phone, a computer, a self-service terminal or an ATM machine.
  • An embodiment of the present application provides an operation determination device based on expression groups.
  • the device includes the following parts:
  • the facial image acquisition module 702 is configured to acquire the current facial image of the target object.
  • the judgment module 704 is configured to perform live face recognition on the target object based on the current face image, and determine whether the identity of the target object is legal according to the recognition result.
  • the expression obtaining module 706 is configured to obtain the current expression characteristics of the target object when the judgment result of the judgment module is yes.
  • the instruction determining module 708 is configured to determine the instruction to be executed corresponding to the current facial expression feature.
  • the operation execution module 710 is configured to perform an operation corresponding to the instruction to be executed.
  • In this apparatus, the face image acquisition module acquires a face image of the target object, face recognition is performed on the target object based on that image, and the judgment module then determines whether the identity of the target object is legitimate. If legitimate, the expression acquisition module acquires the current expression group corresponding to the current face image, the instruction determination module determines the instruction to be executed corresponding to the acquired current expression group of the target object, and the operation execution module performs the operation corresponding to that instruction.
  • Determining the corresponding instruction to be executed from expression features and performing the corresponding operation can considerably improve the safety and reliability of operations determined by the electronic device, and effectively prevents criminals from stealing passwords and causing economic losses to legitimate users.
  • Face recognition technology is adopted: while the identity authentication function of face recognition is retained, user-defined expression actions ensure that the user will not display these actions in unconscious states such as work, sleep or coma, which greatly protects the security of the user's facial data.
  • an embodiment of the present application provides an electronic device.
  • The electronic device includes an image acquisition device 80, a processor 81, a storage device 82, and a bus 83; the image acquisition device 80 includes a camera; a computer program is stored on the storage device 82 and, when run by the processor, performs the method of any one of the foregoing embodiments.
  • the storage device 82 may include a high-speed random access memory (RAM, Random Access Memory), or may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the bus 83 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus can be divided into address bus, data bus and control bus. For ease of representation, only one bidirectional arrow is used in FIG. 8, but it does not mean that there is only one bus or one type of bus.
  • the storage device 82 is configured to store a program, and the processor 81 executes the program after receiving the execution instruction.
  • The method executed by the device defined by the flow disclosed in any of the embodiments of the present application may be applied to, or implemented by, the processor 81.
  • The processor 81 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 81 or by instructions in the form of software.
  • The aforementioned processor 81 may be a general-purpose processor, including a central processing unit (CPU) and a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware decoding processor, or may be executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium in the art, such as random access memory, flash memory or read-only memory, programmable read-only memory or electrically erasable programmable memory, and registers.
  • the storage medium is located in the storage device 82, and the processor 81 reads the information in the storage device 82 and completes the steps of the above method in combination with its hardware.
  • An embodiment of the present application further provides a chip that stores a program, where the program executes the steps of the method according to any one of the foregoing embodiments when the program is executed by a processor.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • The division into units is only a division by logical functions.
  • Multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The technical solution of the present application in essence, or the part contributing to the existing technology, or part of the technical solution, can be embodied in the form of a software product stored in a storage medium and including several instructions for enabling a computer device (which may be a personal computer, server, or network device) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

An operation determination method and apparatus based on expression groups, and an electronic device, relating to the technical field of image processing. The method is executed by an electronic device and includes: acquiring a current face image of a target object (S102); performing living-body face recognition on the target object based on the current face image, and judging from the recognition result whether the identity of the target object is legitimate (S104), where living-body face recognition includes liveness recognition and face recognition; if legitimate, acquiring the current expression group of the current face image (S106); determining the instruction to be executed corresponding to the current expression group (S108); and performing the operation corresponding to the instruction to be executed (S110). Face recognition technology is adopted, so that while the identity authentication function of face recognition is retained, user-defined expression actions ensure that the user will not display these actions in unconscious states such as work, sleep, or coma, protecting the security of the user's facial data and improving the safety and reliability of operations determined by the electronic device.

Description

Operation determination method and apparatus based on expression groups, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. CN201811617580.3, filed with the Chinese Patent Office on December 26, 2018 and entitled "Operation determination method and apparatus based on expression groups, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of image processing, and in particular to an operation determination method and apparatus based on expression groups, and an electronic device.
Background
With the development of technology, electronic devices can provide many business applications that meet user needs; for example, mobile-phone transfers and bank ATMs can provide services such as transfers and cash deposits and withdrawals according to user instructions. Normally, a mobile-phone transfer or a bank ATM determines the legitimacy of the user's identity from the user ID and the password entered by the user, and then follows the various instructions issued by the user and performs the corresponding operations. In the prior art, to guarantee the safety and reliability of operation execution, the operation corresponding to an instruction is mostly executed only after the user's identity has been confirmed and the user has been verified as legitimate. However, existing operation determination methods are very simple: most of them determine the user's identity merely by a numeric/text password, a passphrase, a fingerprint, or a face, and then execute the operation corresponding to the instruction issued by that user. The safety and reliability of simply using a password or passphrase remain low, and they are easily stolen by criminals, while fingerprints and faces are easily copied and attacked, so that the electronic device directly executes the operation corresponding to an instruction issued by a criminal, causing certain losses to the legitimate user.
Summary
In view of this, the purpose of this application is to provide an operation determination method and apparatus based on face recognition and expression groups, and an electronic device, which can effectively improve the safety and reliability of operations determined by an electronic device.
To achieve the above purpose, the technical solutions adopted by the embodiments of this application are as follows:
In a first aspect, an embodiment of this application provides an operation determination method based on expression groups, including: acquiring a current face image of a target object; performing living-body face recognition on the target object based on the current face image, and judging from the recognition result whether the identity of the target object is legitimate, where living-body face recognition includes liveness recognition and face recognition; if legitimate, acquiring the current expression group of the current face image; determining the instruction to be executed corresponding to the current expression group; and performing the operation corresponding to the instruction to be executed.
With reference to the first aspect, an embodiment of this application provides a first possible implementation of the first aspect, in which performing living-body face recognition on the target object based on the current face image includes: performing liveness recognition on the current face image to judge whether the current face image information comes directly from a real living body; when the current face image information comes directly from a real living body, performing face recognition on the current face image to judge whether the current face image matches any pre-stored face image in a pre-stored face image list; and if so, confirming that the identity of the target object is legitimate.
With reference to the first possible implementation of the first aspect, an embodiment of this application provides a second possible implementation of the first aspect, in which the step of acquiring the current expression group of the face image includes: determining the current expression group of the face image based on the current face image and the pre-stored face image list.
With reference to the second possible implementation of the first aspect, an embodiment of this application provides a third possible implementation of the first aspect, in which the step of determining the current expression group of the face image based on the current face image and the pre-stored face image list includes: obtaining a first expression feature model corresponding to the current face image, and obtaining a second expression feature model corresponding to each pre-stored face image in the pre-stored face image list; comparing the first expression feature model with each second expression feature model to determine similarity values between the current face image and each pre-stored face image; determining, according to the similarity values, the target face image corresponding to the current face image; obtaining the user account corresponding to the target face image; and determining, according to the user account, the current expression group corresponding to the current face image.
With reference to the third possible implementation of the first aspect, an embodiment of this application provides a fourth possible implementation of the first aspect, in which the step of obtaining the first expression feature model corresponding to the current face image and obtaining the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list includes: determining, according to the current face image, a first position coordinate set of multiple facial key feature points on the current face image; taking the first position coordinate set as the first expression feature model corresponding to the current face image; determining each second position coordinate set of the multiple facial key feature points of each pre-stored face image in the pre-stored face image list; and taking each second position coordinate set as the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
With reference to the third possible implementation of the first aspect, an embodiment of this application provides a fifth possible implementation of the first aspect, in which the step of obtaining the first expression feature model corresponding to the current face image and obtaining the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list further includes: inputting the current face image into an expression recognition neural network, so that the expression feature recognition network determines the first expression feature model corresponding to the current face image; and inputting each face image in the face image list into the expression recognition neural network, so that the expression recognition neural network determines the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
With reference to the third possible implementation of the first aspect, an embodiment of this application provides a sixth possible implementation of the first aspect, in which the step of determining, according to the user account, the current expression group corresponding to the current face image includes: searching a pre-established group database for multiple expression groups corresponding to the user account; obtaining the expression group corresponding to the current face image; and determining the expression group corresponding to the current face image as the current expression group.
With reference to the first aspect, an embodiment of this application provides a seventh possible implementation of the first aspect, in which the step of determining the instruction to be executed corresponding to the current expression group includes: searching a pre-established instruction database for the instruction to be executed corresponding to the current expression group, where the instruction database stores correspondences between expression groups and instructions to be executed, and each instruction to be executed corresponds to at least one expression group.
With reference to the seventh possible implementation of the first aspect, an embodiment of this application provides an eighth possible implementation of the first aspect, in which the instruction database includes at least a pass instruction, a payment instruction and/or an alarm instruction, where the alarm instruction includes at least one kind of alarm instruction, each kind of alarm instruction corresponds to one alarm method, and different kinds of alarm instructions correspond to different expression groups; and the payment instruction includes at least one kind of payment instruction, each kind of payment instruction corresponds to one payment limit, and different kinds of payment instructions correspond to different expression groups.
With reference to the first aspect, an embodiment of this application provides a ninth possible implementation of the first aspect, in which the method further includes: when a user registers, obtaining the user account of the user and collecting pre-stored face images of the user; determining the second expression feature models of the pre-stored face images, storing the correspondence between the user account and the second expression feature models, and storing the correspondence between the user account and the pre-stored face images; determining the expression group of each face image based on each second expression feature model; and storing the correspondences, set by the user, between the expression groups and the instructions to be executed.
In a second aspect, an embodiment of this application further provides an operation determination apparatus based on expression groups, executed by an electronic device, including: a face image acquisition module configured to acquire a current face image of a target object; a liveness recognition module configured to judge whether the current face image comes directly from a real living body; a face recognition module configured to perform face recognition on the target object based on the current face image and judge from the recognition result whether the identity of the target object is legitimate; an expression feature acquisition module configured to acquire the current expression group of the current face image when the recognition result of the face recognition module is that the identity is legitimate; an instruction determination module configured to determine the instruction to be executed corresponding to the current expression group; and an operation execution module configured to perform the operation corresponding to the instruction to be executed.
In a third aspect, an embodiment of this application provides an electronic device, including an image acquisition apparatus, a processor, and a storage apparatus, where the image acquisition apparatus is configured to acquire image information, and the storage apparatus stores a computer program which, when run by the processor, performs the method of any one of the first aspect to the ninth possible implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a chip storing a program which, when run by a processor, performs the steps of the method of any one of the first aspect to the ninth possible implementation of the first aspect.
The embodiments of this application provide an operation determination method and apparatus based on expression groups, and an electronic device, which can acquire a face image of a target object, perform living-body face recognition on the target object based on the face image, and then judge whether the identity of the target object is legitimate; if legitimate, the instruction to be executed corresponding to the acquired current expression feature of the target object is determined, and the operation corresponding to that instruction is performed. Compared with simple verification methods in the prior art that use only passwords and passphrases, this way of determining the instruction to be executed based on expression groups and performing the corresponding operation is safer and more reliable, and can effectively prevent criminals from stealing passwords and causing economic losses to legitimate users. In addition, face recognition technology is adopted: while the identity authentication function of face recognition is retained, user-defined expression actions ensure that the user will not display these actions in unconscious states such as work, sleep, or coma, greatly protecting the security of the user's facial data.
Other features and advantages of the present disclosure will be described in the following specification; alternatively, some features and advantages can be inferred from or unambiguously determined by the specification, or learned by implementing the above techniques of the present disclosure.
To make the above objectives, features and advantages of this application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the specific embodiments of this application or the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of this application, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 shows a flowchart of an operation determination method based on expression groups provided by an embodiment of this application;
FIG. 2 shows a flowchart of another operation determination method based on expression groups provided by an embodiment of this application;
FIG. 3 shows a schematic structural diagram of a terminal device provided by an embodiment of this application;
FIG. 4 shows a schematic structural diagram of an operation apparatus based on expression groups provided by an embodiment of this application;
FIG. 5 shows a schematic structural diagram of another operation apparatus based on expression groups provided by an embodiment of this application;
FIG. 6 shows a schematic structural diagram of another operation apparatus based on expression groups provided by an embodiment of this application;
FIG. 7 shows a schematic structural diagram of another operation determination apparatus based on expression groups provided by an embodiment of this application;
FIG. 8 shows a schematic structural diagram of an electronic device provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of this application clearer, the technical solutions of this application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Because current face payment technology uses face recognition as the payment means, a user's identity can be impersonated with photos, videos and other means to make payments, transfers or certain authentication actions, harming the user's interests. In addition, because an upright face pose is used as the means of payment, a user's stolen facial information can easily be used, without the user's awareness, for payments, transfers or certain authentication actions, greatly harming the user's interests. Considering that the safety and reliability of the instruction-operation confirmation methods of existing electronic devices are both low and easily exploited by criminals, to improve this problem, the operation determination method and apparatus based on expression groups and the electronic device provided by the embodiments of this application can confirm that the user is the real person and confirm, through the user's different expressions, the different operation instructions preset by the user, thereby considerably improving the safety and reliability of operations determined by the electronic device. In addition, because living-body face technology is used, authentication can pass only when operated by the user in person, which greatly protects the user's interests; and because an expression must be made to complete a specified instruction action, users rarely display these expressions while working, entertaining themselves, sleeping, in a coma, drunk, in daily life, or without their knowledge, so facial information can be effectively prevented from being misused. The embodiments of this application are described in detail below.
Referring to the flowchart of an operation determination method based on expression groups shown in FIG. 1, the method may be executed by an electronic device, where the electronic device may be a camera, a living-body face camera, a bank ATM, a self-service terminal, a USB security key with a camera, a bank USB key with a camera, a tax-control disk with a camera, a mobile phone, a smart TV, a personal computer, a laptop, a tablet, a personal computer connected to a camera device, an industrial computer connected to a camera device, a PDA, a handheld device, a smart wristwatch, smart glasses, a smart POS machine, a smart scanner, a smart robot, a smart car, a smart home, a smart payment terminal, a smart TV with a camera, and so on. The method includes the following steps:
Step S102: acquire a current face image of a target object.
Specifically, the face image of the target object is acquired by an image acquisition device, where image acquisition devices include camera devices such as cameras and living-body face cameras, and devices with cameras such as mobile phones, USB keys with cameras, and tax-control disks with cameras.
Step S104: perform living-body face recognition on the target object based on the current face image, and judge from the recognition result whether the identity of the target object is legitimate.
To judge whether the identity of the target object is legitimate, living-body face recognition must be performed on the current face image; combining liveness recognition with face recognition further improves the accuracy and safety of judging whether the identity is legitimate. In a specific application, liveness recognition is first used to judge whether the current face image comes directly from a real living body, and face recognition technology is then applied to the acquired face image: the current face image may be compared one by one with pre-stored face images to judge whether it matches at least one pre-stored face image, thereby judging whether the identity information of the target object is legitimate. The pre-stored face images may be the face image or face image set of a specified user, the face image sets of several users, or the face image sets of all users. Preferably, liveness recognition may be performed before face recognition to prevent others from misusing the user's facial information by means of photos or similar items.
Step S106: if legitimate, acquire the current expression group of the current face image.
When the identity of the target object is legitimate, the current expression group of the current face image needs to be further acquired so that the corresponding operation can be completed based on it. Specifically, the current face image may first be compared one by one with each pre-stored face image in the pre-stored face image list to obtain similarity values between the current face and each pre-stored face image; the target face image corresponding to the current face image is determined based on the similarity values, and the current expression group corresponding to the current face image can then be obtained through the target face image. A similarity threshold may be preset, and the target face image is determined when a similarity value is greater than the preset threshold.
Step S108: determine the instruction to be executed corresponding to the current expression group.
Specifically, the instruction to be executed corresponding to the expression group may be looked up in a pre-established instruction database that stores correspondences between expression groups and instructions to be executed, where the instructions to be executed include at least an authentication-pass instruction, a payment instruction and/or an alarm instruction. In practical applications, the authentication-pass instruction may be an identity-authentication-completed instruction or a permission-granting instruction of the electronic device. The payment instruction may include multiple kinds of payment instructions, each corresponding to one payment limit, with different kinds of payment instructions corresponding to different expression groups; the payment limits may be divided into small, large, extra-large, and so on. The alarm instruction includes multiple kinds of alarm instructions, each corresponding to one alarm method, with different kinds of alarm instructions corresponding to different expression groups; the alarm methods may be divided into freezing the funds account and raising an alarm, making a fake transfer and raising an alarm, making a real transfer and raising an alarm, and so on. By setting expression groups corresponding to alarm instructions, an alarm operation can be performed without alerting the unlawful person, effectively safeguarding the user's personal and property safety. The expression group of the target object can be determined from the correspondence of key-point position differences, and the instruction to be executed corresponding to the expression group can be found by feeding the expression group information into the pre-established instruction database.
Step S110: perform the operation corresponding to the instruction to be executed.
For example, when the instruction to be executed is an authentication-pass instruction, the corresponding operation is a permission-granting operation; specifically, the permission-granting operation may include allowing the user to access a specified interface, allowing the user to use specific functions of the electronic device, and so on. When the instruction to be executed is a payment instruction of a given limit, the corresponding operation may be permitting transactions such as transfers or deposits and withdrawals within that limit; when the instruction to be executed is an SMS alarm instruction, the corresponding operation may be the electronic device sending SMS alarm information to an associated terminal.
The embodiments of this application provide an operation determination method based on expression groups, which can acquire a face image of a target object, perform face recognition on the target object based on the face image, and judge whether the identity of the target object is legitimate; if legitimate, the current expression group corresponding to the current face image is acquired, the instruction to be executed corresponding to the acquired current expression group of the target object is determined, and the operation corresponding to that instruction is performed. Determining the corresponding instruction to be executed from expression features and performing the corresponding operation can considerably improve the safety and reliability of operations determined by the electronic device, and effectively prevents criminals from stealing passwords and causing economic losses to legitimate users. In addition, face recognition technology is adopted: while the identity authentication function of face recognition is retained, user-defined expression actions ensure that the user will not display these actions in unconscious states such as work, sleep, or coma, greatly protecting the security of the user's facial data.
For ease of understanding, a specific implementation is presented below:
(1) When user A registers, the electronic device may instruct user A to make different user-defined expressions, and thereby collect, via the camera, face images of the different user-defined expressions presented by user A. User A may set the correspondences between expression features and instructions to be executed; for example, the expression of opening the left eye and closing the right eye corresponds to the instruction for logging in to the account, the expression of closing both eyes and frowning corresponds to the instruction for a small-amount transfer, and the expression of opening the mouth and closing the left eye corresponds to the instruction for an SMS alarm. In a specific implementation, when collecting the user's face image, the electronic device sets key points on facial parts such as the outer contour of the face, eyebrows, eyes, nose, or mouth. Specifically, the number and positions of the key points preferably reflect the expression features of the user's face; for example, the eye feature points include at least marker points such as the inner and outer corners of the eyes, the upper and lower ends, and the center of the eyeball; the eyebrow feature points include at least three marker points at the two ends and the middle of the eyebrow; the nose feature points include at least marker points such as the upper end, the left and right sides of the lower part, and the protruding points of the nostril wings; and the mouth includes at least four marker points on the upper lip (top, bottom, left, right) and four on the lower lip. Through the above expression features, the user's expression group can be determined.
The electronic device may record the instructions to be executed that the user has set to correspond to expression groups, thereby establishing an instruction database and storing the correspondences between expression groups and instructions to be executed.
(2) When user A makes a specific expression in front of the image acquisition device, such as opening the left eye and closing the right eye, closing both eyes and frowning, or opening the mouth and closing the left eye, the electronic device collects user A's current face image through the camera, compares it with each pre-stored face image in the pre-stored face image list, determines the target face image corresponding to user A, and determines the expression group of the face image based on the target face image.
(3) The instruction to be executed corresponding to user A's current expression group is looked up in the pre-established instruction database, and the operation corresponding to the instruction is performed. For example, if user A's expression group is determined to be mouth open and left eye closed, it can be determined from the instruction database that user A has issued the SMS alarm instruction, and the corresponding operation can be performed: sending an SMS alarm to the associated terminal preset by user A.
In addition, considering that a face image is affected by many factors such as the collection angle, the lighting environment, and differences in facial muscle control, these influences produce different collection results for the same facial expression. Therefore, the same facial expression can be included in the same expression group, and the instruction to be executed can be determined through the expression group, reducing the influence of the above factors. For example, regarding the collection angle, the position of the collection device may be too high, too low, or offset to the left or right, producing capture effects such as a lowered head, a raised head, or a head turned right or left. Regarding differences in facial muscle control, when the user makes a mouth-opening expression, the size of the opening may vary with how forcefully the mouth is opened, and the collection angle further affects how the open mouth is captured. Therefore, for the same facial expression, at least one face image is collected and included in the same expression group, improving the accuracy of determining the instruction to be executed. Preferably, when assigning a face image to the corresponding expression group, it should first be judged whether the expression feature is similar to the facial expressions of the user's other expression groups, so as to prevent misjudging the expression group because of similar facial expressions, further improving the accuracy and safety of determining the instruction to be executed.
Determining the instruction to be executed through expression groups can prevent criminals from stealing a legitimate user's account password and manipulating the electronic device, which would cause losses to the legitimate user. Moreover, it is difficult for criminals to notice a legitimate user issuing instructions through expressions; for example, when a criminal coerces a legitimate user into making a transfer at an ATM, the legitimate user can make the alarm expression and covertly cause the ATM to send an SMS or raise an alarm through its back-end network, thereby protecting the user's property.
For ease of understanding, a specific implementation of another operation determination method based on expression groups provided by this embodiment is given below. Referring to the flowchart of another operation determination method based on expression groups shown in FIG. 2, the method includes the following steps:
Step S202: acquire a current face image of the target object.
In a specific implementation, the face image of the target object is collected by a camera, with the camera of the image acquisition device within a preset distance interval from the target face; within this interval, the camera's image collection quality is better, which facilitates image acquisition.
Step S204: perform liveness recognition on the target object based on the current face image, and judge whether the current face image information comes directly from a real living body. If yes, go to step S206; if no, end.
Performing liveness recognition further prevents others from impersonating the identity information of the target object.
Step S206: when the current face image information comes directly from a real living body, perform face recognition on the current face image, and judge whether the current face image matches any pre-stored face image in the pre-stored face image list. If yes, go to step S208; if no, end.
In one implementation, reference face images may be stored in advance; after the face image of the target object is acquired, it is matched against each reference face image, and if a reference face image corresponding to the target object is matched, the identity of the target object can be determined to be legitimate.
Step S208: confirm that the identity of the target object is legitimate.
Step S210: compare the current face image with the pre-stored face image list to determine the current expression group of the current face image.
Specifically, the first expression feature model of the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list may be obtained; the first expression feature model is then compared with each second expression feature model to obtain similarity values between the current face image and each pre-stored face image; the target face image corresponding to the current face image is determined based on the similarity values; and the user account corresponding to the target face image is then obtained to determine the current expression group corresponding to the current face image. The instruction to be executed can then be determined through the current expression group.
Determining the expression group corresponding to the current face image effectively alleviates failures to confirm the instruction to be executed caused by differing collection angles of the current face image. Suppose the target object needs to perform a payment operation: because the collection angle differs slightly each time, the method above for determining the current expression group assigns current face images of the same payment expression captured at various angles to the current expression group corresponding to the payment operation. Increasing the number of pre-stored face images at different angles used to determine the current expression group can effectively improve its accuracy.
To facilitate understanding of the method provided by the above embodiment, an embodiment of this application further provides a method for obtaining the first expression feature model corresponding to the current face image and the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list, including the following steps:
(1) Determine, according to the current face image, a first position coordinate set of multiple facial key feature points on the current face image.
First, multiple preset key points are determined and taken as the facial key feature points; the position coordinates of these points on the current face image are then determined to obtain the first position coordinate set.
Specifically, the preset key points are preferably representative feature points of the face, for example eye feature points, lip feature points, nose feature points, and eyebrow feature points; the number of feature points selected for each part can be set flexibly, as long as the selected points ultimately reflect the overall features of the face. Through the first position coordinate set, the first expression feature model of the current face image can be determined. For example, the position coordinate information of the lip feature points in the current face image is read, the coordinate position information of the eye feature points is read, and the two together are determined as the first position coordinate set. Of course, lip and eye feature points are only used as an example here; in practical applications, all preset key points of the face can be compared and analyzed one by one.
(2) Take the first position coordinate set as the first expression feature model corresponding to the current face image.
(3) Determine each second position coordinate set of the multiple facial key feature points of each pre-stored face image in the pre-stored face image list.
Specifically, each second position coordinate set may be determined using the method for determining the first position coordinate set described above.
(4) Take each second position coordinate set as the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
In addition, with the continual development of technologies such as deep learning neural networks, expression features can also be trained and recognized through a deep learning neural network. Specifically, the current face image is input into a pre-trained expression recognition neural network, which recognizes the first expression feature model corresponding to the current face image; likewise, each pre-stored face image in the pre-stored face image list is input into the expression recognition neural network to obtain the second expression feature model corresponding to each pre-stored face image. The neural network model is trained with training data to obtain the expression recognition neural network capable of recognizing the above expression feature models. Recognizing expression feature models through deep learning can further improve the accuracy of determining them.
Step S212: determine the instruction to be executed corresponding to the current expression group.
The instruction to be executed corresponding to the current expression group is looked up in the pre-established instruction database, which stores the correspondences between expression groups and instructions to be executed.
Step S214: perform the operation corresponding to the instruction to be executed.
In the above method proposed by the embodiments of this application, the electronic device performs face recognition on the collected user face image, confirms in various ways the expression feature models corresponding to the current face image and the pre-stored face images, compares the expression feature models to determine the target face image corresponding to the current face image, determines the current expression group of the target object and its corresponding instruction to be executed, and then performs the corresponding operation. Determining the user's current expression group from the current face image of the user's expression can considerably improve the safety and reliability of operations determined by the electronic device.
In one implementation, the above expression-group-based operation method may be completed by a terminal device. Specifically, an embodiment of this application further provides a schematic structural diagram of a terminal device, as shown in FIG. 3. The terminal device may be a personal device such as a mobile phone or computer, or a chip, and includes a camera, a face recognition module, a liveness recognition module, and an expression recognition module, as well as a database configured to store the user's reference face images and a specific face image list. The user's current face image is collected through the camera; the face recognition module performs face recognition on it; the liveness recognition module performs liveness recognition on whether it comes directly from a real living body; and the expression recognition module recognizes the expression features in the user's current face image. It is worth noting that the order of face recognition, liveness recognition and expression recognition is not limited; there can be multiple orderings, such as face recognition, liveness recognition and then expression recognition, or liveness recognition, expression recognition and then face recognition.
Alternatively, the terminal device and a server may interact to complete the expression-group-based operation method; the interaction process between them is not specifically limited. For ease of understanding, the embodiments of this application provide examples of this interaction. For example, in the schematic structural diagram of an expression-group-based operation apparatus shown in FIG. 4, the terminal device collects the user's current face image through the camera and sends it to the server, and the server completes face recognition, expression recognition or liveness recognition based on the database. In the schematic structural diagram of another expression-group-based operation apparatus shown in FIG. 5, the terminal device collects the user's current face image through the camera and performs liveness recognition on it; when the liveness recognition result is that the current face image comes directly from a real living body, the current face image is sent to the server, and the server completes face recognition and expression recognition based on the database. In the schematic structural diagram of another expression-group-based operation apparatus shown in FIG. 6, the terminal device collects the user's current face image through the camera, performs liveness recognition on it, and performs initial recognition of the user's expression features; the expression recognition result and the current face image are then sent to the server, and the server completes face recognition based on the database and further determines, through expression recognition, the expression features corresponding to the current face image. The terminal device may be a mobile phone, a computer, a self-service terminal, an ATM, or the like.
An embodiment of this application provides an operation determination apparatus based on expression groups. Referring to the structural block diagram of such an apparatus shown in FIG. 7, the apparatus includes the following parts:
a face image acquisition module 702, configured to acquire a current face image of a target object;
a judgment module 704, configured to perform living-body face recognition on the target object based on the current face image and judge from the recognition result whether the identity of the target object is legitimate;
an expression acquisition module 706, configured to acquire the current expression feature of the target object when the judgment result of the judgment module is yes;
an instruction determination module 708, configured to determine the instruction to be executed corresponding to the current expression feature;
an operation execution module 710, configured to perform the operation corresponding to the instruction to be executed.
In this apparatus, the face image acquisition module acquires a face image of the target object and face recognition is performed on the target object based on it; the judgment module then judges whether the identity of the target object is legitimate; if legitimate, the expression acquisition module acquires the current expression group corresponding to the current face image, and the instruction determination module determines the instruction to be executed corresponding to the acquired current expression group of the target object, so that the operation execution module performs the corresponding operation. Determining the corresponding instruction from expression features and performing the corresponding operation can considerably improve the safety and reliability of operations determined by the electronic device, and effectively prevents criminals from stealing passwords and causing economic losses to legitimate users. In addition, face recognition technology is adopted: while the identity authentication function of face recognition is retained, user-defined expression actions ensure that the user will not display these actions in unconscious states such as work, sleep, or coma, greatly protecting the security of the user's facial data. A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing embodiments, and is not repeated here.
An embodiment of this application provides an electronic device. Referring to the schematic structural diagram of an electronic device shown in FIG. 8, the electronic device includes an image acquisition apparatus 80, a processor 81, a storage apparatus 82, and a bus 83; the image acquisition apparatus 80 includes a camera; the storage apparatus 82 stores a computer program which, when run by the processor, performs the method of any one of the foregoing embodiments.
The storage apparatus 82 may include high-speed random access memory (RAM), and may also include non-volatile memory, such as at least one disk memory. The bus 83 may be an ISA bus, a PCI bus, an EISA bus, or the like; buses can be divided into address, data and control buses. For ease of representation, only one bidirectional arrow is used in FIG. 8, but this does not mean that there is only one bus or one type of bus.
The storage apparatus 82 is configured to store a program, and the processor 81 executes the program after receiving an execution instruction. The method executed by the apparatus defined by the flow disclosed in any of the foregoing embodiments of this application may be applied to, or implemented by, the processor 81.
The processor 81 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 81 or by instructions in the form of software. The processor 81 may be a general-purpose processor, including a central processing unit (CPU) and a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of this application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory or read-only memory, programmable read-only memory or electrically erasable programmable memory, or a register. The storage medium is located in the storage apparatus 82, and the processor 81 reads the information in the storage apparatus 82 and completes the steps of the above method in combination with its hardware.
An embodiment of this application further provides a chip storing a program, where the program, when run by a processor, performs the steps of the method of any one of the foregoing embodiments.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiments, and is not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and there may be other divisions in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disk.
In the description of this application, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, and do not indicate or imply that the referred apparatus or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore cannot be understood as limiting this application. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are only specific implementations of this application, used to illustrate rather than limit its technical solutions, and the protection scope of this application is not limited to them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed in this application, modify the technical solutions recorded in the foregoing embodiments, readily conceive of changes, or replace some of the technical features with equivalents; such modifications, changes or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (13)

  1. An operation determination method based on expression groups, characterized in that the method is executed by an electronic device and comprises:
    acquiring a current face image of a target object;
    performing living-body face recognition on the target object based on the current face image, and judging from the recognition result whether the identity of the target object is legitimate, wherein living-body face recognition comprises liveness recognition and face recognition;
    if legitimate, acquiring the current expression group of the current face image;
    determining the instruction to be executed corresponding to the current expression group;
    performing the operation corresponding to the instruction to be executed.
  2. The method according to claim 1, characterized in that performing living-body face recognition on the target object based on the current face image comprises:
    performing liveness recognition on the current face image to judge whether the current face image information comes directly from a real living body;
    when the current face image information comes directly from a real living body, performing face recognition on the current face image to judge whether the current face image matches any pre-stored face image in a pre-stored face image list;
    if so, confirming that the identity of the target object is legitimate.
  3. The method according to claim 2, characterized in that the step of acquiring the current expression group of the current face image comprises:
    determining the current expression group of the current face image based on the current face image and the pre-stored face image list.
  4. The method according to claim 3, characterized in that the step of determining the current expression group of the current face image based on the current face image and the pre-stored face image list comprises:
    obtaining a first expression feature model corresponding to the current face image, and obtaining a second expression feature model corresponding to each pre-stored face image in the pre-stored face image list;
    comparing the first expression feature model with each second expression feature model to determine similarity values between the current face image and each pre-stored face image;
    determining, according to the similarity values, the target face image corresponding to the current face image;
    obtaining the user account corresponding to the target face image;
    determining, according to the user account, the current expression group corresponding to the current face image.
  5. The method according to claim 4, characterized in that the step of obtaining the first expression feature model corresponding to the current face image and obtaining the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list comprises:
    determining, according to the current face image, a first position coordinate set of multiple facial key feature points on the current face image;
    taking the first position coordinate set as the first expression feature model corresponding to the current face image;
    determining each second position coordinate set of the multiple facial key feature points of each pre-stored face image in the pre-stored face image list;
    taking each second position coordinate set as the second expression feature model corresponding to each pre-stored face image in the pre-stored face image list.
  6. 根据权利要求4所述的方法,其特征在于,所述获取所述当前人脸图像对应的第一表情特征模型;并获取所述预存人脸图像列表中各预存人脸图像对应的第二表情特征模型的步骤,还包括:
    将所述当前人脸图像输入至表情识别神经网络,以使所述表情特征识别网络确定所述当前人脸图像对应的第一表情特征模型;
    将所述人脸图像列表中各人脸图像输入至所述表情识别神经网络,以使所述表情识别神经网路确定所述预存人脸图像列表中各预存人脸图像对应的第二表情特征模型。
  7. 根据权利要求4所述的方法,其特征在于,所述根据所述用户账号,确定所述当前人脸图像对应的当前表情组别的步骤,包括:
    在预先建立的组别数据库中查找与所述用户账号对应的多个表情组别;
    获取与所述当前人脸图像对应的表情组别;
    将与所述当前人脸图像对应的表情组别,确定为当前表情组别。
  8. The method according to claim 1, wherein the step of determining the instruction to be executed corresponding to the current expression group comprises:
    searching a pre-established instruction database for the instruction to be executed corresponding to the current expression group, wherein the instruction database stores correspondences between expression groups and instructions to be executed, and each instruction to be executed corresponds to at least one expression group.
  9. The method according to claim 8, wherein the instruction database comprises at least a pass instruction, a payment instruction and/or an alarm instruction, wherein:
    the alarm instruction comprises at least one type of alarm instruction, each type of alarm instruction corresponds to one alarm mode, and different types of alarm instructions correspond to different expression groups; and
    the payment instruction comprises at least one type of payment instruction, each type of payment instruction corresponds to one payment limit, and different types of payment instructions correspond to different expression groups.
  10. The method according to claim 1, wherein the method further comprises:
    when a user registers, acquiring a user account of the user and capturing pre-stored face images of the user;
    determining second expression feature models of the pre-stored face images, storing correspondences between the user account and the second expression feature models, and storing correspondences between the user account and the pre-stored face images;
    determining an expression group of each face image based on each second expression feature model; and
    storing correspondences, set by the user, between the expression groups and the instructions to be executed.
  11. An operation determination apparatus based on expression groups, wherein the apparatus runs on an electronic device and comprises:
    a face image acquisition module, configured to acquire a current face image of a target object;
    a liveness recognition module, configured to determine whether the current face image originates directly from a real living body;
    a face recognition module, configured to perform face recognition on the target object based on the current face image and to determine, according to a recognition result, whether the identity of the target object is legal;
    an expression feature acquisition module, configured to acquire a current expression group of the current face image when the recognition result of the face recognition module indicates that the identity is legal;
    an instruction determination module, configured to determine an instruction to be executed corresponding to the current expression group; and
    an operation execution module, configured to perform an operation corresponding to the instruction to be executed.
  12. An electronic device, comprising an image acquisition apparatus, a processor, and a storage apparatus, wherein:
    the image acquisition apparatus is configured to capture image information; and
    the storage apparatus stores a computer program which, when run by the processor, performs the method according to any one of claims 1 to 10.
  13. A chip storing a program, wherein the program, when run by a processor, performs the steps of the method according to any one of claims 1 to 10.
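For illustration only (this sketch is not part of the claims), the flow of claims 1, 4, 5, 7 and 8 can be outlined in Python as follows: a key-point position coordinate set serves as the expression feature model, models are compared to produce a similarity value, and the matched account's expression group is looked up in a pre-established instruction database. The inverse-mean-distance similarity measure, the 0.5 legality threshold, and all identifiers below are assumptions of this sketch, not the claimed implementation.

```python
import math

def expression_model(landmarks):
    """Claim 5 analogue: use the key-point position coordinate set itself as the model."""
    return list(landmarks)

def similarity(model_a, model_b):
    """Map mean landmark distance into (0, 1]; the claims leave the measure open."""
    mean_dist = sum(math.dist(a, b) for a, b in zip(model_a, model_b)) / len(model_a)
    return 1.0 / (1.0 + mean_dist)

def determine_operation(current_landmarks, prestored_models, group_db, instruction_db,
                        threshold=0.5):
    """Return the instruction for the current face image, or None if identity fails."""
    current = expression_model(current_landmarks)
    # Claim 4 analogue: pick the pre-stored face image (hence user account)
    # with the highest similarity value to the current image.
    account, score = max(
        ((acct, similarity(current, model)) for acct, model in prestored_models.items()),
        key=lambda pair: pair[1],
    )
    if score < threshold:  # identity not legal: no operation is performed
        return None
    # Claim 7 analogue: resolve the account's current expression group;
    # claim 8 analogue: look up the corresponding instruction to be executed.
    expression_group = group_db[account](current)
    return instruction_db.get(expression_group)
```

In the usage implied by claim 9, the instruction database of one account could map one expression group to a payment instruction with a given limit and another to an alarm instruction, so that different expressions of the same legal user trigger different operations.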
PCT/CN2019/125062 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device WO2020135096A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US17/418,775 US20220075996A1 (en) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device
CN201980086703.1A CN113366487A (zh) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device
KR1020217022196A KR20210101307A (ko) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device
EP19903861.3A EP3905102A4 (en) 2018-12-26 2019-12-13 METHOD AND DEVICE FOR DETERMINING OPERATION BASED ON FACE EXPRESSION GROUPS AND ELECTRONIC DEVICE
CA3125055A CA3125055A1 (en) 2018-12-26 2019-12-13 An operation determination method based on expression groups, apparatus and electronic device therefor
JP2021534727A JP2022513978A (ja) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device
AU2019414473A AU2019414473A1 (en) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811617580.3A CN109886697B (zh) 2018-12-26 2018-12-26 Method and device for determining operation based on facial expression groups, and electronic device
CN201811617580.3 2018-12-26

Publications (1)

Publication Number Publication Date
WO2020135096A1 true WO2020135096A1 (zh) 2020-07-02

Family

ID=66925260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/125062 WO2020135096A1 (zh) 2018-12-26 2019-12-13 Method and device for determining operation based on facial expression groups, and electronic device

Country Status (8)

Country Link
US (1) US20220075996A1 (zh)
EP (1) EP3905102A4 (zh)
JP (1) JP2022513978A (zh)
KR (1) KR20210101307A (zh)
CN (2) CN109886697B (zh)
AU (1) AU2019414473A1 (zh)
CA (1) CA3125055A1 (zh)
WO (1) WO2020135096A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697686A (zh) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method and apparatus, server, and storage medium
CN116453196A (zh) * 2023-04-22 2023-07-18 北京易知环宇文化传媒有限公司 Face recognition method and system

Families Citing this family (16)

Publication number Priority date Publication date Assignee Title
US10860841B2 (en) * 2016-12-29 2020-12-08 Samsung Electronics Co., Ltd. Facial expression image processing method and apparatus
CN109886697B (zh) * 2018-12-26 2023-09-08 巽腾(广东)科技有限公司 Operation determination method and apparatus based on expression groups, and electronic device
CN110795981A (zh) * 2019-07-01 2020-02-14 烟台宏远氧业股份有限公司 Face recognition interaction method and system for a hyperbaric oxygen chamber
CN110472488B (zh) * 2019-07-03 2024-05-10 平安科技(深圳)有限公司 Picture display method and apparatus based on data processing, and computer device
CN112242982A (zh) * 2019-07-19 2021-01-19 腾讯科技(深圳)有限公司 Image-based verification method, device, apparatus, and computer-readable storage medium
WO2021177183A1 (ja) * 2020-03-05 2021-09-10 日本電気株式会社 Monitoring apparatus, monitoring system, monitoring method, and program recording medium
CN111753750B (zh) * 2020-06-28 2024-03-08 中国银行股份有限公司 Liveness detection method and apparatus, storage medium, and electronic device
CN111931675B (zh) * 2020-08-18 2024-10-01 熵基科技股份有限公司 Duress alarm method, apparatus, device, and storage medium based on face recognition
CN113536262A (zh) * 2020-09-03 2021-10-22 腾讯科技(深圳)有限公司 Unlocking method and apparatus based on facial expressions, computer device, and storage medium
CN112906571B (zh) * 2021-02-20 2023-09-05 成都新希望金融信息有限公司 Liveness recognition method and apparatus, and electronic device
JPWO2023105586A1 (zh) * 2021-12-06 2023-06-15
CN114724256A (zh) * 2022-04-19 2022-07-08 盐城鸿石智能科技有限公司 Human-body sensing control system and method with image analysis
CN115514893B (zh) * 2022-09-20 2023-10-27 北京有竹居网络技术有限公司 Image uploading method and apparatus, readable storage medium, and electronic device
WO2024123218A1 (en) * 2022-12-05 2024-06-13 Telefonaktiebolaget Lm Ericsson (Publ) Two-factor facial recognition authentication
CN116109318B (zh) * 2023-03-28 2024-01-26 北京海上升科技有限公司 Blockchain-based interactive financial payment and big-data compressed storage method and system
CN117746477B (zh) * 2023-12-19 2024-06-21 景色智慧(北京)信息科技有限公司 Outdoor face recognition method and apparatus, electronic device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105528703A (zh) * 2015-12-26 2016-04-27 上海孩子国科教设备有限公司 Method and system for implementing payment confirmation operation by facial expression
CN108052811A (zh) * 2017-11-27 2018-05-18 北京传嘉科技有限公司 Terminal control method and system based on facial texture recognition
CN108363999A (zh) * 2018-03-22 2018-08-03 百度在线网络技术(北京)有限公司 Operation execution method and apparatus based on face recognition
CN108804884A (zh) * 2017-05-02 2018-11-13 北京旷视科技有限公司 Identity authentication method, apparatus, and computer storage medium
CN109886697A (zh) * 2018-12-26 2019-06-14 广州市巽腾信息科技有限公司 Operation determination method and apparatus based on expression groups, and electronic device

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
JP2007213378A (ja) * 2006-02-10 2007-08-23 Fujifilm Corp Specific facial expression detection method, imaging control method, apparatus, and program
JP4914398B2 (ja) * 2008-04-09 2012-04-11 キヤノン株式会社 Facial expression recognition apparatus, imaging apparatus, method, and program
EP2437213A1 (en) * 2009-06-16 2012-04-04 Intel Corporation Camera applications in a handheld device
JP5655491B2 (ja) * 2010-10-18 2015-01-21 トヨタ自動車株式会社 Eye-open state detection apparatus
WO2013008305A1 (ja) * 2011-07-11 2013-01-17 トヨタ自動車株式会社 Eyelid detection apparatus
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
US9032510B2 (en) * 2012-09-11 2015-05-12 Sony Corporation Gesture- and expression-based authentication
US9892413B2 (en) * 2013-09-05 2018-02-13 International Business Machines Corporation Multi factor authentication rule-based intelligent bank cards
JP6467965B2 (ja) * 2015-02-13 2019-02-13 オムロン株式会社 Emotion estimation apparatus and emotion estimation method
CN104636734A (zh) * 2015-02-28 2015-05-20 深圳市中兴移动通信有限公司 Terminal face recognition method and apparatus
US9619723B1 (en) * 2016-02-17 2017-04-11 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system of identification and authentication using facial expression
JP6747112B2 (ja) * 2016-07-08 2020-08-26 株式会社リコー Information processing system, image processing apparatus, information processing apparatus, and program
CN206271123U (zh) * 2016-12-22 2017-06-20 河南牧业经济学院 Payment apparatus based on facial recognition
CN107038413A (zh) * 2017-03-08 2017-08-11 合肥华凌股份有限公司 Recipe recommendation method and apparatus, and refrigerator
KR102324468B1 (ko) * 2017-03-28 2021-11-10 삼성전자주식회사 Apparatus and method for face authentication
CN107554483A (zh) * 2017-08-29 2018-01-09 湖北科技学院 Vehicle anti-theft system based on facial expression and action recognition
CN107665334A (zh) * 2017-09-11 2018-02-06 广东欧珀移动通信有限公司 Intelligent control method and apparatus based on facial expressions
CN108875633B (zh) * 2018-06-19 2022-02-08 北京旷视科技有限公司 Expression detection and expression driving method, apparatus, system, and storage medium

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114697686A (zh) * 2020-12-25 2022-07-01 北京达佳互联信息技术有限公司 Online interaction method and apparatus, server, and storage medium
CN114697686B (zh) * 2020-12-25 2023-11-21 北京达佳互联信息技术有限公司 Online interaction method and apparatus, server, and storage medium
CN116453196A (zh) * 2023-04-22 2023-07-18 北京易知环宇文化传媒有限公司 Face recognition method and system
CN116453196B (zh) * 2023-04-22 2023-11-17 深圳市中惠伟业科技有限公司 Face recognition method and system

Also Published As

Publication number Publication date
EP3905102A4 (en) 2022-09-14
CN109886697A (zh) 2019-06-14
CN113366487A (zh) 2021-09-07
KR20210101307A (ko) 2021-08-18
CN109886697B (zh) 2023-09-08
US20220075996A1 (en) 2022-03-10
EP3905102A1 (en) 2021-11-03
JP2022513978A (ja) 2022-02-09
CA3125055A1 (en) 2020-07-02
AU2019414473A1 (en) 2021-08-05

Similar Documents

Publication Publication Date Title
WO2020135096A1 (zh) Method and device for determining operation based on facial expression groups, and electronic device
KR102350507B1 (ko) Access control method, access control apparatus, system, and storage medium
JP6911154B2 (ja) Access control method and apparatus, system, electronic device, program, and medium
JP7279973B2 (ja) Identity identification method, apparatus, and server for designated-point approval
KR101997371B1 (ko) Identity authentication method and apparatus, terminal, and server
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
WO2020135115A1 (zh) Near-field information authentication method and apparatus, electronic device, and computer storage medium
WO2020135081A1 (zh) Identity recognition method and apparatus based on dynamic rasterization management, and server
CN103324909A (zh) Facial feature detection
CN206162736U (zh) Access control system based on face recognition
KR20190122206A (ko) Identity authentication method and apparatus, electronic device, computer program, and storage medium
Rilvan et al. Capacitive swipe gesture based smartphone user authentication and identification
WO2023019927A1 (zh) Face recognition method and apparatus, storage medium, and electronic device
TWM566865U (zh) Transaction system for verification based on facial recognition
TWI687872B (zh) Transaction system for verification based on facial recognition, and method thereof
TW201942879A (zh) Transaction system for verification based on facial recognition, and method thereof
CN112560683A (zh) Recaptured image recognition method and apparatus, computer device, and storage medium
Priya et al. A novel algorithm for secure Internet Banking with finger print recognition
CN113254910B (zh) Convenient user authentication method and apparatus for an unmanned-vehicle authentication system
TWI771819B (zh) Authentication system, authentication apparatus, authentication method, and program product
RU2791846C2 (ru) Method and device for determining operation based on facial expression groups, and electronic device
US20220027866A1 (en) Digital virtual currency issued by being matched with biometric authentication signal, and transaction method therefor
US11416594B2 (en) Methods and systems for ensuring a user is permitted to use an object to conduct an activity
RU2815689C1 (ru) Method, terminal, and system for biometric identification
US20240086921A1 (en) Payment terminal providing biometric authentication for certain credit card transactions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19903861

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021534727

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 3125055

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217022196

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019903861

Country of ref document: EP

Effective date: 20210726

ENP Entry into the national phase

Ref document number: 2019414473

Country of ref document: AU

Date of ref document: 20191213

Kind code of ref document: A