CN109359460B - Face recognition method and terminal equipment - Google Patents

Info

Publication number
CN109359460B
CN109359460B (application CN201811386658.5A)
Authority
CN
China
Prior art keywords
image
screen
camera
facial feature
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811386658.5A
Other languages
Chinese (zh)
Other versions
CN109359460A (en)
Inventor
彭义军
陈艺锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811386658.5A priority Critical patent/CN109359460B/en
Publication of CN109359460A publication Critical patent/CN109359460A/en
Application granted granted Critical
Publication of CN109359460B publication Critical patent/CN109359460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Vascular Medicine (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the present invention provides a face recognition method and a terminal device, relating to the field of terminal technology and aiming to solve the problem of low security in existing face recognition methods. The method comprises the following steps: acquiring a first image and a second image, where the first image and the second image are acquired by a first camera and a second camera, respectively, while the included angle between a first screen and a second screen is a first angle, the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen; and determining that face recognition is successful when a first facial feature in the first image and a second facial feature in the second image both conform to preset facial features.

Description

Face recognition method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to a face recognition method and terminal equipment.
Background
With the development of terminal technology, face recognition has become common in terminal devices; for example, a user may unlock the screen or make a payment using face recognition.
Generally, before using face recognition on a terminal device, the user enrolls his or her facial image in the device. When the terminal device later performs face recognition, it scans the user's face and compares the scanned image with the pre-enrolled facial image; if the facial features in the scanned image match the facial features in the pre-enrolled image, face recognition succeeds.
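The conventional enroll-and-compare flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: working directly on feature vectors, using cosine similarity as the comparison metric, and the 0.9 threshold are all assumptions.

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two facial feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def conventional_recognition(scanned_features, enrolled_features, threshold=0.9):
    """Single-camera recognition: succeed when the scanned features are
    sufficiently similar to the single pre-enrolled feature vector."""
    return cosine_similarity(scanned_features, enrolled_features) >= threshold
```

As the background notes, this flow has no defense against a photograph: a photo of the user produces features that match the enrolled image just as well as the live face does.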
However, with this face recognition method, recognition may also succeed if a photograph containing the user's facial features is placed in front of the camera; that is, the current face recognition method offers low security.
Disclosure of Invention
The embodiment of the present invention provides a face recognition method and a terminal device, aiming to solve the problem of low security in existing face recognition methods.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a face recognition method: a first image and a second image are acquired, where the first image and the second image are acquired by a first camera and a second camera, respectively, while the included angle between a first screen and a second screen is a first angle, the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen; and face recognition is determined to be successful when the first facial feature in the first image and the second facial feature in the second image conform to preset facial features.
In a second aspect, an embodiment of the present invention further provides a terminal device comprising an acquisition module and a determination module. The acquisition module is configured to acquire a first image and a second image, where the first image and the second image are acquired by a first camera and a second camera, respectively, while the included angle between a first screen and a second screen is a first angle, the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen. The determination module is configured to determine that face recognition is successful if the first facial feature in the first image and the second facial feature in the second image, both acquired by the acquisition module, conform to a preset facial feature.
In a third aspect, an embodiment of the present invention provides a terminal device including a processor, a memory, and a computer program stored in the memory and runnable on the processor; when executed by the processor, the computer program implements the steps of the face recognition method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the face recognition method according to the first aspect.
In the embodiment of the present invention, the terminal device first acquires the first image and the second image. Then, if the first facial feature in the first image and the second facial feature in the second image conform to the preset facial features, the terminal device determines that face recognition is successful. The first image and the second image are acquired by the first camera and the second camera, respectively, while the included angle between the first screen and the second screen of the terminal device is the first angle; the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen. When the first screen and the second screen form an included angle, the facial features of the user captured by the first camera differ from those captured by the second camera. That is, when the included angle between the first screen and the second screen is the first angle, the first facial feature in the first image collected by the first camera differs from the second facial feature in the second image collected by the second camera. Determining that face recognition is successful only when both the first facial feature and the second facial feature conform to the preset facial features improves security compared with the current face recognition method: for example, a photograph placed in front of a camera of the terminal device cannot pass face recognition.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an included angle between screens according to an embodiment of the present invention;
fig. 4 is a schematic diagram of image acquisition according to an embodiment of the present invention;
fig. 5 is a schematic diagram of state transition of a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a possible structure of a terminal device according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another possible terminal device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of another possible terminal device according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of a terminal device according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships are possible; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first image and the second image, etc. are for distinguishing different images, rather than for describing a particular order of the images.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" is not to be construed as more preferred or advantageous than other embodiments or designs; rather, use of these words is intended to present related concepts in a concrete fashion.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
The following describes a software environment to which the face recognition method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the face recognition method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the face recognition method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the face recognition method provided by the embodiment of the invention by running the software program in the android operating system.
The face recognition method according to the embodiment of the present invention will be described with reference to fig. 2. Fig. 2 is a schematic flow chart of a face recognition method according to an embodiment of the present invention, as shown in fig. 2, the face recognition method includes S201 and S202:
S201, the terminal device acquires a first image and a second image; the first image and the second image are collected by a first camera and a second camera, respectively, while the included angle between a first screen and a second screen is a first angle; the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen.
It should be noted that the plane of a screen includes both the screen itself and the extension plane of the screen.
Optionally, the first camera may be located at any position of the plane where the first screen is located, for example, when the first camera is located on the first screen, the first camera may be located at the top end of the first screen, may also be located at the bottom end of the first screen, and may be located in a middle area, a left area, or a right area of the top end (or the bottom end) of the first screen; of course, when the first camera is a telescopic camera (including a rotatable telescopic camera), the first camera may also be located on the extension plane of the first screen, which is not specifically limited in this embodiment of the present invention.
Similarly, the second camera may be located at any position of the plane where the second screen is located, for example, when the second camera is located on the second screen, the second camera may be located at the top end of the second screen, may also be located at the bottom end of the second screen, and may be located in a middle area, a left area, or a right area of the top end (or the bottom end) of the second screen; of course, when the second camera is a telescopic camera (including a rotatable telescopic camera), the second camera may also be located on the extension plane of the second screen, which is not specifically limited in this embodiment of the present invention.
Optionally, the first camera and the second camera may both be located at the top end of a screen, both at the bottom end, or one at the top end and the other at the bottom end.
Optionally, in this embodiment of the present invention, the first camera and the second camera may be fixed-position front-facing cameras of the terminal device, or cameras whose position can change (for example, a rotatable camera that serves as a front-facing camera when facing the screen and as a rear-facing camera when facing the rear cover of the terminal device).
In the embodiment of the present invention, the first camera and the second camera may also be 3D cameras (for example, TOF cameras) or 2D cameras, which is not limited in the embodiment of the present invention.
S202, when the first facial feature in the first image and the second facial feature in the second image conform to the preset facial features, the terminal device determines that face recognition is successful.
For example, fig. 3 is a schematic diagram of the angle between screens provided by an embodiment of the present invention. Suppose the first camera and the second camera are both on-screen cameras: as shown in (a) of fig. 3, the left screen 31 is the first screen, the right screen 32 is the second screen, the camera 33 on the plane of the left screen 31 is the first camera, the camera 34 on the plane of the right screen 32 is the second camera, and the included angle between the left screen 31 and the right screen 32 is θ. Now suppose the first camera and the second camera are both retractable cameras: as shown in (b) of fig. 3, the left screen 31a is the first screen, the right screen 32a is the second screen, the camera 33a on the extension plane of the left screen 31a is the first camera, the camera 34a on the extension plane of the right screen 32a is the second camera, and the included angle between the left screen 31a and the right screen 32a is θ.
It is understood that the specific value of the first angle is independent of whether face recognition succeeds: regardless of the first angle, the terminal device determines that face recognition is successful as long as the first facial feature in the first image and the second facial feature in the second image, both acquired at the first angle, conform to the preset facial features.
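The decision rule of S201 and S202 can be sketched as below. All names are illustrative assumptions, and `matches` stands in for whatever feature comparator the terminal device uses; the point is that both angle-dependent views must conform at once.

```python
def dual_camera_recognition(first_feature, second_feature,
                            first_target, second_target, matches):
    """Face recognition succeeds (S202) only if the feature from the
    first camera's image conforms to the first target AND the feature
    from the second camera's image conforms to the second target."""
    return (matches(first_feature, first_target)
            and matches(second_feature, second_target))
```

With an exact-match comparator, for instance, a flat photograph that presents the same features to both cameras fails whenever the two targets differ, which is the security gain the embodiment claims.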
In the face recognition method provided by the embodiment of the invention, the terminal device first acquires the first image and the second image. Then, if the first facial feature in the first image and the second facial feature in the second image conform to the preset facial features, the terminal device determines that face recognition is successful. The first image and the second image are acquired by the first camera and the second camera, respectively, while the included angle between the first screen and the second screen of the terminal device is the first angle; the first camera is located on the plane of the first screen, and the second camera is located on the plane of the second screen. When the first screen and the second screen form an included angle, the facial features of the user captured by the first camera differ from those captured by the second camera. That is, when the included angle between the first screen and the second screen is the first angle, the first facial feature in the first image collected by the first camera differs from the second facial feature in the second image collected by the second camera. Determining that face recognition is successful only when both the first facial feature and the second facial feature conform to the preset facial features improves security compared with the current face recognition method: for example, a photograph placed in front of a camera of the terminal device cannot pass face recognition.
Optionally, the preset facial features include a first target facial feature and a second target facial feature, the first target facial feature is a facial feature in a first preset image, and the second target facial feature is a facial feature in a second preset image.
Optionally, the first preset image and the second preset image are images in a first image group, and the first image group is one of a plurality of image groups stored in the terminal device. The images in each image group are acquired by the first camera and the second camera, respectively, with the included angle between the first screen and the second screen at a given angle, and the angle corresponding to each image group lies within a first preset angle range.
Specifically, each image group may include one image acquired by the first camera and one image acquired by the second camera; in this case the first preset image and the second preset image are the two images in the first image group. Alternatively, each image group may include two subgroups, each containing two or more images; the first preset image is then an image in the first subgroup of the first image group, and the second preset image is an image in the second subgroup.
It should be noted that, within the first preset angle range, both cameras of the terminal device can capture the user's complete facial features; outside this range (that is, when the included angle between the first screen and the second screen is smaller than the minimum or larger than the maximum of the first preset angle range), one or both cameras may be unable to capture the user's complete facial features.
For example, suppose the first preset angle range is set to [130°, 135°]. When the included angle between the first screen and the second screen is any angle in [130°, 135°], both cameras of the terminal device can capture the user's complete facial features; when the included angle is 129° or 136°, at least one of the two cameras cannot. Of course, other more suitable angle ranges may be adopted in practice, in which case, if 129° or 136° falls within the first preset angle range, both cameras can capture the user's complete facial features at that angle; the embodiment of the present invention is not specifically limited in this respect.
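A minimal sketch of the range test implied here, using the illustrative [130°, 135°] bounds from the example (the actual range is device-specific, so the constant below is an assumption):

```python
FIRST_PRESET_RANGE = (130.0, 135.0)  # illustrative bounds from the example


def angle_in_preset_range(theta, angle_range=FIRST_PRESET_RANGE):
    """True if, at screen angle theta (degrees), both cameras are
    assumed able to capture the user's complete facial features."""
    lo, hi = angle_range
    return lo <= theta <= hi
```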
Optionally, the first angle may be an angle within the first preset angle range, or an angle within a second preset angle range that includes the first preset angle range. That is, the face recognition method provided in the embodiment of the present invention may start executing once the user opens the foldable screen and the included angle between the first screen and the second screen reaches the second angle range. The minimum value of the first angle may be a preset value, namely the minimum included angle between the first screen and the second screen at which both the first camera and the second camera of the terminal device can capture a facial image.
Illustratively, the terminal device may enroll a plurality of image groups as follows. For convenience of explanation, assume each image group includes one third image and one fourth image, and that the included angle between the first screen and the second screen differs for each image group. When the included angle between the first screen and the second screen of the terminal device is θ, the third image in an image group is an image including the target user's facial features acquired by the first camera, and the fourth image in the same group is an image including the target user's facial features acquired by the second camera at the same angle. Specifically, the terminal device may acquire N image groups in landscape mode, the N image groups including N third images and N fourth images, with a different screen angle for each of the N groups; and the terminal device may acquire K image groups in portrait mode, where K = M - N, the K image groups including K third images and K fourth images, again with a different screen angle for each of the K groups.
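The enrollment procedure above might be organized as in this sketch; the class name and storage layout are assumptions for illustration only.

```python
class ImageGroupStore:
    """Hypothetical store of enrolled image groups, keyed by the screen
    angle at which each (third_image, fourth_image) pair was captured,
    so that each group corresponds to a distinct included angle."""

    def __init__(self):
        self.groups = {}  # angle -> (third_image, fourth_image)

    def enroll(self, angle, third_image, fourth_image):
        # One group per angle: re-enrolling an angle replaces the pair.
        self.groups[angle] = (third_image, fourth_image)

    def group_count(self):
        return len(self.groups)
```

Enrolling N groups in landscape mode and K groups in portrait mode then yields M = N + K groups in total.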
Fig. 4 is a schematic diagram of image acquisition according to an embodiment of the present invention. As shown in (a) of fig. 4, when the terminal device 40 is in the portrait state and the angle θ between the screen 41 and the screen 42 is within [130°, 135°], the user can enroll one or more image groups including facial features using the camera 43 and the camera 44 of the terminal device 40. As shown in (b) of fig. 4, when the terminal device 40 is in the landscape state and the angle between the screen 41 and the screen 42 is within [130°, 135°], the user can likewise enroll one or more image groups including facial features using the camera 43 and the camera 44 of the terminal device 40.
Optionally, the first preset image is one of the preset images acquired for the first screen, and the second preset image is one of the preset images acquired for the second screen. The terminal device may determine whether the first facial feature conforms to the first target facial feature according to whether the facial feature in the first image is the same as the facial feature in any one of the M third images, or whether their similarity is greater than a threshold. Similarly, the terminal device may determine whether the second facial feature conforms to the second target facial feature according to whether the facial feature in the second image is the same as the facial feature in any one of the M fourth images, or whether their similarity is greater than a threshold.
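The any-of-M comparison described here can be sketched as follows; `similar` is a placeholder for the sameness-or-threshold test, and all names are assumptions.

```python
def feature_conforms(feature, enrolled_features, similar):
    """A facial feature conforms to the target feature if it matches
    ANY of the M enrolled features for the corresponding camera (e.g.
    the features of the M third images for the first camera)."""
    return any(similar(feature, f) for f in enrolled_features)
```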
With this scheme, the terminal device enrolls a plurality of image groups, so that during face recognition it can compare against the facial features of the images in these groups; face recognition is determined to be successful as long as the first facial feature and the second facial feature conform to the preset facial features of one group of images, which improves the accuracy of face recognition.
In a possible implementation manner of the face recognition method provided in the embodiment of the present invention, S202 in the above embodiment may be specifically executed through S202a1 and S202a2:
S202a1, the terminal device determines whether the first facial feature conforms to the first target facial feature, and whether the second facial feature conforms to the second target facial feature.
Specifically, the terminal device may determine whether the first facial feature conforms to the first target facial feature by checking whether the two are the same, or whether their similarity is greater than a threshold. Similarly, the terminal device may determine whether the second facial feature conforms to the second target facial feature by checking whether the two are the same, or whether their similarity is greater than a threshold.
S202a2, when the first facial feature conforms to the first target facial feature and the second facial feature conforms to the second target facial feature, the terminal device determines that face recognition is successful.
With this scheme, the terminal device first acquires the first image and the second image. The terminal device then determines whether the first facial feature conforms to the first target facial feature and whether the second facial feature conforms to the second target facial feature; if both conform, the terminal device determines that face recognition is successful. When the first screen and the second screen form an included angle, the facial features of the user captured by the first camera differ from those captured by the second camera. That is, when the included angle between the first screen and the second screen is the first angle, the first facial feature in the first image collected by the first camera differs from the second facial feature in the second image collected by the second camera. Determining that face recognition is successful only when the first facial feature conforms to the first target facial feature and the second facial feature conforms to the second target facial feature improves security compared with the current face recognition method: for example, a photograph placed in front of a camera of the terminal device cannot pass face recognition.
In a possible implementation manner, before S201, the face recognition method provided in the embodiment of the present invention may further include S203:
S203, the terminal device receives a first input of a user.
Wherein the first input includes an input for determining payment or an input for controlling an angle value between the first screen and the second screen to be a first angle.
Optionally, the terminal device provided in the embodiment of the present invention may have a touch screen configured to receive an input from a user and, in response, display content corresponding to the input. When the first input is an input for determining payment, the first input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. A touch screen input is, for example, a press input, long-press input, slide input, click input, or hover input (input by a user near the touch screen) on the touch screen of the terminal device. A fingerprint input is, for example, a slide, long-press, single-click, or double-click on the fingerprint identifier of the terminal device. A gravity input is, for example, shaking the terminal device in a specific direction or a specific number of times. A key input is, for example, a single-click, double-click, long-press, or combination-key input on a key of the terminal device such as a power key, a volume key, or a Home key. When the first input is an input for controlling the included angle between the first screen and the second screen to be the first angle, the first input may be an input by which the user folds at least one of the two screens.
It should be noted that, in the embodiment of the present invention, the payment may be triggered by the user using the terminal device, or by another user sending a payment request to the user using the terminal device; this is not specifically limited in the embodiment of the present invention.
Further, S201 may specifically be executed by S201 a:
S201a, in response to the first input, the terminal device acquires a first image and a second image.
Illustratively, taking screen unlocking as an example, for a terminal device with a folding screen, when the user opens the folding screen and the included angle between the first screen and the second screen reaches the first angle, the terminal device can be unlocked directly once facial recognition is determined to be successful according to the acquired first image and second image. This reduces the operation steps for unlocking the screen and improves the security of unlocking the screen of the terminal device; moreover, because the screen is unlocked in the course of opening the folding screen, the convenience and interest of unlocking are also increased, improving the user experience.
Fig. 5 is a schematic diagram illustrating a state change of a terminal device according to an embodiment of the present invention. For convenience of description, the terminal device 50 with a foldable screen shown in fig. 5 is taken as an example. As shown in (a) of fig. 5, the included angle between the screen 51 and the screen 52 of the terminal device 50 is 0°; as shown in (b) of fig. 5, the included angle is within the range (0°, 180°); as shown in (c) of fig. 5, the included angle is 180°. In the process in which the user opens the folding screen from the state shown in (a), through the state shown in (b), to the state shown in (c), the face recognition method provided by the embodiment of the present invention may be performed to unlock the screen.
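The fold-to-unlock flow around Fig. 5 could be sketched as follows; the angle-sensor callback, the trigger range, and every name here are hypothetical stand-ins, not APIs from this document:

```python
# Hypothetical sketch: run dual-image recognition once while the folding
# screen is being opened through a preset angle range, and unlock on success.

FIRST_ANGLE_RANGE = (30.0, 150.0)  # assumed preset trigger range in degrees

class FoldUnlockController:
    def __init__(self, recognize):
        # recognize: callable () -> bool, performs the dual-camera
        # face recognition described in S201/S202.
        self.recognize = recognize
        self.unlocked = False

    def on_angle_changed(self, angle_deg):
        """Called by a hypothetical hinge-angle sensor on each reading."""
        lo, hi = FIRST_ANGLE_RANGE
        if not self.unlocked and lo <= angle_deg <= hi:
            if self.recognize():
                self.unlocked = True
```

The controller fires at most once per unlock, so recognition runs while the user is still opening the device rather than as a separate step afterwards.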
It can be understood that, when the user needs to pay, if the user selects the face recognition for payment, the terminal device may continue the payment process after the face recognition is successful.
Based on this scheme, after receiving the user's input for determining payment, or the input by which the user controls the included angle between the first screen and the second screen to be the first angle, the terminal device can acquire the first image and the second image and then perform the facial recognition process of this embodiment.
In a possible implementation manner, before S201, the face recognition method provided in the embodiment of the present invention may further include S204:
S204, if facial images are detected in both the first screen and the second screen, the terminal device starts the first camera and the second camera.
Specifically, the terminal device may identify whether a face image exists through infrared rays, or may determine that the face image is detected in both the first screen and the second screen through other manners, which is not specifically limited in this embodiment of the present invention.
It should be noted that S204 may also be executed after S203 and before S201, which is not specifically limited in this embodiment of the present invention.
Further, S201 may specifically be executed by S201 b:
S201b, the terminal device controls the first camera to acquire the first image and controls the second camera to acquire the second image.
Optionally, if two cameras of the terminal device are already turned on, in the face recognition method provided in the embodiment of the present invention, S201 may further execute, through S201 c:
S201c, if facial images are detected in both the first screen and the second screen, the terminal device controls the first camera to acquire the first image and controls the second camera to acquire the second image.
Based on this scheme, the terminal device determines whether facial images are detected in both the first screen and the second screen. When facial images are detected in both screens, if at least one of the first camera and the second camera is not yet started, the terminal device starts it; once both cameras are started, the terminal device controls the first camera to acquire the first image and the second camera to acquire the second image. Because the terminal device does not keep the two cameras on at all times, but turns them on only when facial images are detected in both screens, resource waste of the terminal device can be reduced.
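The lazy camera-start logic of S204/S201b/S201c can be sketched as follows; the `Camera` object and the face-detection flags are hypothetical stand-ins for the device's real driver interfaces:

```python
# Sketch of S204 + S201b/S201c: start cameras only when faces are detected
# facing both screens, then capture one image per camera.

class Camera:
    """Hypothetical minimal camera driver."""
    def __init__(self):
        self.started = False

    def start(self):
        self.started = True

    def capture(self):
        if not self.started:
            raise RuntimeError("camera not started")
        return "image"  # placeholder frame

def acquire_images(cam1, cam2, face_on_screen1, face_on_screen2):
    """Return (first_image, second_image), or None if faces are not
    detected in both screens."""
    if not (face_on_screen1 and face_on_screen2):
        return None                      # keep cameras off: saves resources
    for cam in (cam1, cam2):             # S204: start any camera not yet on
        if not cam.started:
            cam.start()
    return cam1.capture(), cam2.capture()  # S201b / S201c
```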
Fig. 6 is a schematic diagram of a possible structure of a terminal device according to an embodiment of the present invention, and as shown in fig. 6, the terminal device 600 includes: an acquisition module 601 and a determination module 602; the acquisition module 601 is configured to acquire a first image and a second image, where the first image and the second image are images acquired by a first camera and a second camera respectively when an included angle between a first screen and a second screen is a first angle, the first camera is located on a plane where the first screen is located, and the second camera is located on a plane where the second screen is located; a determining module 602, configured to determine that the face recognition is successful if the first facial feature in the first image acquired by the acquiring module 601 and the second facial feature in the second image acquired by the acquiring module 601 match a preset facial feature.
Optionally, the preset facial features include a first target facial feature and a second target facial feature, the first target facial feature is a facial feature in a first preset image, and the second target facial feature is a facial feature in a second preset image; the determining module 602 is specifically configured to determine that the face recognition is successful when the first facial feature matches the first target facial feature and the second facial feature matches the second target facial feature.
Optionally, the first preset image and the second preset image are images in a first image group, the first image group is one of a plurality of image groups in the terminal device, the image in each image group is an image acquired by the first camera and the second camera respectively under the condition that an included angle between the first screen and the second screen is an angle, and the included angle corresponding to each image group is an angle within a first preset angle range.
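The per-angle image groups described above amount to a lookup keyed by angle range; the ranges and preset image names in this sketch are illustrative assumptions, not values from this document:

```python
# Sketch of selecting the preset image pair by fold angle. Each group holds
# the pair of enrollment images captured at a given included angle, and the
# first angle selects the group whose preset range contains it.

IMAGE_GROUPS = [
    # (angle_range_deg, (first_preset_image, second_preset_image))
    ((0, 60),    ("preset_a1", "preset_a2")),
    ((60, 120),  ("preset_b1", "preset_b2")),
    ((120, 180), ("preset_c1", "preset_c2")),
]

def select_image_group(first_angle):
    """Pick the preset image pair whose angle range contains first_angle,
    or None if no group covers it."""
    for (lo, hi), presets in IMAGE_GROUPS:
        if lo <= first_angle < hi:
            return presets
    return None
```

The selected pair then supplies the first and second target facial features against which the two captured images are matched.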
Optionally, with reference to fig. 6, as shown in fig. 7, the terminal device 600 further includes a receiving module 603; the receiving module 603 is configured to receive a first input from the user before the obtaining module 601 obtains the first image and the second image; the obtaining module 601 is specifically configured to obtain a first image and a second image in response to the first input received by the receiving module 603; wherein the first input includes an input for determining payment or an input for controlling an angle value between the first screen and the second screen to be a first angle.
Optionally, with reference to fig. 6, as shown in fig. 8, the terminal device 600 further includes a starting module 604; a starting module 604, configured to start the first camera and the second camera if facial images are detected in both the first screen and the second screen before the obtaining module 601 obtains the first image and the second image; the obtaining module 601 is specifically configured to: and controlling the first camera to acquire a first image and controlling the second camera to acquire a second image.
The terminal device 600 provided by the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments, and for avoiding repetition, details are not described here again.
According to the terminal device provided by the embodiment of the present invention, the terminal device may first acquire the first image and the second image. Then, in a case where the first facial feature in the first image and the second facial feature in the second image conform to the preset facial features, the terminal device may determine that the facial recognition is successful. The first image and the second image are images collected by the first camera and the second camera respectively under the condition that the included angle between the first screen and the second screen of the terminal device is the first angle, the first camera being located on the plane where the first screen is located and the second camera on the plane where the second screen is located. When the included angle between the first screen and the second screen is the first angle, the facial features of the user in the images collected by the two cameras are different; that is, the first facial feature in the first image collected by the first camera differs from the second facial feature in the second image collected by the second camera. Determining that facial recognition succeeds only when both the first facial feature and the second facial feature conform to the preset facial features therefore improves the security of facial recognition compared with current approaches; for example, placing a photo in front of a camera of the terminal device cannot make the facial recognition succeed.
Fig. 9 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 9 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to obtain a first image and a second image, where the first image and the second image are images acquired by a first camera and a second camera respectively when an included angle between a first screen and a second screen is a first angle, the first camera is located on a plane where the first screen is located, and the second camera is located on a plane where the second screen is located; and determining that the face recognition is successful in the case that the first face feature in the first image and the second face feature in the second image conform to the preset face features.
According to the terminal device provided by the embodiment of the present invention, the terminal device may first acquire the first image and the second image. Then, in a case where the first facial feature in the first image and the second facial feature in the second image conform to the preset facial features, the terminal device may determine that the facial recognition is successful. The first image and the second image are images collected by the first camera and the second camera respectively under the condition that the included angle between the first screen and the second screen of the terminal device is the first angle, the first camera being located on the plane where the first screen is located and the second camera on the plane where the second screen is located. When the included angle between the first screen and the second screen is the first angle, the facial features of the user in the images collected by the two cameras are different; that is, the first facial feature in the first image collected by the first camera differs from the second facial feature in the second image collected by the second camera. Determining that facial recognition succeeds only when both the first facial feature and the second facial feature conform to the preset facial features therefore improves the security of facial recognition compared with current approaches; for example, placing a photo in front of a camera of the terminal device cannot make the facial recognition succeed.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
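One way the included angle between the two screens could be obtained from motion sensors like those listed above is to compare gravity vectors measured behind each screen; this is a purely illustrative sketch, not a mechanism stated in this document, and the mapping from the vector angle to the physical fold angle depends on sensor mounting:

```python
import math

# Hypothetical sketch: angle in degrees between two 3-axis gravity readings,
# one accelerometer per screen half. A caller would map this to the fold
# angle according to how the sensors are mounted.

def fold_angle_deg(gravity_a, gravity_b):
    """Angle between two 3-axis gravity vectors, in degrees."""
    dot = sum(x * y for x, y in zip(gravity_a, gravity_b))
    na = math.sqrt(sum(x * x for x in gravity_a))
    nb = math.sqrt(sum(x * x for x in gravity_b))
    cos_t = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for float safety
    return math.degrees(math.acos(cos_t))
```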
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 9, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which, with reference to fig. 9, includes a processor 110, a memory 109, and a computer program that is stored in the memory 109 and is executable on the processor 110, and when the computer program is executed by the processor 110, the terminal device implements each process of the foregoing facial recognition method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned face recognition method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A face recognition method applied to a terminal device having a first screen and a second screen, the method comprising:
acquiring a first image and a second image, wherein the first image and the second image are images respectively acquired by a first camera and a second camera under the condition that an included angle between the first screen and the second screen is a first angle, the first camera is positioned on the plane where the first screen is positioned, and the second camera is positioned on the plane where the second screen is positioned;
determining that face recognition is successful if a first facial feature in the first image matches a facial feature in a first preset image and a second facial feature in the second image matches a facial feature in a second preset image;
the first preset image and the second preset image are images in a first image group, the first image group is one of a plurality of image groups in the terminal equipment, the image in each image group is an image which is acquired by the first camera and the second camera respectively under the condition that an included angle between the first screen and the second screen is an angle, the included angle corresponding to each image group is an angle within a first preset angle range, and the first angle is an angle within the first preset angle range;
prior to acquiring the first image and the second image, the method further comprises:
receiving a first input of a user;
the acquiring the first image and the second image includes:
acquiring the first image and the second image in response to the first input;
and the first input is input when the included angle value between the first screen and the second screen is controlled to be the first angle.
2. The method according to claim 1, wherein a first target facial feature is a facial feature in the first preset image and a second target facial feature is a facial feature in the second preset image;
determining that facial recognition is successful if a first facial feature in the first image corresponds to a facial feature in a first preset image and a second facial feature in the second image corresponds to a facial feature in a second preset image, comprising:
determining that facial recognition is successful if the first facial feature matches the first target facial feature and the second facial feature matches the second target facial feature.
3. The method of claim 1, wherein prior to acquiring the first image and the second image, the method further comprises:
if facial images are detected in both the first screen and the second screen, starting the first camera and the second camera;
the acquiring the first image and the second image includes:
and controlling the first camera to acquire the first image, and controlling the second camera to acquire the second image.
4. A terminal device is provided with a first screen and a second screen and is characterized by comprising an acquisition module, a receiving module and a determination module;
the acquisition module is used for acquiring a first image and a second image, wherein the first image and the second image are images acquired by a first camera and a second camera respectively under the condition that an included angle between a first screen and a second screen is a first angle, the first camera is positioned on the plane where the first screen is positioned, and the second camera is positioned on the plane where the second screen is positioned;
the determining module is configured to determine that the facial recognition is successful when the first facial feature in the first image acquired by the acquiring module matches the facial feature in a first preset image and the second facial feature in the second image acquired by the acquiring module matches the facial feature in a second preset image;
the first preset image and the second preset image are images in a first image group, the first image group is one of a plurality of image groups in the terminal equipment, the image in each image group is an image which is acquired by the first camera and the second camera respectively under the condition that an included angle between the first screen and the second screen is an angle, the included angle corresponding to each image group is an angle within a first preset angle range, and the first angle is an angle within the first preset angle range;
the receiving module is used for receiving a first input of a user before the acquiring module acquires the first image and the second image;
the obtaining module is specifically configured to obtain the first image and the second image in response to the first input received by the receiving module;
and the first input is input when the included angle value between the first screen and the second screen is controlled to be the first angle.
5. The terminal device according to claim 4, wherein a first target facial feature is a facial feature in the first preset image, and a second target facial feature is a facial feature in the second preset image;
the determining module is specifically configured to determine that the face recognition is successful if the first facial feature matches the first target facial feature and the second facial feature matches the second target facial feature.
6. The terminal device of claim 4, wherein the terminal device further comprises a start module;
the starting module is configured to start the first camera and the second camera if a facial image is detected on both the first screen and the second screen before the acquisition module acquires the first image and the second image;
the acquisition module is specifically configured to: and controlling the first camera to acquire the first image, and controlling the second camera to acquire the second image.
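For illustration only (not part of the claims), the start-then-capture flow of claim 6 could be sketched as below; the screen and camera objects and their methods are hypothetical stand-ins for platform APIs the patent does not name:

```python
class DualScreenCapture:
    """Sketch of claim 6: start both cameras only after a facial image
    is detected on both screens, then capture one image per camera."""

    def __init__(self, screens, cameras):
        self.first_screen, self.second_screen = screens
        self.first_camera, self.second_camera = cameras

    def maybe_capture(self):
        # Gate: require a facial image on BOTH screens before starting.
        if not (self.first_screen.face_detected() and
                self.second_screen.face_detected()):
            return None
        self.first_camera.start()
        self.second_camera.start()
        # Capture one image per camera while the hinge is held at the
        # first angle; the pair feeds the matching step of claim 4.
        return (self.first_camera.capture(), self.second_camera.capture())
```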
7. A terminal device, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the face recognition method according to any one of claims 1 to 3.
8. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the face recognition method according to any one of claims 1 to 3.
CN201811386658.5A 2018-11-20 2018-11-20 Face recognition method and terminal equipment Active CN109359460B (en)

Publications (2)

Publication Number Publication Date
CN109359460A 2019-02-19
CN109359460B 2021-01-08

Family

ID=65332425


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858463B (en) * 2019-02-22 2021-03-26 成都云鼎丝路信息技术有限公司 Dual-engine user identification method, system and terminal
CN110445979B (en) * 2019-06-26 2021-03-23 维沃移动通信有限公司 Camera switching method and terminal equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542879B1 (en) * 2012-06-26 2013-09-24 Google Inc. Facial recognition
CN104615997B * 2015-02-15 2018-06-19 四川川大智胜软件股份有限公司 Multi-camera-based face anti-counterfeiting method
CN106933330A * 2017-03-13 2017-07-07 Vivo Mobile Communication Co., Ltd. Display control method and mobile terminal
CN107609471A * 2017-08-02 2018-01-19 深圳元见智能科技有限公司 Face liveness detection method
CN107682634A * 2017-10-18 2018-02-09 Vivo Mobile Communication Co., Ltd. Facial image acquisition method and mobile terminal



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant