CN111027374B - Image recognition method and electronic equipment - Google Patents

Image recognition method and electronic equipment

Info

Publication number
CN111027374B
CN111027374B (application CN201911032707.XA)
Authority
CN
China
Prior art keywords
image
component
angle
camera
electronic device
Prior art date
Legal status
Active
Application number
CN201911032707.XA
Other languages
Chinese (zh)
Other versions
CN111027374A (en)
Inventor
徐顺海
Current Assignee
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Device Co Ltd
Priority to CN201911032707.XA
Publication of CN111027374A
Priority to PCT/CN2020/108709 (WO2021082620A1)
Application granted
Publication of CN111027374B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/0206Portable telephones comprising a plurality of mechanically joined movable body parts, e.g. hinged housings
    • H04M1/0208Portable telephones comprising a plurality of mechanically joined movable body parts, e.g. hinged housings characterized by the relative motions of the body parts
    • H04M1/0214Foldable telephones, i.e. with body parts pivoting to an open position around an axis parallel to the plane they define in closed position
    • H04M1/0216Foldable in one direction, i.e. using a one degree of freedom hinge
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026Details of the structure or mounting of specific components
    • H04M1/0264Details of the structure or mounting of specific components for a camera module assembly
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026Details of the structure or mounting of specific components
    • H04M1/0266Details of the structure or mounting of specific components for a display module assembly
    • H04M1/0268Details of the structure or mounting of specific components for a display module assembly including a flexible display panel
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of biometric authentication, and in particular to an image recognition method and an electronic device. The method is applied to an electronic device provided with a first component and a second component, where the first component includes at least a first camera and a first display screen, and the second component includes at least a second camera. The method includes: when an included angle is formed between the second component and the first component, capturing a first image of an object to be verified through the first camera, and capturing a second image of the object to be verified through the second camera, where the capturing times of the first image and the second image are the same; and verifying the identity of the object to be verified according to the first image and the second image, or sending the first image and the second image to another electronic device so that the other electronic device verifies the identity of the object to be verified according to the two images.

Description

Image recognition method and electronic equipment
Technical Field
The application relates to the technical field of biometric authentication, and in particular to an image recognition method and an electronic device.
Background
Currently, face recognition is applied in many scenarios, such as face unlocking and face payment. In these scenarios, the electronic device verifies the user's identity through face recognition: it collects a face image of the user and compares it with the user's face image stored in a database. If the images match, identity verification passes; if not, it fails.
The current face recognition process takes a long time and is inefficient, and the following ways of spoofing verification with a forged face are common:
a. Using a display device such as a tablet computer to display a pre-recorded video or pre-taken photo of the person, substituting it for the person's real face to spoof verification.
b. Making a mask from materials such as silicone or rubber, exposing only parts such as the eyes, to spoof verification.
These spoofing methods pose a considerable security risk to users.
Disclosure of Invention
The embodiments of the present application provide an image recognition method and an electronic device, which can capture a front face image and a side face image of an object to be verified at the same time during face recognition, shortening the image acquisition time, improving the efficiency of face recognition, and reducing the room for spoofing verification.
In a first aspect, an embodiment of the present application provides an image recognition method, applied to an electronic device provided with a first component and a second component, where the first component includes at least a first camera and a first display screen, the second component includes at least a second camera, and the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect. The method includes: capturing a first image of an object to be verified through the first camera when an included angle exists between the second component and the first component; capturing a second image of the object to be verified through the second camera, where the capturing times of the first image and the second image are the same; and verifying the identity of the object to be verified according to the first image and the second image, or sending the first image and the second image to another electronic device so that the other electronic device verifies the identity of the object to be verified according to the first image and the second image.
That is, when the second component is bent relative to the first component, a front face image (or side face image) of the object to be verified can be acquired through the first camera on the first component while a side face image (or front face image) is acquired through the second camera on the second component. Front face and side face acquisition thus proceed in parallel, shortening the image acquisition time for face recognition and improving its efficiency. Because the capturing times of the front face image and the side face image are the same, the room for spoofing verification with a forged face is reduced and the security level of face recognition is raised.
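For illustration only, the following minimal sketch outlines this parallel-capture flow; the camera objects, the local verifier callback, and the remote peer are hypothetical stand-ins rather than the claimed implementation.

```python
# Sketch of the first-aspect method: capture a front image and a side image
# while the components form an included angle, then verify locally or send
# both images to another device for verification.
def recognize(front_cam, side_cam, verifier=None, remote=None):
    first_image = front_cam.capture()    # camera B1 on the first component
    second_image = side_cam.capture()    # camera B2 on the second component, same moment
    if verifier is not None:
        # Verify the identity of the object to be verified on-device.
        return verifier(first_image, second_image)
    # Or delegate verification to another electronic device.
    remote.send(first_image, second_image)
```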
With reference to the first aspect, in a first possible implementation manner of the first aspect, the electronic device is configured with a foldable screen.
That is, in this implementation, the first component and the second component may be provided on the two sides of the folding structure of the foldable-screen electronic device; through the folding structure, the second component can be bent relative to the first component, and the area of the foldable screen corresponding to the first component can serve as the first display screen.
With reference to the first aspect, in a second possible implementation manner of the first aspect, the second component further includes a second display screen, and the first display screen and the second display screen are two independent display screens.
That is, in this implementation, the electronic device may be configured with two independent physical display screens, each mounted on one of two components that can be bent relative to each other.
With reference to the first aspect, in a third possible implementation manner of the first aspect, the first image is an image captured by the first camera when the bending angle of the second component relative to the first component is a preset first angle.
That is, in this implementation, when the bending angle between the first component and the second component reaches the preset angle, the image captured by the first camera and the image captured by the second camera may be used as the images for verifying the identity of the object to be verified. Using a side face image photographed at a specific angle for verification improves the security level of face recognition.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the method further includes: displaying a progress bar on the first display screen, where the progress bar is used for prompting the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
That is, in this implementation manner, a progress bar may be displayed on the first display screen to guide the user in bending the second component to the preset first angle relative to the first component, improving the efficiency of image acquisition during face recognition.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the progress bar is composed of a plurality of progress areas, and each of the plurality of progress areas corresponds to a different one of a plurality of angles; the method further includes: when the second component is bent relative to the first component at any one of the plurality of angles, displaying the progress area corresponding to that angle in a preset display manner.
That is, in this implementation manner, each progress area in the progress bar corresponds to a different angle; when the second component is bent at a certain angle relative to the first component, the progress area corresponding to that angle is displayed in a preset manner, reminding the user of the current bending angle between the second component and the first component.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the first image is an image captured by the first camera when the included angle between the second component and the first component is a second angle; the method further includes: when the included angle between the second component and the first component is a third angle different from the second angle, capturing a third image of the object to be verified through the first camera, and capturing a fourth image of the object to be verified through the second camera, where the capturing times of the third image and the fourth image are the same. The verifying the identity of the object to be verified according to the first image and the second image includes: verifying the identity of the object to be verified according to the first image, the second image, the third image and the fourth image. Alternatively, the sending the first image and the second image to another electronic device includes: sending the first image, the second image, the third image and the fourth image to the other electronic device so that the other electronic device verifies the identity of the object to be verified according to the four images.
That is, in this implementation manner, front face images and side face images captured by the first camera and the second camera at different bending angles of the second component relative to the first component can all be used to verify the identity of the object to be verified; in particular, side face images photographed at different angles can be combined with their corresponding front face images, improving the security level of face recognition.
With reference to the first aspect, in a seventh possible implementation manner of the first aspect, the method further includes: displaying a positioning area on the first display screen, so that the object to be verified adjusts the position of the head portrait in the first display screen until the head portrait enters the positioning area.
That is, in this implementation manner, a positioning area can be displayed on the first display screen, through which the object to be verified can adjust the position of the head portrait in the first display screen; the head position of the object to be verified is thereby adjusted, improving the user experience.
With reference to the first aspect, in an eighth possible implementation manner of the first aspect, the method further includes: determining a first feature point of the face image in the first image; and adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
In this implementation manner, a face image deviating from the positioning area is adjusted into the positioning area, so that a qualified image usable for identity verification can be obtained quickly without the user repeatedly aligning the head portrait with the positioning area, improving the user's operating experience.
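For illustration only, a sketch of this feature-point adjustment is given below; detect_landmark and the use of a wrap-around shift in place of a true translation are assumptions made for brevity.

```python
import numpy as np

def recenter_faces(first_img, second_img, target_xy, detect_landmark):
    # Locate the first feature point (e.g., a nose-tip landmark) in the first image.
    x, y = detect_landmark(first_img)
    dx, dy = int(target_xy[0] - x), int(target_xy[1] - y)
    # Shift both images by the same offset so the paired views stay aligned;
    # np.roll wraps around and merely stands in for a proper translation.
    return (np.roll(first_img, (dy, dx), axis=(0, 1)),
            np.roll(second_img, (dy, dx), axis=(0, 1)))
```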
In a second aspect, an embodiment of the present application provides an image recognition apparatus, applied to an electronic device provided with a first component and a second component, where the first component includes at least a first camera and a first display screen, the second component includes at least a second camera, and the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect. The apparatus includes:
a first capturing unit, configured to capture, through the first camera, a first image of an object to be verified when an included angle exists between the second component and the first component;
a second capturing unit, configured to capture a second image of the object to be verified through the second camera, where the capturing times of the first image and the second image are the same;
a verification unit, configured to verify the identity of the object to be verified according to the first image and the second image, or to send the first image and the second image to another electronic device so that the other electronic device verifies the identity of the object to be verified according to the first image and the second image.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the electronic device is configured with a foldable screen.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the second component further includes a second display screen, and the first display screen and the second display screen are two independent display screens.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the first image is an image captured by the first capturing unit through the first camera when the bending angle of the second component relative to the first component is a preset first angle.
With reference to the third possible implementation manner of the second aspect, in a fourth possible implementation manner of the second aspect, the apparatus further includes: a first display unit, configured to display a progress bar on the first display screen, where the progress bar is used for prompting the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the progress bar is composed of a plurality of progress areas, and each of the plurality of progress areas corresponds to a different one of a plurality of angles; and the first display unit is configured to display, when the second component is bent relative to the first component at any one of the plurality of angles, the progress area corresponding to that angle in a preset display manner.
With reference to the second aspect, in a sixth possible implementation manner of the second aspect, the first image is an image captured by the first capturing unit through the first camera when the included angle between the second component and the first component is a second angle; the first capturing unit is further configured to capture, through the first camera, a third image of the object to be verified when the included angle between the second component and the first component is a third angle different from the second angle; the second capturing unit is further configured to capture a fourth image of the object to be verified through the second camera, where the capturing times of the third image and the fourth image are the same; and the verification unit is further configured to verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image, or to send the first image, the second image, the third image and the fourth image to the other electronic device so that the other electronic device verifies the identity of the object to be verified according to the four images.
With reference to the second aspect, in a seventh possible implementation manner of the second aspect, the apparatus further includes: a second display unit, configured to display a positioning area on the first display screen so that the object to be verified adjusts the position of the head portrait in the first display screen until the head portrait enters the positioning area.
With reference to the second aspect, in an eighth possible implementation manner of the second aspect, the apparatus further includes: a determining unit, configured to determine a first feature point of the face image in the first image; and an adjusting unit, configured to adjust the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
It will be appreciated that the image recognition apparatus provided in the second aspect is configured to perform the corresponding method provided in the first aspect, and therefore, the advantages achieved by the image recognition apparatus may refer to the advantages in the corresponding method provided in the first aspect, which are not described herein.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory, a first component, and a second component, where the first component includes at least a first camera and a first display screen, and the second component includes at least a second camera; the second component is bendable relative to the first component; and when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect.
The memory is used for storing computer instructions;
when the electronic device is running, the processor executes the computer instructions to cause the electronic device to perform:
capturing a first image of an object to be verified by the first camera when an included angle exists between the second component and the first component;
capturing a second image of the object to be verified through the second camera, where the capturing times of the first image and the second image are the same;
verifying the identity of the object to be verified according to the first image and the second image; or sending the first image and the second image to another electronic device so that the other electronic device verifies the identity of the object to be verified according to the first image and the second image.
With reference to the third aspect, in a first possible implementation manner of the third aspect, the electronic device is configured with a foldable screen.
With reference to the third aspect, in a second possible implementation manner of the third aspect, the second component further includes a second display screen, and the first display screen and the second display screen are two independent display screens.
With reference to the third aspect, in a third possible implementation manner of the third aspect, the first image is an image captured by the first camera when the bending angle of the second component relative to the first component is a preset first angle.
With reference to the third possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, the processor executes the computer instructions to cause the electronic device to further perform: displaying a progress bar on the first display screen, where the progress bar is used for prompting the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
With reference to the fourth possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the progress bar is composed of a plurality of progress areas, and each of the plurality of progress areas corresponds to a different one of a plurality of angles; the processor executes the computer instructions to cause the electronic device to further perform: when the second component is bent relative to the first component at any one of the plurality of angles, displaying the progress area corresponding to that angle in a preset display manner.
With reference to the third aspect, in a sixth possible implementation manner of the third aspect, the first image is an image captured by the first camera when the included angle between the second component and the first component is a second angle; the processor executes the computer instructions to cause the electronic device to further perform: when the included angle between the second component and the first component is a third angle different from the second angle, capturing a third image of the object to be verified through the first camera, and capturing a fourth image of the object to be verified through the second camera, where the capturing times of the third image and the fourth image are the same; verifying the identity of the object to be verified according to the first image, the second image, the third image and the fourth image; or sending the first image, the second image, the third image and the fourth image to the other electronic device so that the other electronic device verifies the identity of the object to be verified according to the four images.
With reference to the third aspect, in a seventh possible implementation manner of the third aspect, the processor executes the computer instructions to cause the electronic device to further perform: displaying a positioning area on the first display screen so that the object to be verified adjusts the position of the head portrait in the first display screen until the head portrait enters the positioning area.
With reference to the third aspect, in an eighth possible implementation manner of the third aspect, the processor executes the computer instructions to cause the electronic device to further perform: determining a first feature point of the face image in the first image; and adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
It may be appreciated that the electronic device provided in the third aspect is configured to perform the corresponding method provided in the first aspect, and therefore, the advantages achieved by the electronic device may refer to the advantages in the corresponding method provided in the first aspect, which are not described herein.
In a fourth aspect, embodiments of the present application provide a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of the first aspect.
It will be appreciated that the computer storage medium provided in the fourth aspect is configured to perform the corresponding method provided in the first aspect, and thus, the advantages achieved by the computer storage medium may refer to the advantages in the corresponding method provided in the first aspect, and will not be described herein.
In a fifth aspect, embodiments of the present application provide a computer program product comprising program code for implementing the method according to the first aspect when the program code is executed by a processor in an electronic device.
It will be appreciated that the computer program product provided in the fifth aspect is for performing the corresponding method provided in the first aspect, and therefore, the advantages achieved by the computer program product may refer to the advantages in the corresponding method provided in the first aspect, and will not be described in detail herein.
Drawings
FIG. 1A is a schematic diagram of an example of capturing a front face image of a user;
FIG. 1B is a schematic diagram of an example of capturing a left side face image of a user;
FIG. 1C is a schematic diagram of an example of capturing a right side face image of a user;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 5A is a schematic top view of an image capture example provided in an embodiment of the present application;
FIG. 5B is a schematic front view of the image capture example shown in FIG. 5A;
FIG. 6A is a schematic top view of an image capture example provided in an embodiment of the present application;
FIG. 6B is a schematic front view of the image capture example shown in FIG. 6A;
FIG. 7A is a schematic top view of an image capture example provided in an embodiment of the present application;
FIG. 7B is a schematic front view of the image capture example shown in FIG. 7A;
FIG. 8A is a schematic diagram of an example progress bar of the electronic device provided in an embodiment of the present application when the component A2 and the component A1 are in a flat state;
FIG. 8B is a schematic diagram of an example progress bar of the electronic device when the component A2 is bent at 25° relative to the component A1;
FIG. 8C is a schematic diagram of an example progress bar of the electronic device when the component A2 is bent at 50° relative to the component A1;
FIG. 9A is a schematic diagram of an image capturing effect according to an embodiment of the present application;
FIG. 9B is a schematic diagram of another image capturing effect according to an embodiment of the present application;
FIG. 9C is a schematic diagram of still another image capturing effect according to an embodiment of the present application;
FIG. 9D is a schematic diagram of still another image capturing effect according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a software architecture according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another software architecture according to an embodiment of the present application;
FIG. 13 is a flowchart of an image recognition method according to an embodiment of the present application;
FIG. 14 is a schematic block diagram of an image recognition apparatus according to an embodiment of the present application;
FIG. 15 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, rather than all, of the embodiments of the present application.
Reference in this specification to "one embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like appearing in this specification do not necessarily refer to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly specified otherwise.
In the description of this specification, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
In the description of this specification, the terms "first", "second", and the like are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more such features. The terms "comprising", "including", "having", and their variants mean "including but not limited to", unless expressly specified otherwise.
In order to improve the security level of identity verification using a human face, face recognition is generally performed with face liveness detection. A conventional face liveness detection algorithm proceeds as follows.
1. The user taps the recognition button, which invokes the camera.
2. Camera permission is obtained; the permission is confirmed by the user when the application is installed, or through another permission confirmation flow before use.
3. The page is initialized: a camera page is created, and space is allocated for the mouth-opening and head-shaking image data to be collected.
4. Recognition starts, and face-frame recognition is performed.
5. Face detection: determine whether a human face is detected.
6. After a face is detected, determine the position of the face.
7. If the position is suitable, verify whether the user opens the mouth.
8. After mouth-opening verification, verify whether the user shakes the head.
9. After head-shaking verification, the front face and the side face are photographed with a 3-second countdown.
10. After photographing, the user chooses to retake or upload the front face and side face photos, which are compared with face image data collected in advance in a database (such as an identity card image or a pre-collected face image).
In general, the conventional face liveness detection procedure is as follows.
(1) As shown in FIG. 1A, the user aligns the front face into the positioning area;
(2) The user is prompted to blink, so as to capture an image with a blinking expression, and to open the mouth, so as to capture an image with a mouth-open expression;
(3) As shown in FIG. 1B, the user is prompted to turn left so that an image of the left side face is captured; and as shown in FIG. 1C, the user is prompted to turn right so that an image of the right side face is captured;
(4) Step (3) is repeated at least twice;
(5) Recognition and verification are performed according to the captured images.
The above face liveness detection method involves camera invocation and background algorithm processing, and the front face image and the side face images are captured serially, so it takes a long time. Specifically, face image data collected on the terminal side generally needs to be uploaded to the cloud, where face recognition verification is completed by a verification algorithm; the terminal therefore photographs and uploads face image data serially. A single shot of the front face or a side face (left or right) generally takes at least 3 seconds, so shooting the front face and side faces multiple times takes a long time.
Moreover, when the user shakes the head, different sides of the face are photographed at different moments by the same camera, or by cameras with identical or nearly identical shooting angles, so this face authentication method is based on verification of planar forms. In addition, an electronic device without components such as an infrared projector cannot model a scene stereoscopically and can only capture planar images rather than stereoscopic ones. This leaves room for spoofing verification with a forged face, for example by substituting front-face and side-face photographs for the real face.
The embodiments of the present application provide an image recognition method that can capture a front face image and a side face image of the object to be verified at the same time, so that both images can be obtained in a short time for identity verification. In addition, since the capturing times of the two images are the same, the face pose at the moment the front face image is taken is identical to the pose at the moment the side face image is taken; the two images therefore together form three-dimensional information about the object to be verified, reducing the room for spoofing verification with a forged face and raising the security level of face recognition verification.
The image recognition method provided in the embodiments of the present application can be applied to an electronic device. The electronic device may be a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device, or a laptop computer. Exemplary embodiments of portable electronic devices include, but are not limited to, devices running iOS, Android, Microsoft, or other operating systems. The type of the electronic device is not specifically limited in the embodiments of the present application.
In some embodiments, referring to FIG. 2, the electronic device may include a component A1 and a component A2. The component A1 includes at least a camera B1 and a display screen C1 (not shown in the figure). The display screen C1 may display images captured by the camera B1. The camera B1 is a front camera whose shooting direction (also called the object-side direction) is the direction the display screen C1 faces. The component A2 includes at least a camera B2. As shown in FIG. 2, the component A2 may be bent at an angle θ relative to the component A1, where θ is the angle between the component A2 and the extension line of the component A1 toward the component A2 side. In one example, the angle θ may be any value in 0-180°. In another example, the angle θ may be one of several preset angles in the range 0-180°, such as 5°, 10°, or 15°. The electronic device may include an angle sensor that detects the included angle between the component A1 and the component A2. It is easy to see that this included angle is supplementary to the angle θ, so θ can be obtained from the included angle detected by the angle sensor. With continued reference to FIG. 2, when the component A2 is bent at the angle θ relative to the component A1, the shooting direction of the camera B1 and the shooting direction of the camera B2 intersect, that is, the acquisition area of the camera B1 and the acquisition area of the camera B2 intersect. The electronic device can thus capture facial images of different sides of the object to be verified at the same time.
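The relationship between the sensor reading and the bending angle reduces to a one-line computation; the following sketch is illustrative only.

```python
def bend_angle(included_angle_deg: float) -> float:
    """Bend angle θ of the component A2 relative to the component A1,
    i.e., the supplement of the included angle reported by the angle sensor."""
    return 180.0 - included_angle_deg

assert bend_angle(180.0) == 0.0   # flat (unfolded) state
assert bend_angle(155.0) == 25.0  # e.g., a 25° bend as in FIG. 8B
```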
In some embodiments, referring to FIG. 3, the electronic device may be configured with a folding screen. As shown in FIG. 3, the folding screen may be divided into a display screen C1, a bending area, and a display screen C2. The electronic device may include a component A1, a bending portion, and a component A2; through the bending portion, the component A2 may be bent at any angle of 0-180° relative to the component A1. The component A1 may include the display screen C1 and a camera B1, and the component A2 may include the display screen C2 and a camera B2. The bending portion may correspond to the bending area of the folding screen.
In some embodiments, referring to FIG. 4, the electronic device may be configured with at least two independent display screens. As shown in FIG. 4, the electronic device may include a component A1, a bending portion, and a component A2, and the component A2 may be bent at any angle of 0-180° relative to the component A1. The component A1 may include a display screen C1 and a camera B1. The component A2 may include a display screen C2 and a camera B2.
The image recognition method provided in the embodiments of the present application can be applied to various face recognition scenarios. Next, several applicable scenarios are described by way of example.
In some embodiments, the image recognition method provided in the embodiments of the present application may be applied to a screen unlocking scenario. When the screen of the electronic device is off, or lit but locked, the user can bend the electronic device so that it acquires a front face image and a side face image of the user captured at the same time. How these images are obtained is described in detail below. The electronic device may verify the identity of the user from the captured front face image and side face image. Specifically, it may check the similarity between the captured front face image and a standard front face image, and the similarity between the captured side face image and a standard side face image. If both similarities exceed their respective thresholds, verification succeeds; otherwise, verification fails. If verification succeeds, the screen is unlocked and the device enters the desktop, or the interface that was displayed before the screen was turned off or locked.
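For illustration only, the unlock decision described above can be sketched as follows; the similarity function and the threshold values are placeholders, since the patent does not prescribe a specific face-matching algorithm.

```python
def verify_identity(front_img, side_img, std_front, std_side, similarity,
                    front_threshold=0.80, side_threshold=0.75):
    # Both the front-face and the side-face comparison must exceed their
    # respective similarity thresholds for verification to pass.
    return (similarity(front_img, std_front) >= front_threshold and
            similarity(side_img, std_side) >= side_threshold)
```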
In one illustrative example, the standard front face image and the standard side face image may be images previously acquired and saved by the electronic device. In one example, the user may have the electronic device acquire the standard front face image and the standard side face image through an "add face data" function in the "Settings" menu. How the standard front face image and the standard side face image are acquired is described in detail below.
In some embodiments, the image recognition method provided in the embodiments of the present application may be applied to an APP (application) face login scenario. In one illustrative example, in the login settings interface of an application that supports face login, the user may enable the face login mode and have the electronic device acquire a standard front face image and a standard side face image of the user. When logging in to the application, the user can choose face login, so that the electronic device acquires the user's front face image and side face image and verifies them against the standard front face image and standard side face image. If verification succeeds, the user is logged in to the application; if verification fails, login is refused.
In some embodiments, the image recognition method provided in the embodiments of the present application may be applied to a face payment scenario. In one example, in the payment settings interface of an application that supports face payment, the user may enable face payment and have the electronic device acquire a standard front face image and a standard side face image of the user. When the user makes a face payment, verification is performed against the standard front face image and the standard side face image. If verification succeeds, the payment operation is executed; if verification fails, the payment operation is refused.
In one example, the standard front face image and the standard side face image may be stored locally on the electronic device rather than uploaded to the application's server, to reduce the risk of leaking the user's biometric information. When face payment is enabled, the application client installed on the electronic device can negotiate an authentication scheme with the remote application server (for example, a public/private key scheme based on a digital signature certificate). When the user's face verification succeeds, the application client sends the authentication information agreed in the negotiation to the application server, so that the server can execute operations such as fund transfer to complete the payment.
The above embodiments are merely illustrative, and not restrictive, of face recognition scenarios to which the image recognition method provided in the embodiments of the present application may be applied. The embodiment of the application can also be applied to other face recognition scenes, and is not listed here.
Next, the image recognition method provided in the embodiments of the present application is described using an electronic device configured with a folding screen as an example.
Referring to FIG. 5A and FIG. 5B, in a face recognition scenario, a user (also called an object to be verified) may face the display screen C1 so that the camera B1 can capture an image of the user's face. The electronic device is a portable electronic device as described above. When using the electronic device for face recognition, the user can hold it with the display screen C1 facing the front of the face. In this scenario, the user's face is typically about 20 cm from the display screen C1; considering different users' operating habits and body dimensions (for example, arm length), the distance is generally between 10 and 30 cm.
As shown in FIG. 5A and FIG. 5B, when the face recognition function of the electronic device is started, the camera B1 may capture an image of a face located within its shooting range, and the display screen C1 may display the image captured by the camera B1. In some embodiments, as shown in FIG. 5A, the display screen C1 may display a positioning area together with a prompt (e.g., "Please put your head portrait into the positioning area") to prompt the user to move the head portrait into the positioning area.
As shown in FIG. 5A, in the unfolded state of the folding screen, the acquisition directions of the camera B1 and the camera B2 are parallel, so when the user faces the display screen C1, it is difficult for the camera B2 to capture, or to effectively capture, a side face image of the user. While still facing the display screen C1, the user can bend the component A2 relative to the component A1 so that the acquisition area of the camera B2 intersects the acquisition area of the camera B1 and the camera B2 captures, or effectively captures, a side face image of the user. Bending the component A2 relative to the component A1 may also be described as bending the display screen C2 relative to the display screen C1.
Referring to FIG. 6A and FIG. 6B, with the component A2 bent relative to the component A1, the camera B1 may capture a front face image of the user and the camera B2 may capture a side face image of the user. The front face image and the side face image with the same capturing time may be determined as the images for verifying the user's identity. In one example, the camera B2 may be a wide-angle camera (e.g., one with a field of view of 120° or more) so that the user's side face stays within the shooting range of the camera B2 while the component A2 is bent relative to the component A1.
In some embodiments, as shown in FIG. 6A and FIG. 6B, an image P1 captured by the camera B1 with the component A2 bent at an angle θ1 relative to the component A1 may be determined as the front face image for verifying the user's identity, and an image P2 captured by the camera B2 in the same state may be determined as the side face image for verifying the user's identity, where the capturing times of the image P1 and the image P2 are the same. The angle θ1 may be any value between 0° and 90° (excluding 0°). In one example, the angle θ1 may be any value between 10° and 80°; in another, any value between 20° and 70°; in another, 25°, 50°, 70°, or the like, which is not exhaustively enumerated.
The angle θ1 may be a preset threshold for triggering the determination of images for verifying the user's identity; that is, in the face recognition scenario, whenever the relative bending angle of the component A2 and the component A1 reaches θ1, the electronic device determines the images captured by the camera B1 and the camera B2 at that moment as the images for verifying the user's identity.
In a specific implementation, the angle sensor of the electronic device may detect the included angle between the component A2 and the component A1, from which the electronic device calculates the bending angle of the component A2 relative to the component A1 (the bending angle is the supplement of the detected included angle). When the calculated bending angle reaches θ1, the images captured by the camera B1 and the camera B2 at that moment are determined as images for verifying the user's identity.
More specifically, in the face recognition scenario the cameras capture images (i.e., record video) at a certain frame rate, for example 30 fps (frames per second) or 60 fps, so there is a time interval between the capturing moments of adjacent frames; at 60 fps, adjacent frames are one sixtieth of a second apart. If, at the moment the electronic device determines that the bending angle has reached θ1, the camera B1 and the camera B2 each capture an image, those images are determined as images for verifying the user's identity. If the time T1 at which the bending angle reaches θ1 falls within the interval between adjacent frames of the camera B1 or the camera B2, the image captured by that camera closest to the time T1 is determined as an image for verifying the user's identity.
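For illustration only, the frame selection around the trigger time T1 can be sketched as follows; frame objects with a timestamp_ms field are an assumed interface, not the actual device API.

```python
FRAME_INTERVAL_MS = 1000 / 60  # ~16.7 ms between adjacent frames at 60 fps

def select_verification_frames(t1_ms, frames_b1, frames_b2):
    """Pick, from each camera's recent frames, the one captured nearest to
    the time T1 at which the bending angle reached θ1."""
    front = min(frames_b1, key=lambda f: abs(f.timestamp_ms - t1_ms))
    side = min(frames_b2, key=lambda f: abs(f.timestamp_ms - t1_ms))
    return front, side
```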
In some embodiments, referring to FIG. 5B, in the face recognition scenario, when the component A2 and the component A1 are not yet relatively bent at the angle θ1, the electronic device may prompt the user to bend the component A2 until it is bent at θ1 relative to the component A1. In one example, the states in which the component A2 is bent relative to the component A1 at angles up to and including θ1 may be defined as the first level. A progress bar corresponding to the first level may be displayed on the display screen C1, with its areas corresponding to different angles. Whenever the component A2 is bent at some angle relative to the component A1, the area of the progress bar corresponding to that angle is displayed in a preset manner (e.g., highlighted) to indicate the current relative bending angle of the component A2 and the component A1. In one example, referring to FIG. 5B and FIG. 6B, the progress bar may be a continuous bar without divided regions, corresponding to a continuous range of angles. In another example, referring to FIG. 8A and FIG. 8B, the progress bar consists of a number of progress blocks, each corresponding to a different preset angle, with adjacent blocks differing by a preset value (e.g., 5°). In one example, information prompting the user to bend the component A2 to θ1 relative to the component A1, such as "Please bend to the first level", may be displayed on the display screen C1. In another example, a prompt voice such as "Please bend to the first level" may be played.
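For illustration only, the block-style progress bar can be sketched as a mapping from the current bending angle to the progress blocks to display; the 5° spacing and θ1 = 25° mirror the example values above and are assumptions.

```python
def highlighted_blocks(bend_angle_deg, step_deg=5, theta1_deg=25):
    """Indices of the progress blocks to display in the preset manner."""
    n_blocks = theta1_deg // step_deg
    filled = min(int(bend_angle_deg // step_deg), n_blocks)
    return list(range(filled))

print(highlighted_blocks(0))   # [] — flat state (FIG. 8A)
print(highlighted_blocks(25))  # [0, 1, 2, 3, 4] — first level reached (FIG. 8B)
```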
Next, the relationship between the capturing time of the image P1 and the capturing time of the image P2 is discussed in detail.
In some embodiments, the capturing times of the image P1 and the image P2 being the same specifically means that the image P1 and the image P2 are captured at the same moment. In these embodiments, the camera B1 and the camera B2 operate synchronously, that is, each frame is captured at the same moment by both cameras. Therefore, the front face image captured by the camera B1 at a time T2 and the side face image captured by the camera B2 at the same time T2 can be used as the images for verifying the user's identity.
In other embodiments, the capturing times of the image P1 and the image P2 being the same specifically means that the interval between the capturing time of the image P1 and the capturing time of the image P2 is less than a duration threshold t1. In these embodiments, the camera B1 and the camera B2 operate asynchronously, that is, their frames are captured at different moments.
In one example, the front face image captured by the camera B1 at a time T3 (i.e., the image P1) may be determined as the front face image for verifying the user's identity, and the side face image most recently captured by the camera B2 before the time T3 (or the first one captured by the camera B2 after the time T3) may be determined as the side face image for verifying the user's identity (i.e., the image P2). It is easy to see that the interval between the capturing time of that side face image and the time T3 is smaller than the interval between adjacent frames of the camera B2, which can therefore be taken as the duration threshold t1.
In another example, the side face image captured by the camera B2 at a time T4 (i.e., the image P2) may be determined as the side face image for verifying the user's identity, and the front face image most recently captured by the camera B1 before the time T4 (or the first one captured by the camera B1 after the time T4) may be determined as the front face image for verifying the user's identity (i.e., the image P1). The interval between the capturing time of that front face image and the time T4 is smaller than the interval between adjacent frames of the camera B1, which can be taken as the duration threshold t1.
Regardless of whether the duration threshold t1 is the time interval at which the camera B2 captures adjacent images or the time interval at which the camera B1 captures adjacent images, the duration threshold t1 is extremely short. Therefore, the capturing time of the front face image determined for verifying the user identity and the capturing time of the side face image determined for verifying the user identity can be regarded as the same, or substantially the same.
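The asynchronous case above amounts to nearest-timestamp matching with tolerance t1. A minimal sketch under that reading follows; the frame layout and all helper names are assumptions for illustration.

```python
# Illustrative sketch only: pair the front face frame from camera B1 with
# the side face frame from camera B2 whose capturing time is nearest, and
# accept the pair only if the interval is below the duration threshold t1.

from typing import List, Optional, Tuple

Frame = Tuple[float, bytes]  # (capture_time_in_seconds, image_data) -- assumed

def pair_frames(front: Frame, side_frames: List[Frame],
                t1: float) -> Optional[Tuple[Frame, Frame]]:
    """Return (front, nearest side frame) if captured within t1, else None."""
    t_front = front[0]
    nearest = min(side_frames, key=lambda f: abs(f[0] - t_front))
    if abs(nearest[0] - t_front) < t1:
        return front, nearest
    return None

# If camera B2 captures adjacent images every 1/30 s, that interval can
# serve as the duration threshold t1, as described above.
t1 = 1.0 / 30
front = (10.000, b"P1")
sides = [(9.970, b"..."), (10.004, b"P2"), (10.037, b"...")]
print(pair_frames(front, sides, t1))  # pairs P1 with P2, 4 ms apart
```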
In some embodiments, as shown in fig. 7A and 7B, the electronic device may further determine an image P3, captured by the camera B1 in a state in which the component A2 is bent relative to the component A1 at an angle θ2 different from the angle θ1, as a front face image for verifying the user identity, and determine an image P4, captured by the camera B2 in the state in which the component A2 is bent relative to the component A1 at the angle θ2, as a side face image for verifying the user identity. The capturing time of the image P3 and the image P4 is the same. The image P3 and the image P4 may be used in combination with the image P1 and the image P2 described above to verify the identity of the user.
In this embodiment, the electronic device may capture images in different bending states and determine the images in the different bending states as images for verifying the identity of the user, thereby improving the security level of face recognition. The angle θ2 may be a further preset threshold for triggering the determination of images for verifying the identity of the user: each time the relative bending angle of the component A2 and the component A1 reaches the angle θ2, the electronic device determines the images captured by the camera B1 and the camera B2 at that moment as images for verifying the identity of the user.
For the capturing time of the image P3 and the image P4, reference may be made to the description of the capturing time of the image P1 and the capturing time of the image P2, and details are not described herein again.
In one illustrative example, the angle θ2 is less than or equal to 90° and greater than the angle θ1, so that after bending the component A2 from the straight state of the component A2 and the component A1 to the state of being bent relative to the component A1 at the angle θ1, the user may continue to bend the component A2 without changing the direction of bending, achieving the bend at the angle θ2 relative to the component A1 with a good operational experience. The angle θ2 may be any value between the angle θ1 and 90° (excluding the angle θ1). In one example, the angle θ2 may be any value between 30° and 90°. In one example, the angle θ2 may be any value between 40° and 90°. In one example, the angle θ2 may be 40°, 50°, 70°, or the like, which is not exhaustively illustrated.
In one example, referring to fig. 6B, when the component A2 and the component A1 are relatively bent at the angle θ1, or after they have been relatively bent at the angle θ1, the electronic device may prompt the user to perform an operation of bending the component A2 to a state of being bent relative to the component A1 at the angle θ2. In one example, a state in which the component A2 is bent relative to the component A1 at an angle between the angle θ1 and the angle θ2 (including the angle θ2, excluding the angle θ1) may be defined as the second level. A progress bar corresponding to the second level may be displayed on the display screen C1, and each area of the progress bar may correspond to a different angle. Whenever the component A2 is bent at an angle relative to the component A1, the area on the progress bar corresponding to that angle is displayed in a preset manner (e.g., highlighted) to prompt the user of the current relative bending angle of the component A2 and the component A1. In one example, referring to fig. 6B and 7B, the progress bar may be a bar shape without divided regions, corresponding to a continuous range of angles. In one example, referring to fig. 8B and 8C, the progress bar is composed of a plurality of progress blocks, each corresponding to a different preset angle, with adjacent progress blocks differing by a preset angle value (e.g., 5°). In one example, information prompting the user to bend the component A2 to a state of being bent relative to the component A1 at the angle θ2, such as "please bend to the second level," may be displayed on the display screen C1. In one example, a prompt voice, such as "please bend to the second level," may be played.
In some embodiments, the electronic device may further determine an image P5, captured by the camera B1 in a state in which the component A2 is bent relative to the component A1 at an angle θ3 different from the angles θ2 and θ1, as a front face image for verifying the user's identity, and an image P6, captured by the camera B2 in the state in which the component A2 is bent relative to the component A1 at the angle θ3, as a side face image for verifying the user's identity. The capturing time of the image P5 and the image P6 is the same. The image P5 and the image P6 may be used in combination with the images P3, P4, P1, and P2 described above to verify the identity of the user.
In this embodiment, the electronic device may capture images in multiple bending states and determine the images in the multiple bending states as images for verifying the user identity, thereby improving the security level of face recognition. The angle θ3 may be a further preset threshold for triggering the determination of images for verifying the identity of the user: each time the relative bending angle of the component A2 and the component A1 reaches the angle θ3, the electronic device determines the images captured by the camera B1 and the camera B2 at that moment as images for verifying the identity of the user.
For the capturing time of the image P5 and the image P6, reference may be made to the description of the capturing time of the image P1 and the capturing time of the image P2, and details are not described herein again.
In one example, the angle θ3 is less than or equal to 90° and greater than the angle θ2, so that after bending the component A2 from the straight state of the component A2 and the component A1 to the state of being bent relative to the component A1 at the angle θ2, the user may continue to bend the component A2 without changing the direction of bending, achieving the bend at the angle θ3 relative to the component A1 with a good operational experience. The angle θ3 may be any value between the angle θ2 and 90° (excluding the angle θ2). In one example, the angle θ3 may be any value between 50° and 90°. In one example, the angle θ3 may be any value between 60° and 90°. In one example, the angle θ3 may be 70°, 80°, or the like, which is not exhaustively illustrated.
In one example, referring to fig. 7B, when the component A2 and the component A1 are relatively bent at the angle θ2, or after they have been relatively bent at the angle θ2, the electronic device may prompt the user to perform an operation of bending the component A2 to a state of being bent relative to the component A1 at the angle θ3. In one example, a state in which the component A2 is bent relative to the component A1 at an angle between the angle θ2 and the angle θ3 (including the angle θ3, excluding the angle θ2) may be defined as the third level. A progress bar corresponding to the third level may be displayed on the display screen C1; it may be implemented with reference to the descriptions of the progress bar corresponding to the first level and the progress bar corresponding to the second level, and details are not described herein again. In one example, the electronic device may also display, or play by voice, information for prompting the user to bend the component A2 to a state of being bent relative to the component A1 at the angle θ3, such as "please bend to the third level."
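Taken together, the trigger logic for the three levels can be sketched as follows. The threshold values, the prompt text, and the capture callback are illustrative assumptions; only the ordering θ1 < θ2 < θ3 ≤ 90° reflects the embodiments above.

```python
# Illustrative sketch only: each time the relative bending angle of the
# components A2 and A1 reaches a preset threshold, record a front/side
# image pair with the same capturing time for identity verification.

THRESHOLDS = [30.0, 50.0, 70.0]  # theta1, theta2, theta3 (assumed values)

class BendCaptureController:
    def __init__(self, capture_pair):
        # capture_pair() returns a (front_image, side_image) pair whose
        # capturing times are the same (see the pairing sketch above).
        self.capture_pair = capture_pair
        self.next_level = 0
        self.pairs = []

    def on_angle_changed(self, angle: float) -> None:
        """Called by the angle sensor whenever the bending angle changes."""
        while (self.next_level < len(THRESHOLDS)
               and angle >= THRESHOLDS[self.next_level]):
            self.pairs.append(self.capture_pair())  # (P1,P2), (P3,P4), (P5,P6)
            self.next_level += 1
            if self.next_level < len(THRESHOLDS):
                print(f"please bend to level {self.next_level + 1}")

ctrl = BendCaptureController(capture_pair=lambda: ("front", "side"))
for sensed_angle in (10, 31, 55, 72):
    ctrl.on_angle_changed(sensed_angle)
print(len(ctrl.pairs))  # 3 pairs captured, one per level
```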
When the image recognition method provided in the embodiments of the present application is applied to the electronic device shown in fig. 4, reference may be made to the embodiments shown in fig. 5A, 5B, 6A, 6B, 7A, and 7B, and details are not described herein again.
In the image recognition method provided in the embodiments of the present application, a standard front face image and a standard side face image may also be obtained; the specific obtaining manner is similar to that of obtaining the images for verifying the identity of the user, and details are not described herein again.
The image recognition method provided in the embodiments of the present application can acquire a front face image and a side face image with the same capturing time, which saves the time for acquiring the front face image and the side face image in the face recognition process and improves the user experience. In addition, because the capturing times of the front face image and the side face image are the same, the room for cheating verification by forging a face is reduced, and the security level of face recognition is improved.
Referring to fig. 9A, in a typical face recognition scenario, a positioning area for locating a face is displayed on the display screen, and the face image in the captured image is required to be located within the positioning area. Users often need to adjust their own position or the position of the electronic device multiple times to place their face image in the positioning area, so the user experience of face recognition is poor. In particular, when the component A2 of the electronic device is bent relative to the component A1 at a specific angle to obtain a front face image and a side face image with the same capturing time, the user's bending operation on the electronic device often produces shaking, which causes the user's face image to deviate from the positioning area.
The embodiments of the present application provide a method for adjusting a face image in an image, which can be applied to an electronic device. The method may acquire a feature point of the face in the front face image used for verifying the user identity, determine the relative position between the feature point and the center point of the positioning area, and then adjust, according to the relative position, the position of the face in the front face image used for verifying the user identity and the position of the face in the side face image used for verifying the user identity. The relative position may include a distance and a relative orientation. Specifically, the coordinates of the feature point (for example, a nose tip image) on the display screen C1 may be used as the position of the feature point, and the coordinates of the center point of the positioning area on the display screen C1 may be used as the position of the center point, so that the distance and the relative orientation between the two can be determined.
In some embodiments, the feature points of the frontal image used for position adjustment may be nose tip images in the frontal image.
In some embodiments, the longitudinal centerline and the lateral centerline of the front face image may be determined, and the intersection of the longitudinal centerline and the lateral centerline may be used as the feature point of the front face image for position adjustment.
In some embodiments, referring to fig. 9B, fig. 9C, and fig. 9D, a face image adjustment method provided in the embodiments of the present application is illustrated.
In this embodiment, it may be defined that, when the user normally uses the electronic device to perform face recognition, the direction indicated by the forehead of the face displayed on the display screen C1 is the upward direction, and the direction indicated by the chin of the face is the downward direction. The direction perpendicular to the up-down direction on the plane in which the display screen C1 is located may be defined as the left-right direction. The feature point of the front face image for position adjustment may be set as the nose tip image. The front face image and the side face image may be collectively referred to as face images. The position of a face image may be represented by the coordinates of the face image on the display screen.
Referring to fig. 9B, it may be set that the face in the front face image P7 captured by the camera B1 deviates upward and to the right from the positioning area; for example, the nose tip image in the front face image is located L1 cm above and L2 cm to the right of the center point of the positioning area, that is, the face in the front face image P7 deviates from the positioning area by L1 cm upward and L2 cm to the right. Accordingly, the face in the side face image P8 captured by the camera B2 at the same capturing time as the front face image P7 also deviates by L1 cm upward and L2 cm to the right. If the front face image P7 and the side face image P8 are determined as images for verifying the identity of the user, then before they are used for verification, the faces in the front face image P7 and the side face image P8 may each be adjusted downward by L1 cm, the face in the front face image P7 may be adjusted leftward by L2 cm, and the face in the side face image P8 may be adjusted leftward by L3 cm, where L3 is the product of L2 and the cosine of the angle θ1. The angle θ1 is the angle at which the component A2 is bent relative to the component A1 when the front face image P7 and the side face image P8 are captured.
Referring to fig. 9C, it may be set that the face in the front face image P9 captured by the camera B1 deviates downward from the positioning area; for example, the nose tip image in the front face image is located L4 cm below the center point of the positioning area, that is, the face in the front face image P9 deviates downward from the positioning area by L4 cm. Accordingly, the face in the side face image P10 captured by the camera B2 at the same capturing time as the front face image P9 also deviates downward by L4 cm. If the front face image P9 and the side face image P10 are determined as images for verifying the identity of the user, the faces in the front face image P9 and the side face image P10 may be adjusted upward by L4 cm before the images are used for verification.
Referring to fig. 9D, it may be set that the face in the front face image P11 captured by the camera B1 deviates upward from the positioning area; for example, the nose tip image in the front face image is located L5 cm above the center point of the positioning area, that is, the face in the front face image P11 deviates upward from the positioning area by L5 cm. Accordingly, the face in the side face image P12 captured by the camera B2 at the same capturing time as the front face image P11 also deviates upward by L5 cm. If the front face image P11 and the side face image P12 are determined as images for verifying the identity of the user, the faces in the front face image P11 and the side face image P12 may be adjusted downward by L5 cm before the images are used for verification.
When the face in the front face image deviates from the positioning area, the position of the face in the front face image and the position of the face in the corresponding side face image may be adjusted with reference to the methods shown in fig. 9B, 9C, and 9D.
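A minimal sketch of the adjustment in fig. 9B, 9C, and 9D: the offset of the nose tip feature point from the center of the positioning area is removed from both images, with the horizontal component scaled by cos(θ1) for the side face image and the vertical component applied unchanged. The coordinate convention and all names are assumptions for illustration.

```python
# Illustrative sketch only: compute the shifts that move the faces in a
# front face image and the side face image of the same capturing time back
# into the positioning area. Coordinates are (x, y) on the display screen
# in centimeters, with y increasing upward (assumed convention).

import math
from typing import Tuple

Point = Tuple[float, float]

def face_shifts(nose_tip: Point, area_center: Point,
                theta1_deg: float) -> Tuple[Point, Point]:
    """Return the (dx, dy) shifts for the front face and the side face."""
    dx = area_center[0] - nose_tip[0]  # negative: move the face leftward
    dy = area_center[1] - nose_tip[1]  # negative: move the face downward
    front_shift = (dx, dy)
    side_shift = (dx * math.cos(math.radians(theta1_deg)), dy)
    return front_shift, side_shift

# fig. 9B case: nose tip L1 = 1 cm above and L2 = 2 cm right of the center,
# with component A2 bent at theta1 = 30 degrees relative to component A1.
front, side = face_shifts(nose_tip=(2.0, 1.0), area_center=(0.0, 0.0),
                          theta1_deg=30.0)
print(front)  # (-2.0, -1.0): front face moves L2 leftward, L1 downward
print(side)   # (-1.73.., -1.0): side face moves L3 = L2*cos(theta1) leftward
```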
According to the method for adjusting a face image in an image, a face image deviating from the positioning area can be adjusted into the positioning area, so that a qualified image usable for identity verification can be obtained quickly without the user having to repeatedly align the face with the positioning area, improving the user's operational experience.
Fig. 10 shows a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The image recognition method and the face image adjustment method provided by the embodiment of the application can be applied to the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1. The 1 or N display screens 194 may include the display screen C1. When the electronic device 100 includes N display screens 194, the N display screens 194 may include the display screen C1 and the display screen C2.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, and the optical image is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 2 or N cameras 193, N being a positive integer greater than 2. The 2 or N cameras 193 may include the camera B1 and the camera B2.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 170C through the mouth, inputting a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of a conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on a short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking upon opening according to the detected opening/closing state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor may also be used to recognize the posture of the electronic device, and is applied in applications such as landscape/portrait switching and pedometers.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is for detecting temperature. In some embodiments, the electronic device 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by temperature sensor 180J exceeds a threshold, electronic device 100 performs a reduction in the performance of a processor located in the vicinity of temperature sensor 180J in order to reduce power consumption to implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid the low temperature causing the electronic device 100 to be abnormally shut down. In other embodiments, when the temperature is below a further threshold, the electronic device 100 performs boosting of the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperatures.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, bone conduction sensor 180M may acquire a vibration signal of a human vocal tract vibrating bone pieces. The bone conduction sensor 180M may also contact the pulse of the human body to receive the blood pressure pulsation signal. In some embodiments, bone conduction sensor 180M may also be provided in a headset, in combination with an osteoinductive headset. The audio module 170 may analyze the voice signal based on the vibration signal of the sound portion vibration bone block obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may analyze the heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into, or removed from, the SIM card interface 195 to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
Next, the software structure of the electronic device 100 is described by way of example. A layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture may be employed.
Fig. 11 shows a software structure provided in an embodiment of the present application. The software architecture may include an application layer, a native (native) layer, a kernel layer. The native layer comprises a determining module and an adjusting module. The determining module is configured to perform the method embodiments shown in fig. 5A, 5B, 6A, 6B, 7A, and 7B. The adjustment module is used to perform the method embodiments shown in fig. 9B, 9C, and 9D.
Fig. 12 is a block diagram showing another software configuration of the electronic device 100.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 12, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, lock screens (not shown), payments (not shown), etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 12, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The Android runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to invoke, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), 2D graphics engine (e.g., SGL), face recognition image determination module (not shown), and face image adjustment module (not shown), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The face recognition image determination module may be used to perform the method embodiments shown in fig. 5A, 5B, 6A, 6B, 7A, and 7B. The face image adjustment module is used to perform the method embodiments shown in fig. 9B, 9C, and 9D.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the electronic device 100 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including information such as touch coordinates and a time stamp of the touch operation). The original input event is stored at the kernel layer. The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a touch click operation and the control corresponding to the click operation being the control of the camera application icon as an example, the camera application invokes an interface of the application framework layer to start the camera application, then starts the camera driver by invoking the kernel layer, and captures a still image or video through the camera 193.
Fig. 13 shows a flowchart of an image recognition method. The method applies to an electronic device configured with a first component and a second component, the first component including at least a first camera and a first display screen, and the second component including at least a second camera, where the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect. Reference may be made in particular to the implementation described above for the embodiment shown in fig. 2.
As shown in fig. 13, the method is as follows.
Step 1300, capturing a first image of an object to be verified through the first camera when an included angle exists between the second component and the first component;
step 1302, capturing a second image of the object to be verified by the second camera; wherein the capturing time of the first image and the second image is the same.
Step 1302 may be implemented with reference to the embodiments shown in fig. 6A, 6B, 7A, and 7B, which are not described herein.
Step 1304, verifying the identity of the object to be verified according to the first image and the second image; or sending the first image and the second image to another electronic device, so that the other electronic device verifies the identity of the object to be verified according to the first image and the second image.
When the identity of the object to be verified (the user) needs to be verified locally on the electronic device, the electronic device may verify the identity of the object to be verified according to the first image and the second image. When another electronic device (e.g., a remote service server) is required to verify the identity, the electronic device may send the first image and the second image to the other electronic device, so that the other electronic device performs the verification according to the two images.
For the verification process, refer to the descriptions of application scenarios such as screen unlocking, face-based application login, and face payment.
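The flow of steps 1300-1304 can be summarized in the following Kotlin sketch. It is a minimal illustration under assumed interfaces: Camera, captureImagePair, verifyLocally, and sendForVerification are hypothetical names, and the actual face-matching logic is not specified here.

// Hypothetical camera abstraction for the first and second cameras.
interface Camera { fun capture(): ByteArray }

class FaceVerificationFlow(
    private val firstCamera: Camera,  // on the first component (front view)
    private val secondCamera: Camera, // on the second component (side view)
) {
    // Steps 1300-1302: when the fold angle is present, capture both images,
    // one immediately after the other, so the capturing time is the same.
    fun captureImagePair(): Pair<ByteArray, ByteArray> =
        firstCamera.capture() to secondCamera.capture()

    // Step 1304, local variant: placeholder for the face-matching logic.
    fun verifyLocally(first: ByteArray, second: ByteArray): Boolean = false

    // Step 1304, remote variant: hand both images to another device.
    fun sendForVerification(
        first: ByteArray, second: ByteArray,
        send: (ByteArray, ByteArray) -> Unit,
    ) = send(first, second)
}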
In some embodiments, the electronic device is configured with a foldable screen. The implementation of the embodiment shown in fig. 3 may be referred to above and will not be described in detail here.
In some embodiments, the second component further comprises a second display, the first display and the second display being two separate displays. The implementation of the embodiment shown in fig. 4 may be referred to above and will not be described in detail herein.
In some embodiments, the first image is an image captured by the first camera when the angle at which the second component is bent relative to the first component is a preset first angle. For this implementation, refer to the embodiments shown in fig. 6A and 6B; details are not described here again.
In one example of these embodiments, the method shown in fig. 13 further comprises: displaying a progress bar on the first display screen, where the progress bar is used to prompt the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle. Reference may be made to the implementation of the embodiments shown in fig. 5B, 6B, 8A and 8B; details are not described here again.
In a further example, the progress bar is composed of a plurality of progress areas, each of which corresponds to a different one of a plurality of angles. The method of fig. 13 further comprises: when the second component is bent relative to the first component at any one of the plurality of angles, displaying the progress area corresponding to that angle in a preset display mode. Reference may be made to the implementation of the embodiments shown in fig. 5B, 6B, 7B, 8A, 8B and 8C; details are not described here again.
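As a sketch of how such a progress bar could be driven, the following Kotlin fragment maps the measured fold angle to progress areas; ProgressArea, buildProgressBar, and onFoldAngleChanged are hypothetical names, and the preset display mode is reduced to a boolean highlight.

// Hypothetical model: each progress area corresponds to one angle.
data class ProgressArea(val angleDeg: Int, var highlighted: Boolean = false)

// Build one area per step up to the preset first angle (e.g., 10, 20, ..., 90 degrees).
fun buildProgressBar(firstAngleDeg: Int, stepDeg: Int = 10): List<ProgressArea> =
    (stepDeg..firstAngleDeg step stepDeg).map { ProgressArea(it) }

// When the second component is bent to an area's angle, display that area
// in the preset display mode (reduced here to marking it highlighted).
fun onFoldAngleChanged(bar: List<ProgressArea>, foldAngleDeg: Int) {
    for (area in bar) area.highlighted = foldAngleDeg >= area.angleDeg
}

fun main() {
    val bar = buildProgressBar(firstAngleDeg = 90)
    onFoldAngleChanged(bar, foldAngleDeg = 45)
    println(bar.joinToString { "${it.angleDeg}:${if (it.highlighted) "on" else "off"}" })
    // prints 10:on, 20:on, 30:on, 40:on, 50:off, ..., 90:off
}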
In some embodiments, the first image is an image captured by the first camera when the included angle between the second component and the first component is a second angle. The method shown in fig. 13 further comprises: when the included angle between the second component and the first component is a third angle, capturing a third image of the object to be verified through the first camera, where the third angle is different from the second angle; and capturing a fourth image of the object to be verified through the second camera, where the capturing time of the third image and the fourth image is the same. Step 1304 then specifically includes: verifying the identity of the object to be verified according to the first image, the second image, the third image and the fourth image; or, the sending of the first image and the second image to other electronic equipment includes: sending the first image, the second image, the third image and the fourth image to the other electronic equipment, so that the other electronic equipment can verify the identity of the object to be verified according to the four images.
For this implementation, refer to the embodiments shown in fig. 7A and 7B; details are not described here again.
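A minimal Kotlin sketch of this multi-angle variant is given below, assuming a hypothetical hinge-angle reader and capture callback; the names and the one-degree tolerance are illustrative only.

import kotlin.math.abs

// Hypothetical sketch: one simultaneous image pair per target fold angle
// (the second and the third angle of this embodiment).
fun captureAtAngles(
    targetAnglesDeg: List<Float>,                  // e.g., listOf(secondAngle, thirdAngle)
    currentFoldAngleDeg: () -> Float,              // e.g., read from the hinge angle sensor
    capturePair: () -> Pair<ByteArray, ByteArray>, // first-/second-camera capture
    toleranceDeg: Float = 1.0f,
): List<Pair<ByteArray, ByteArray>> {
    val pairs = mutableListOf<Pair<ByteArray, ByteArray>>()
    for (target in targetAnglesDeg) {
        // Wait until the user has bent the device to (approximately) the target angle.
        while (abs(currentFoldAngleDeg() - target) > toleranceDeg) Thread.sleep(20)
        pairs += capturePair() // yields the first/second, then the third/fourth images
    }
    return pairs // verified locally, or sent to the other electronic equipment
}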
In some embodiments, the method of fig. 13 further comprises: displaying a positioning area on the first display screen, so that the object to be verified adjusts the position of its head image on the first display screen until the head image enters the positioning area.
For this implementation, refer to the embodiments shown in fig. 5B, 6B and 7B; details are not described here again.
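For illustration, the following Kotlin sketch checks whether the detected head image lies inside the positioning area; the Rect type and the function name are assumptions, not part of the embodiments.

// Hypothetical axis-aligned rectangle in screen coordinates.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

// True when the head image is fully inside the positioning area, at which
// point the capture described in the embodiments above can proceed.
fun isInsidePositioningArea(head: Rect, positioningArea: Rect): Boolean =
    head.left >= positioningArea.left && head.top >= positioningArea.top &&
    head.right <= positioningArea.right && head.bottom <= positioningArea.bottom

fun main() {
    val area = Rect(100, 100, 500, 600)
    val head = Rect(150, 180, 430, 520)
    println(isInsidePositioningArea(head, area)) // true
}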
In some embodiments, the method of fig. 13 further comprises: determining a first feature point of the face image in the first image; and adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
Reference may be made to the embodiments shown in fig. 9A, 9B, 9C, and 9D; details are not described here again.
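The adjustment can be pictured as a simple translation derived from the first feature point, as in the Kotlin sketch below; the canonical position, the Point type, and the assumption that the same offset is applied in both images are illustrative, since the embodiments do not fix the exact warping.

// Hypothetical sketch: derive a translation that moves the first feature
// point (e.g., a nose-tip landmark of the face image in the first image)
// to a canonical position, and apply it to the face-image positions.
data class Point(val x: Int, val y: Int)

fun translationTo(canonical: Point, featurePoint: Point): Point =
    Point(canonical.x - featurePoint.x, canonical.y - featurePoint.y)

fun shifted(p: Point, offset: Point): Point = Point(p.x + offset.x, p.y + offset.y)

fun main() {
    val canonical = Point(112, 112)        // target position for the feature point
    val featureInFirst = Point(130, 98)    // detected first feature point
    val offset = translationTo(canonical, featureInFirst)
    // The same offset adjusts the face-image position in the second image.
    println(shifted(featureInFirst, offset)) // Point(x=112, y=112)
}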
The image recognition method provided by this embodiment of the application can acquire a front-face image and a side-face image captured at the same time, which shortens the time needed to acquire front-face and side-face images during face recognition and improves user experience. In addition, because the front-face image and the side-face image are captured at the same time, the room for cheating the verification with a forged face is reduced, and the security level of face recognition is improved.
An embodiment of the application provides an image recognition apparatus 1400, which is disposed on an electronic device configured with a first component and a second component, where the first component includes at least a first camera and a first display screen, the second component includes at least a second camera, and the second component is bendable relative to the first component. When the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect. The apparatus 1400 includes:
a first capturing unit 1410, configured to capture, when an included angle is formed between the second component and the first component, a first image of an object to be verified by the first camera;
a second capturing unit 1420 configured to capture a second image of the object to be authenticated by the second camera; wherein the capturing time of the first image and the second image is the same;
a verification unit 1430 for verifying the identity of the object to be verified according to the first image and the second image; or sending the first image and the second image to other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image and the second image.
In some embodiments, the electronic device is configured with a foldable screen.
In some embodiments, the second component further comprises a second display, the first display and the second display being two separate displays.
In some embodiments, the first image is an image captured by the first capturing unit through the first camera when the angle at which the second component is bent relative to the first component is a preset first angle.
In one example of these embodiments, the apparatus 1400 further comprises a first display unit (not shown), configured to display a progress bar on the first display screen, where the progress bar is used to prompt the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
In a further example, the progress bar is composed of a plurality of progress areas, each of which corresponds to a different one of a plurality of angles. The first display unit is further configured to: when the second component is bent relative to the first component at any one of the plurality of angles, display the progress area corresponding to that angle in a preset display mode.
In some embodiments, the first image is an image captured by the first capturing unit through the first camera when the included angle between the second component and the first component is a second angle; the first capturing unit 1410 is further configured to capture, when the included angle between the second component and the first component is a third angle, a third image of the object to be verified through the first camera, where the third angle is different from the second angle; the second capturing unit 1420 is further configured to capture, through the second camera, a fourth image of the object to be verified, where the capturing time of the third image and the fourth image is the same; and the verification unit 1430 is further configured to verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image, or to send the first image, the second image, the third image and the fourth image to the other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the four images.
In some embodiments, the apparatus 1400 further comprises a second display unit (not shown), configured to display a positioning area on the first display screen, so that the object to be verified adjusts the position of its head image on the first display screen until the head image enters the positioning area.
In some embodiments, the apparatus 1400 further comprises: a determining unit 1440, configured to determine a first feature point of a face image of the first image; an adjusting unit 1450 is configured to adjust a position of the face image in the first image and a position of the face image in the second image according to the position of the first feature point.
The image recognition apparatus provided by this embodiment of the application can acquire a front-face image and a side-face image captured at the same time, which shortens the time needed to acquire front-face and side-face images during face recognition and improves user experience. In addition, because the front-face image and the side-face image are captured at the same time, the room for cheating the verification with a forged face is reduced, and the security level of face recognition is improved.
The foregoing description of the apparatus provided in the embodiments of the present application has been presented mainly in terms of the method flow. It will be appreciated that, to implement the above-described functionality, each electronic device includes corresponding hardware structures and/or software modules that perform the respective functions. Those skilled in the art will readily appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In this embodiment of the application, the functional modules of the electronic device may be divided according to the method embodiments shown in fig. 5A, 5B, 6A, 6B, 7A, 7B, 9C, and 9D; for example, one functional module may be provided for each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of the modules in this embodiment is illustrative and is merely a division of logical functions; other division manners may be used in actual implementation.
An embodiment of the application provides an electronic device. Referring to fig. 15, the electronic device includes a processor 1510, a memory 1520, a transceiver 1530, a first component including a first camera 1540 and a first display 1550, and a second component including a second camera 1560. The second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera 1540 and the acquisition area of the second camera 1560 intersect.
The memory is used to store computer instructions. When the electronic device runs, the processor executes the computer instructions to cause the electronic device to perform the method of fig. 13. Specifically, when there is an included angle between the second component and the first component, the processor 1510 captures a first image of an object to be verified through the first camera 1540; the processor 1510 captures a second image of the object to be verified through the second camera 1560, where the capturing time of the first image and the second image is the same; and the processor 1510 verifies the identity of the object to be verified according to the first image and the second image, or the transceiver 1530 sends the first image and the second image to other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image and the second image.
In some embodiments, the electronic device further includes a communication bus 1570. The processor 1510 may be connected to the memory 1520, the transceiver 1530, the first camera 1540, the first display 1550, and the second camera 1560 through the communication bus 1570, so that it can control the transceiver 1530, the first camera 1540, the first display 1550, and the second camera 1560 according to the computer-executable instructions stored in the memory 1520.
For specific implementations of each component/device of the electronic device in the embodiments of the present application, refer to the method embodiments shown in fig. 13 and fig. 5A, 5B, 6A, 6B, 7A, 7B, 9C, and 9D, or to the electronic devices shown in fig. 2, 3, and 4; details are not described here again.
Therefore, the electronic device provided by this embodiment of the application can acquire a front-face image and a side-face image captured at the same time, which shortens the time needed to acquire front-face and side-face images during face recognition and improves user experience. In addition, because the front-face image and the side-face image are captured at the same time, the room for cheating the verification with a forged face is reduced, and the security level of face recognition is improved.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (random access memory, RAM), flash memory, read-only memory (read-only memory, ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid-state drive (SSD)), or the like.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.

Claims (25)

1. An image recognition method, characterized by being applied to an electronic device provided with a first component and a second component, wherein the first component comprises at least a first camera and a first display screen, the second component comprises at least a second camera, and the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect; the method comprises the following steps:
prompting a user to bend the second component relative to the first component at a first angle;
when the angle at which the second component is bent relative to the first component is the first angle, capturing a first image of an object to be verified through the first camera;
capturing a second image of the object to be verified through the second camera; wherein the capturing time of the first image and the second image is the same;
verifying the identity of the object to be verified according to the first image and the second image; or sending the first image and the second image to other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image and the second image.
2. The method of claim 1, wherein the electronic device is configured with a foldable screen.
3. The method of claim 1, wherein the second component further comprises a second display, the first display and the second display being two separate displays.
4. The method of claim 1, wherein the prompting the user to bend the second component relative to the first component at the first angle comprises: displaying a progress bar on the first display screen, wherein the progress bar is used to prompt the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
5. The method of claim 4, wherein the progress bar is comprised of a plurality of progress areas, each of the plurality of progress areas corresponding to a different one of a plurality of angles;
the method further comprises the steps of: when the second component is bent at any one of the plurality of angles relative to the first component, displaying the progress area corresponding to that angle in a preset display mode.
6. The method of claim 1, wherein the first image is an image captured by the first camera when an angle between the second component and the first component is a second angle; the method further comprises the steps of:
when the included angle between the second component and the first component is a third angle, capturing a third image of the object to be verified through the first camera, wherein the third angle and the second angle are different;
capturing a fourth image of the object to be verified through the second camera; wherein the capturing time of the third image and the fourth image is the same;
the verifying the identity of the object to be verified according to the first image and the second image comprises: verifying the identity of the object to be verified according to the first image, the second image, the third image and the fourth image; or,
the sending the first image and the second image to other electronic devices includes: and sending the first image, the second image, the third image and the fourth image to the other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image.
7. The method according to claim 1, wherein the method further comprises:
displaying a positioning area on the first display screen, so that the object to be verified adjusts the position of its head image on the first display screen until the head image enters the positioning area.
8. The method according to claim 1, wherein the method further comprises:
determining a first feature point of a face image of the first image;
and adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
9. An image recognition apparatus, characterized by being disposed on an electronic device provided with a first component and a second component, wherein the first component comprises at least a first camera and a first display screen, the second component comprises at least a second camera, and the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect; the apparatus comprises:
a prompting unit, configured to prompt a user to bend the second component relative to the first component at a first angle;
a first capturing unit, configured to capture a first image of an object to be verified through the first camera when the angle at which the second component is bent relative to the first component is the first angle;
a second capturing unit, configured to capture a second image of the object to be authenticated by using the second camera; wherein the capturing time of the first image and the second image is the same;
a verification unit, configured to verify the identity of the object to be verified according to the first image and the second image, or to send the first image and the second image to other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image and the second image.
10. The apparatus of claim 9, wherein the electronic device is configured with a foldable screen.
11. The apparatus of claim 9, wherein the second component further comprises a second display, the first display and the second display being two separate displays.
12. The apparatus of claim 9, wherein the apparatus further comprises: a first display unit, configured to display a progress bar on the first display screen, wherein the progress bar is used to prompt the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
13. The apparatus of claim 12, wherein the progress bar is comprised of a plurality of progress areas, each of the plurality of progress areas corresponding to a different one of a plurality of angles;
the first display unit is further configured to: when the second component is bent at any one of the plurality of angles relative to the first component, display the progress area corresponding to that angle in a preset display mode.
14. The apparatus of claim 9, wherein the first image is an image captured by the first capturing unit through the first camera when an included angle between the second component and the first component is a second angle;
the first capturing unit is further configured to capture, when an included angle between the second component and the first component is a third angle, a third image of the object to be verified by the first camera, where the third angle and the second angle are different;
the second capturing unit is further configured to capture a fourth image of the object to be verified through the second camera; wherein the capturing time of the third image and the fourth image is the same;
the verification unit is further configured to verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image; or to send the first image, the second image, the third image and the fourth image to the other electronic equipment, so that the other electronic equipment can verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image.
15. The apparatus of claim 9, wherein the apparatus further comprises: a second display unit, configured to display a positioning area on the first display screen, so that the object to be verified adjusts the position of its head image on the first display screen until the head image enters the positioning area.
16. The apparatus of claim 9, wherein the apparatus further comprises:
a determining unit, configured to determine a first feature point of a face image of the first image;
and the adjusting unit is used for adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first characteristic point.
17. An electronic device, comprising: a processor, a memory, a first component and a second component, wherein the first component comprises at least a first camera and a first display screen, and the second component comprises at least a second camera; the second component is bendable relative to the first component; when the second component is bent relative to the first component, the acquisition area of the first camera and the acquisition area of the second camera intersect;
the memory is used for storing computer instructions;
when the electronic device is running, the processor executes the computer instructions to cause the electronic device to perform:
prompting a user to bend the second component relative to the first component at a first angle;
when the angle at which the second component is bent relative to the first component is the first angle, capturing a first image of an object to be verified through the first camera;
capturing a second image of the object to be verified through the second camera; wherein the capturing time of the first image and the second image is the same;
verifying the identity of the object to be verified according to the first image and the second image; or sending the first image and the second image to other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image and the second image.
18. The electronic device of claim 17, wherein the electronic device is configured with a foldable screen.
19. The electronic device of claim 17, wherein the second component further comprises a second display, the first display and the second display being two separate displays.
20. The electronic device of claim 17, wherein the processor executes the computer instructions such that the electronic device further performs: displaying a progress bar on the first display screen, wherein the progress bar is used to prompt the bending angle of the second component relative to the first component, so that the object to be verified performs a bending operation until the bending angle of the second component relative to the first component reaches the first angle.
21. The electronic device of claim 20, wherein the progress bar is comprised of a plurality of progress areas, each of the plurality of progress areas corresponding to a different one of a plurality of angles;
the processor executes the computer instructions to cause the electronic device to further perform: when the second component is bent at any one of the plurality of angles relative to the first component, displaying the progress area corresponding to that angle in a preset display mode.
22. The electronic device of claim 17, wherein the first image is an image captured by the first camera when an included angle between the second component and the first component is a second angle;
the processor executes the computer instructions to cause the electronic device to further perform:
when the included angle between the second component and the first component is a third angle, capturing a third image of the object to be verified through the first camera, wherein the third angle and the second angle are different;
capturing a fourth image of the object to be verified through the second camera; wherein the capturing time of the third image and the fourth image is the same;
verifying the identity of the object to be verified according to the first image, the second image, the third image and the fourth image; or sending the first image, the second image, the third image and the fourth image to the other electronic equipment so that the other electronic equipment can verify the identity of the object to be verified according to the first image, the second image, the third image and the fourth image.
23. The electronic device of claim 17, wherein the processor executes the computer instructions such that the electronic device further performs:
displaying a positioning area on the first display screen, so that the object to be verified adjusts the position of its head image on the first display screen until the head image enters the positioning area.
24. The electronic device of claim 17, wherein the processor executes the computer instructions such that the electronic device further performs:
determining a first feature point of a face image of the first image;
and adjusting the position of the face image in the first image and the position of the face image in the second image according to the position of the first feature point.
25. A computer storage medium, comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-8.
CN201911032707.XA 2019-10-28 2019-10-28 Image recognition method and electronic equipment Active CN111027374B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911032707.XA CN111027374B (en) 2019-10-28 2019-10-28 Image recognition method and electronic equipment
PCT/CN2020/108709 WO2021082620A1 (en) 2019-10-28 2020-08-12 Image recognition method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911032707.XA CN111027374B (en) 2019-10-28 2019-10-28 Image recognition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111027374A CN111027374A (en) 2020-04-17
CN111027374B true CN111027374B (en) 2023-06-30

Family

ID=70200125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911032707.XA Active CN111027374B (en) 2019-10-28 2019-10-28 Image recognition method and electronic equipment

Country Status (2)

Country Link
CN (1) CN111027374B (en)
WO (1) WO2021082620A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027374B (en) * 2019-10-28 2023-06-30 Huawei Device Co., Ltd. Image recognition method and electronic equipment
CN112733804B (en) * 2021-01-29 2024-01-19 闽江学院 Image pick-up device for measuring human body parameters

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658572A (en) * 2018-12-21 2019-04-19 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110134459A (en) * 2019-05-15 2019-08-16 Oppo广东移动通信有限公司 Using starting method and Related product
CN110225244A (en) * 2019-05-15 2019-09-10 华为技术有限公司 A kind of image capturing method and electronic equipment
CN110244840A (en) * 2019-05-24 2019-09-17 华为技术有限公司 Image processing method, relevant device and computer storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9349342B2 (en) * 2012-03-05 2016-05-24 Beijing Lenovo Software Ltd. Display method and electronic device
CN102708383B (en) * 2012-05-21 2014-11-26 Guangzhou Pixel Data Technology Development Co., Ltd. System and method for detecting living face with multi-mode contrast function
CN105590097B (en) * 2015-12-17 2019-01-25 Chongqing University of Posts and Telecommunications Dual-camera collaboration real-time face identification security system and method under night-vision conditions
CN108764179A (en) * 2018-05-31 2018-11-06 惠州市德赛西威汽车电子股份有限公司 A kind of shared automobile unlocking method and system based on face recognition technology
CN110248081A (en) * 2018-10-12 2019-09-17 华为技术有限公司 Image capture method and electronic equipment
CN109543521A (en) * 2018-10-18 2019-03-29 Tianjin University Liveness detection and face identification method combining front and side views
CN109688253B (en) * 2019-02-28 2021-05-18 维沃移动通信有限公司 Shooting method and terminal
CN110360972B (en) * 2019-07-10 2020-12-08 Oppo广东移动通信有限公司 Calibration method and device of angle sensor, terminal and storage medium
CN111027374B (en) * 2019-10-28 2023-06-30 华为终端有限公司 Image recognition method and electronic equipment

Also Published As

Publication number Publication date
CN111027374A (en) 2020-04-17
WO2021082620A1 (en) 2021-05-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant