CN110852217B - Face recognition method and electronic equipment - Google Patents

Face recognition method and electronic equipment

Info

Publication number
CN110852217B
Authority
CN
China
Prior art keywords
target
camera
face image
matching
electronic device
Prior art date
Legal status
Active
Application number
CN201911046236.8A
Other languages
Chinese (zh)
Other versions
CN110852217A (en)
Inventor
刘旭东
杜莉莉
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911046236.8A
Publication of CN110852217A
Application granted
Publication of CN110852217B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Embodiments of the present invention provide a face recognition method and an electronic device, applied to the field of communication technology, to solve the problems of high power consumption or recognition failure in conventional face recognition. The method includes: when a target parameter meets a predetermined condition, extending a camera of the electronic device and capturing a face image to be recognized; and performing, by the electronic device, a target operation when the face image to be recognized matches a preset face image. The target parameter characterizes a security level of the electronic device and/or an amount of light in its environment.

Description

Face recognition method and electronic equipment
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to a face recognition method and an electronic device.
Background
With the development of electronic device technology, users use electronic devices more and more frequently, and their requirements for the system security of these devices also keep rising.
At present, most electronic devices on the market provide a face recognition function: when performing face recognition, the device uses a pre-stored matching template to match a captured face image to be recognized against a preset face image.
However, conventional face recognition is accurate only when the external environment is stable and satisfies predetermined conditions. When the environment of the electronic device is unfavorable, the recognition performance of the device is poor.
Disclosure of Invention
Embodiments of the present invention provide a face recognition method and an electronic device to solve the problem of a low success rate in conventional face recognition.
To solve the above technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides a face recognition method, including: when a target parameter meets a predetermined condition, extending a camera of the electronic device and capturing a face image to be recognized; and performing, by the electronic device, a target operation when the face image to be recognized matches a preset face image, where the target parameter characterizes a security level of the electronic device and/or an amount of light in its environment.
In a second aspect, an embodiment of the present invention further provides an electronic device including an execution module. The execution module is configured to extend the camera and capture a face image to be recognized when a target parameter meets a predetermined condition, and is further configured to perform a target operation when a matching module determines that the face image to be recognized matches a preset face image. The target parameter characterizes a security level of the electronic device and/or an amount of light in its environment.
In a third aspect, an embodiment of the present invention provides an electronic device including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the face recognition method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of face recognition as described in the first aspect.
In the embodiment of the present invention, because the target parameter characterizes the security level of the electronic device and/or the amount of ambient light, the electronic device controls the camera to extend when the target parameter meets the predetermined condition, captures the face image to be recognized for face recognition, and performs the corresponding target operation when the face image to be recognized matches the preset face image. In this way, the electronic device performs recognition on a face image captured in a manner suited to the current environment, which greatly improves the recognition success rate and avoids potential security risks in the face recognition process.
Drawings
Fig. 1 is a schematic diagram of a possible architecture of an Android operating system according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, rather than all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the application.
In this document, "/" means "or"; for example, A/B may mean A or B. "And/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone.
It should be noted that "plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, instance, or illustration. Any embodiment or design described as "exemplary" or "for example" should not be construed as being preferred or more advantageous than other embodiments or designs; rather, these words are intended to present related concepts in a concrete manner.
It should also be noted that, to describe the technical solutions clearly, the words "first", "second", and so on are used to distinguish between items that are identical or similar in function or effect. Those skilled in the art will understand that these words do not limit quantity or execution order; for example, a first threshold and a second threshold are merely different thresholds, not thresholds in a particular order.
The face recognition method provided by the embodiment of the present invention may be executed by an electronic device (mobile or non-mobile), or by a functional module and/or functional entity in the electronic device capable of implementing the method, which may be determined according to actual use requirements. The following description takes an electronic device as an example.
The electronic device in the embodiment of the present invention may be a terminal device, which may be mobile or non-mobile. A mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; a non-mobile terminal device may be a personal computer (PC), a television (TV), an automated teller machine, a self-service kiosk, or the like. The embodiment of the present invention is not specifically limited in this respect.
The camera of the electronic device in the embodiment of the present invention can extend and pop out (commonly called a pop-up camera). For example, the pop-up camera may be a pop-up under-screen camera, which may be in a pop-up state or a non-pop-up state. When the pop-up under-screen camera is in the non-pop-up state (that is, the under-screen state), an opening exists in the screen area where it is located, so it can still take photographs. The pop-up under-screen camera may take various forms, such as lifting, side-rotating, side-popping, or sliding-cover, which is not limited in the embodiment of the present invention.
For example, because the amount of light entering the camera in the pop-up state is greater than the amount of light entering it in the non-pop-up state, the resolution of an image captured by the pop-up under-screen camera in the pop-up state is higher than that of an image captured in the non-pop-up state.
The electronic device in the embodiment of the present invention may be an electronic device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present invention.
The software environment to which the face recognition method provided by the embodiment of the invention is applied is described below by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, respectively: an application program layer, an application program framework layer, a system runtime layer and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third party application programs) in the android operating system.
The application framework layer is a framework of applications, and developers can develop some applications based on the application framework layer while adhering to the development principle of the framework of the applications.
The system runtime layer includes libraries (also referred to as system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of the android operating system, and belongs to the bottommost layer of the software hierarchy of the android operating system. The kernel layer provides core system services and a driver related to hardware for the android operating system based on a Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the face recognition method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the face recognition method may be operated based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can realize the face recognition method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the face recognition method according to an embodiment of the present invention with reference to the flowchart shown in Fig. 2. The method includes steps 201 and 202:
Step 201: when the target parameter meets the predetermined condition, extend the camera of the electronic device and capture the face image to be recognized.
In an embodiment of the present invention, the target parameter characterizes a security level of the electronic device and/or an amount of light in the environment in which the electronic device is located. Illustratively, the target parameter includes at least one of: security level information and ambient light quantity information.
Optionally, in the embodiment of the present invention, the electronic device may collect the target parameter periodically, collect it actively when face recognition is required, or collect it in real time, which is not limited in the embodiment of the present invention. In one example, the electronic device may set an acquisition period and, when performing face recognition, determine the target matching template according to the most recently acquired target parameter.
Optionally, in the embodiment of the present invention, an image including an image of a face of a user (i.e. the image of the face to be identified) may be acquired by using a camera in the electronic device, so as to extract facial feature information from the image.
Optionally, in the embodiment of the present invention, the amount of ambient light may be obtained by a light sensor in the electronic device, or may be estimated from the camera preview obtained before the image is captured; this may be set according to actual requirements and is not limited in the embodiment of the present invention.
Step 202: when the face image to be recognized matches the preset face image, the electronic device performs the target operation.
Optionally, in the embodiment of the present invention, the face image to be recognized matching the preset face image means that the two images are the same, or that their similarity is greater than or equal to a preset threshold. For example, the preset threshold may be set to 95%; that is, if the similarity between the face image to be recognized and the preset face image is greater than or equal to 95%, the captured face image is considered to match the preset face image.
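The threshold rule above can be illustrated with a minimal sketch. This is an assumption-laden illustration rather than the patent's implementation: the feature-vector representation, the cosine-similarity measure, and the function names `similarity` and `matchesPreset` are invented here for clarity; only the 95% threshold comes from the passage above.

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two face feature vectors; the feature extraction
// itself (image -> vector) is assumed to happen elsewhere.
fun similarity(a: FloatArray, b: FloatArray): Double {
    require(a.size == b.size) { "Feature vectors must have the same length" }
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// "Matched" as described above: identical, or similarity at or above the preset
// threshold (95% in the example).
fun matchesPreset(candidate: FloatArray, preset: FloatArray, threshold: Double = 0.95): Boolean =
    candidate.contentEquals(preset) || similarity(candidate, preset) >= threshold
```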
Optionally, in an embodiment of the present invention, the target operation includes at least one of: opening a target application, unlocking the screen, and completing a payment.
In the embodiment of the present invention, because the target parameter characterizes the security level of the electronic device and/or the amount of ambient light, the electronic device controls the camera to extend when the target parameter meets the predetermined condition, captures the face image to be recognized for face recognition, and performs the corresponding target operation when the face image to be recognized matches the preset face image. In this way, the electronic device performs recognition on a face image captured in a manner suited to the current environment, which greatly improves the recognition success rate and avoids potential security risks in the face recognition process.
Optionally, in an embodiment of the present invention, the camera is an under-screen camera when not extended, and the predetermined condition includes at least one of the following: the amount of ambient light is less than a first threshold; the security level of the electronic device is greater than or equal to a predetermined level.
Optionally, in the embodiment of the present invention, the method further includes the following step A1 after step 201:
Step A1: the electronic device retracts the camera when the face image to be recognized has been captured, or when the face image to be recognized matches the preset face image.
Compared with a conventional under-screen camera, which requires a large amount of ambient light during face recognition, the under-screen camera in the embodiment of the present invention can adjust its capture mode according to the amount of ambient light, so that a suitable face image to be recognized is captured for recognition. This improves the recognition success rate, allows the usage state of the camera to be scheduled reasonably, and saves power on the electronic device. For example, when the amount of ambient light is small, the camera in the extended state is used to capture the face image to be recognized; its light-admitting area is larger and its demand on ambient light is lower, so the resolution of the captured image is improved. When the amount of ambient light is large, the camera in the non-extended state is sufficient to capture the face image to be recognized and perform face recognition.
In this way, the electronic device can flexibly decide whether to extend or retract the camera according to the current target parameter, which greatly improves the recognition success rate and reduces the power consumption of the electronic device.
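As an illustration only (not the patented implementation), the extend/retract decision described above might be sketched as follows; the `PopUpCamera` interface, the parameter fields, and the constants `LIGHT_THRESHOLD_LUX` and `PREDETERMINED_SECURITY_LEVEL` are hypothetical names and values chosen for the sketch:

```kotlin
// Illustrative abstraction of a pop-up under-screen camera; a real device would
// drive the pop-up mechanism and the camera hardware here.
interface PopUpCamera {
    fun extend()
    fun retract()
    fun captureUnderScreen(): ByteArray   // capture through the screen opening (non-extended state)
    fun captureExtended(): ByteArray      // capture in the extended (popped-up) state
}

// Target parameter as described above: a security level and/or the ambient light amount.
data class TargetParameter(val securityLevel: Int, val ambientLightLux: Float)

// Assumed threshold values, for illustration only.
const val LIGHT_THRESHOLD_LUX = 50f
const val PREDETERMINED_SECURITY_LEVEL = 2

// Predetermined condition: ambient light below the first threshold, or a security
// level at or above the predetermined level.
fun predeterminedConditionMet(p: TargetParameter): Boolean =
    p.ambientLightLux < LIGHT_THRESHOLD_LUX || p.securityLevel >= PREDETERMINED_SECURITY_LEVEL

// Extend the camera only when the condition holds; retract it once the image has
// been captured (step A1 above). Otherwise capture in the under-screen state.
fun captureFaceImage(camera: PopUpCamera, p: TargetParameter): ByteArray =
    if (predeterminedConditionMet(p)) {
        camera.extend()
        try { camera.captureExtended() } finally { camera.retract() }
    } else {
        camera.captureUnderScreen()
    }
```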
Optionally, in an embodiment of the present invention, the method includes step B1 after step 201:
Step B1: the electronic device matches the face image to be recognized with the preset face image according to the target matching template.
In the embodiment of the invention, the matching template is used for face recognition matching. It can be appreciated that the electronic device may extract facial feature information that needs to be matched from the face image to be identified based on the matching template.
Optionally, in the embodiment of the present invention, one or more matching templates are pre-stored in the electronic device, and each matching template has a different matching precision. Illustratively, the target matching template is the one of these templates that matches the target parameter.
For example, the electronic device may pre-configure a matching template list that records the correspondence between matching templates and parameters, that is, a plurality of matching templates and a plurality of parameters or parameter combinations, where each matching template corresponds to one parameter or one parameter combination. After obtaining the target parameter, the electronic device may use it as an index to retrieve the corresponding target matching template from the matching template list.
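For illustration, the matching template list can be modeled as a lookup table keyed by a parameter or parameter combination. Everything below is a hypothetical sketch: the key structure, the template names, and the feature-point counts are assumptions (the 300/150 counts merely echo the example given later in this description):

```kotlin
// A matching template whose precision is characterized by the number of facial
// feature points that must be matched (see the discussion of feature points below).
data class MatchingTemplate(val name: String, val featurePointCount: Int)

// Hypothetical parameter combination used as the index into the template list.
data class ParameterKey(val securityLevel: Int, val lowLight: Boolean)

// Pre-configured correspondence between parameters (or parameter combinations)
// and matching templates; keys, names, and counts are invented for the sketch.
val matchingTemplateList: Map<ParameterKey, MatchingTemplate> = mapOf(
    ParameterKey(securityLevel = 2, lowLight = false) to MatchingTemplate("template-1", 300),
    ParameterKey(securityLevel = 2, lowLight = true)  to MatchingTemplate("template-1", 300),
    ParameterKey(securityLevel = 1, lowLight = false) to MatchingTemplate("template-1", 300),
    ParameterKey(securityLevel = 1, lowLight = true)  to MatchingTemplate("template-2", 150)
)

// Retrieve the target matching template using the target parameter as the index.
fun lookupTemplate(key: ParameterKey): MatchingTemplate? = matchingTemplateList[key]
```

With such a table, determining the target matching template reduces to a single lookup indexed by the obtained target parameter, mirroring the retrieval described above.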
Optionally, in the embodiment of the present invention, each matching template is adapted to a different application scenario of the electronic device, and the scenario corresponding to each template is a combination of one or more parameters of the electronic device, where these parameters characterize the security level of the electronic device and/or the amount of ambient light. Reference may be made to the following examples:
Example 1: the electronic device may determine the security level of the electronic device based on whether face recognition payment is currently involved (i.e. whether payment rights are currently required to be opened), i.e. the electronic device may determine the security level of the electronic device based on application rights information that the electronic device is opened. For example, when the electronic device is in a payment transaction state, determining that the current security level of the electronic device is higher; otherwise, when the electronic equipment is not in the payment transaction state, the corresponding security level of the electronic equipment is determined to be lower.
Example 2: the electronic device may determine a security level of the electronic device based on usage time information according to usage of the electronic device. For example, at 24:00 midnight, the user is typically in a safe environment at home, at which point it may be determined that the security level of the electronic device is low; on the contrary, when the time is 12:00 noon, the user is generally in an outdoor place and easily exposes the secret information of the electronic equipment to other people, and at the moment, the security level of the electronic equipment can be determined to be high.
Example 3: the electronic device may determine a security level of the electronic device based on geographic location information from the electronic device. For example, when the electronic device is used at a place where the electronic device first appears, the security level of the electronic device may be determined to be high; conversely, when the electronic device is used at a location where the electronic device is often present, it may be determined that the security level of the electronic device is low.
It should be noted that, when the above-mentioned target parameter is used to characterize the security level and the ambient light amount, the above-mentioned target parameter is also used to embody that the security level information of the electronic device is prioritized over the ambient light amount information, that is, the electronic device may preferentially use the security level to determine the matching template when acquiring the security level information and the ambient light amount information.
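A hypothetical sketch combining the three example signals and the priority rule is given below; the two-level scale, the time window, the location heuristic, and all names are assumptions made for illustration, not part of the claimed method:

```kotlin
import java.time.LocalTime

// Hypothetical usage signals corresponding to examples 1 to 3 above.
data class UsageContext(
    val inPaymentTransaction: Boolean,   // example 1: payment permission requested
    val timeOfUse: LocalTime,            // example 2: usage time information
    val atFrequentLocation: Boolean      // example 3: geographic location information
)

// Illustrative two-level scale: 1 = lower security level, 2 = higher security level.
fun determineSecurityLevel(ctx: UsageContext): Int = when {
    ctx.inPaymentTransaction -> 2            // payment always raises the level
    !ctx.atFrequentLocation -> 2             // first-seen location: higher level
    ctx.timeOfUse.hour in 9..18 -> 2         // daytime, likely outdoors: higher level
    else -> 1                                // late night at a familiar place: lower level
}

// Security level information takes priority over ambient light information when
// both are available, as stated above.
fun selectionBasis(securityLevel: Int?, ambientLightLux: Float?): String = when {
    securityLevel != null   -> "select the template by security level $securityLevel"
    ambientLightLux != null -> "select the template by ambient light ($ambientLightLux lux)"
    else                    -> "fall back to a default template"
}
```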
Optionally, in the embodiment of the present invention, the electronic device extracts the facial feature information to be recognized from the face image to be recognized according to the target matching template, determines the preset facial feature information from the preset face image, and then matches the two.
Optionally, in an embodiment of the present invention, in a case where the target parameter is used to characterize the security level, the matching precision of the target matching template is proportional to the security level of the electronic device. That is, the higher the security level of the electronic device is, the higher the matching precision of the corresponding matching template, and conversely, the lower the security level of the electronic device is, the lower the matching precision of the corresponding matching template.
Optionally, in an embodiment of the present invention, in a case where the target parameter is used to characterize the light quantity of the environment, the matching precision of the target matching template is proportional to the light quantity of the environment of the electronic device. That is, the greater the light amount of the current environment of the electronic device, the higher the matching accuracy of the corresponding matching template, whereas the smaller the light amount of the current environment of the electronic device, the lower the matching accuracy of the corresponding matching template.
Therefore, when capturing the face image to be recognized for face recognition, the electronic device no longer relies on the single fixed template of the related art; instead, it determines the target matching template according to the target parameter. This makes the configured matching template better suited to the current usage scenario, improves the success rate of face recognition, and avoids potential security risks in the recognition process.
Further optionally, in an embodiment of the present invention, the number of matching feature points in the target matching template is used to characterize matching accuracy of the target matching template. The matching feature points in the matching template are feature points that need to be matched in the face matching process. In general, the higher the matching accuracy of the matching template is, the more the number of matching feature points to be matched is, and the higher the recognition accuracy is, whereas the lower the matching accuracy of the matching template is, the fewer the number of matching feature points to be matched is, and the lower the recognition accuracy is.
Illustratively, the matching feature points are typically facial feature points, such as the eyes, eyebrows, and facial contour.
For example, assume that two matching templates are pre-stored in the electronic device: matching template 1 and matching template 2. Template 1 is a high-precision (high-priority) template and template 2 is a low-precision (low-priority) template; specifically, template 1 has 300 feature points and template 2 has 150 feature points.
On this basis, when the electronic device is in a payment transaction state, that is, when its current security level is high, the electronic device selects template 1 with the higher matching precision for face matching in order to ensure payment security. When the amount of light in the current environment is weak, the electronic device may select template 2 with the lower matching precision so that face recognition can still succeed.
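To make the relationship between feature-point count and matching precision concrete, the sketch below matches only as many feature points as the selected template prescribes. The 300/150 counts and the payment/weak-light selection follow the example above; the 2D point representation, the per-point tolerance, and the 95% hit ratio are assumptions made for illustration:

```kotlin
import kotlin.math.hypot

// A 2D facial feature point; real systems use richer descriptors.
data class FeaturePoint(val x: Float, val y: Float)

// Template precision characterized by the number of feature points to match.
data class MatchingTemplate(val name: String, val featurePointCount: Int)

val template1 = MatchingTemplate("template 1 (high precision)", 300)
val template2 = MatchingTemplate("template 2 (low precision)", 150)

// Illustrative per-point tolerance and required hit ratio.
const val POINT_TOLERANCE = 2.0f
const val REQUIRED_HIT_RATIO = 0.95

// Match only as many feature points as the selected template prescribes.
fun facesMatch(
    candidate: List<FeaturePoint>,
    preset: List<FeaturePoint>,
    template: MatchingTemplate
): Boolean {
    val n = minOf(template.featurePointCount, candidate.size, preset.size)
    if (n == 0) return false
    val hits = (0 until n).count { i ->
        hypot(candidate[i].x - preset[i].x, candidate[i].y - preset[i].y) <= POINT_TOLERANCE
    }
    return hits.toDouble() / n >= REQUIRED_HIT_RATIO
}

// Selection rule from the example above: payment state -> template 1 (higher
// precision); weak ambient light -> template 2 (lower precision).
fun selectTemplate(inPayment: Boolean, weakLight: Boolean): MatchingTemplate = when {
    inPayment -> template1
    weakLight -> template2
    else      -> template1
}
```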
In this way, the electronic device can flexibly select matching templates of different precision for face matching according to the user's habits and environment, improving the recognition rate while avoiding security risks.
Fig. 3 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention. As shown in Fig. 3, the electronic device 300 includes an execution module 301. The execution module 301 is configured to extend the camera of the electronic device 300 and capture the face image to be recognized when the target parameter meets a predetermined condition, and is further configured to perform the target operation when the face image to be recognized matches the preset face image.
Optionally, in an embodiment of the present invention, the camera is an under-screen camera when not extended, and the predetermined condition includes at least one of the following: the amount of ambient light is less than a first threshold; the security level is greater than or equal to a predetermined level.
Optionally, in the embodiment of the present invention, the executing module 301 is further configured to retract the camera when the face image to be recognized is acquired or the face image to be recognized is matched with the preset face image.
Optionally, in an embodiment of the present invention, the electronic device further includes: a matching module 302; the matching module is used for matching the face image to be identified with a preset face image according to a target matching template matched with the target parameter.
Under the condition that the target parameter is used for representing the security level, the matching precision of the target matching template is in direct proportion to the security level of the electronic equipment; or in the case that the target parameter is used to characterize the light quantity of the environment, the matching accuracy of the target matching template is proportional to the light quantity of the environment of the electronic device.
Optionally, in an embodiment of the present invention, the number of matching feature points in the target matching template is used to characterize matching accuracy of the target matching template.
According to the electronic device provided by the embodiment of the present invention, because the target parameter characterizes the security level of the electronic device and/or the amount of ambient light, the electronic device controls the camera to extend when the target parameter meets the predetermined condition, captures the face image to be recognized for face recognition, and performs the corresponding target operation when the face image to be recognized matches the preset face image. In this way, the electronic device performs recognition on a face image captured in a manner suited to the current environment, which greatly improves the recognition success rate and avoids potential security risks in the face recognition process.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiment, and in order to avoid repetition, details are not repeated here.
It should be noted that, as shown in fig. 3, modules that are necessarily included in the electronic device 300 are illustrated by solid line boxes, such as the execution module 301; modules, which may or may not be included in the electronic device 300, are illustrated with dashed boxes, such as the matching module 302.
Taking a terminal device as an example of the electronic device, Fig. 4 is a schematic diagram of the hardware structure of a terminal device implementing various embodiments of the present invention. The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the structure shown in Fig. 4 does not constitute a limitation on the terminal device; the terminal device 100 may include more or fewer components than illustrated, combine certain components, or arrange the components differently. In the embodiment of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to extend the camera of the electronic device and capture the face image to be recognized when the target parameter meets a predetermined condition, and to perform the target operation when the face image to be recognized matches the preset face image.
According to the terminal device provided by the embodiment of the present invention, because the target parameter characterizes the security level of the terminal device and/or the amount of ambient light, the terminal device controls the camera to extend when the target parameter meets the predetermined condition, captures the face image to be recognized for face recognition, and performs the corresponding target operation when the face image to be recognized matches the preset face image. In this way, the terminal device performs recognition on a face image captured in a manner suited to the current environment, which greatly improves the recognition success rate and avoids potential security risks in the face recognition process.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be configured to receive and send signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and delivers it to the processor 110 for processing, and sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, and a duplexer. In addition, the radio frequency unit 101 may also communicate with networks and other devices through a wireless communication system.
Terminal device 100 provides wireless broadband internet access to users, such as helping users send and receive e-mail, browse web pages, access streaming media, etc., via network module 102.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal device 100. The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used for receiving an audio or video signal. The input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g. a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. Microphone 1042 may receive sound and be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 101 in the case of a telephone call mode.
The terminal device 100 further comprises at least one sensor 105, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when the accelerometer sensor is stationary, and can be used for recognizing the gesture (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking) and the like of the terminal equipment; the sensor 105 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 107 may include other input devices 1072 in addition to the touch panel 1071. In particular, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 110 to determine the type of touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 4, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the terminal device 100, which is not limited herein.
The interface unit 108 is an interface to which an external device is connected to the terminal apparatus 100. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and an external device.
Memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 110 is a control center of the terminal device 100, connects respective parts of the entire terminal device 100 using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device 100. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components, and optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules, which are not shown, and will not be described herein.
Optionally, the embodiment of the present invention further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor 110, where the computer program when executed by the processor implements each process of the foregoing face recognition method embodiment, and the process can achieve the same technical effect, so that repetition is avoided, and details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the above face recognition method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments may be implemented by software plus a necessary general hardware platform, or by hardware, although the former is the preferred implementation in many cases. Based on such an understanding, the technical solution of the present application, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (10)

1. A face recognition method applied to an electronic device, the method comprising:
when a target parameter meets a predetermined condition, extending a camera of the electronic device in a direction parallel to a screen and capturing a face image to be recognized, wherein the camera is an under-screen camera when not extended, an opening is provided in the screen area where the camera is located when the camera is in the under-screen state, the opening is used for photographing by the camera, and the amount of light entering the camera in the extended state is greater than the amount of light entering the camera in the under-screen state;
Executing target operation under the condition that the face image to be recognized is matched with a preset face image;
The target parameters are used for representing the safety level of the electronic equipment and/or the light quantity of the environment;
After the camera is extended, the method further comprises:
Retracting the camera under the condition that the face image to be recognized is acquired or the face image to be recognized is matched with the preset face image;
the method further comprises the steps of:
Determining the security level according to a target condition, wherein the target condition is at least one of the following: time of use information of the electronic device, geographic location information of the electronic device;
The target parameters are also used for determining a target matching template, and the target matching template is a matching template for matching the face image to be recognized with a preset face image;
when the target parameter is used to characterize both the security level and the amount of light, the security level is preferentially used to determine the target matching template.
2. The method of claim 1, wherein the predetermined condition comprises at least one of: the amount of light of the environment is less than a first threshold, and the security level is greater than or equal to a predetermined level.
3. The method of claim 1, wherein after the acquiring the face image to be identified, the method further comprises:
matching the face image to be identified with a preset face image according to a target matching template matched with the target parameter;
Wherein, in the case that the target parameter is used to characterize the security level, the matching accuracy of the target matching template is proportional to the security level of the electronic device;
or in the case that the target parameter is used to characterize the amount of light of the environment, the matching accuracy of the target matching template is proportional to the amount of light of the environment of the electronic device.
4. A method according to claim 3, wherein the number of matching feature points in the target matching template is used to characterize the matching accuracy of the target matching template.
5. An electronic device, comprising an execution module, wherein:
the execution module is configured to control the camera to extend in a direction parallel to the screen and capture a face image to be recognized when a target parameter meets a predetermined condition, wherein the camera is an under-screen camera when not extended, an opening is provided in the screen area where the camera is located when the camera is in the under-screen state, the opening is used for photographing by the camera, and the amount of light entering the camera in the extended state is greater than the amount of light entering the camera in the under-screen state;
the execution module is further used for executing target operation under the condition that the face image to be recognized is matched with a preset face image;
The target parameters are used for representing the safety level of the electronic equipment and/or the light quantity of the environment;
the execution module is further used for controlling the camera to retract under the condition that the face image to be recognized is acquired or the face image to be recognized is matched with the preset face image;
the execution module is further configured to determine the security level according to a target condition, where the target condition is at least one of: time of use information of the electronic device, geographic location information of the electronic device;
The target parameters are also used for determining a target matching template, and the target matching template is a matching template for matching the face image to be recognized with a preset face image;
when the target parameter is used to characterize both the security level and the amount of light, the security level is preferentially used to determine the target matching template.
6. The electronic device of claim 5, wherein the predetermined condition comprises at least one of the following: the amount of light of the environment is less than a first threshold, and the security level is greater than or equal to a predetermined level.
7. The electronic device of claim 5, wherein the electronic device further comprises a matching module, wherein:
The matching module is used for matching the face image to be identified with a preset face image according to a target matching template matched with the target parameter;
Wherein, in the case that the target parameter is used to characterize the security level, the matching accuracy of the target matching template is proportional to the security level of the electronic device;
or in the case that the target parameter is used to characterize the amount of light of the environment, the matching accuracy of the target matching template is proportional to the amount of light of the environment of the electronic device.
8. The electronic device of claim 7, wherein a number of matching feature points in the target matching template is used to characterize matching accuracy of the target matching template.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor, performs the steps of the method of face recognition as claimed in any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method of face recognition according to any one of claims 1 to 4.
CN201911046236.8A 2019-10-30 2019-10-30 Face recognition method and electronic equipment Active CN110852217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046236.8A CN110852217B (en) 2019-10-30 2019-10-30 Face recognition method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046236.8A CN110852217B (en) 2019-10-30 2019-10-30 Face recognition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110852217A CN110852217A (en) 2020-02-28
CN110852217B true CN110852217B (en) 2024-04-26

Family

ID=69599429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046236.8A Active CN110852217B (en) 2019-10-30 2019-10-30 Face recognition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110852217B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112672021B (en) * 2020-12-25 2022-05-17 维沃移动通信有限公司 Language identification method and device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895108A (en) * 2017-10-27 2018-04-10 维沃移动通信有限公司 A kind of operation management method and mobile terminal
CN207410427U (en) * 2017-11-01 2018-05-25 信丰世嘉科技有限公司 A kind of camera camera
CN108229420A (en) * 2018-01-22 2018-06-29 维沃移动通信有限公司 A kind of face identification method, mobile terminal
CN108446665A (en) * 2018-03-30 2018-08-24 维沃移动通信有限公司 A kind of face identification method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
He Zhihao et al., 《手机图片DIY》 [Mobile Phone Photo DIY], National Defense Industry Press, 2007, pp. 42-54. *

Also Published As

Publication number Publication date
CN110852217A (en) 2020-02-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant