WO2019047694A1 - Face recognition method and mobile terminal - Google Patents

Face recognition method and mobile terminal

Info

Publication number
WO2019047694A1
WO2019047694A1 (PCT/CN2018/100634)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile terminal
state
preset
image
camera
Prior art date
Application number
PCT/CN2018/100634
Other languages
English (en)
French (fr)
Inventor
夏亮
Original Assignee
维沃移动通信有限公司
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Priority to US16/645,674 (US11100312B2)
Priority to EP18853709.6A (EP3681136A4)
Publication of WO2019047694A1

Links

Images

Classifications

    • H04M1/72463 — User interfaces adapting the functionality of the device according to specific conditions, to restrict the functionality of the device
    • H04M1/72454 — User interfaces adapting the functionality of the device according to context-related or environment-related conditions
    • G06F16/532 — Still-image retrieval: query formulation, e.g. graphical querying
    • G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06V40/172 — Recognition of human faces: classification, e.g. identification
    • H04M1/72439 — Interactive internal management of messages for image or video messaging
    • H04M1/724634 — Partially locked states, e.g. some telephonic functions or applications remain accessible in the locked state
    • H04M1/72484 — Functions triggered by incoming communication events
    • H04N23/60 — Control of cameras or camera modules comprising electronic image sensors
    • H04M2250/12 — Subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H04M2250/20 — Subscriber devices including a rotatable camera
    • H04M2250/22 — Subscriber devices including a touch pad, a touch sensor or a touch detector
    • H04M2250/52 — Subscriber devices including functional features of a camera

Definitions

  • the present disclosure relates to the field of communications technologies, and in particular, to a face recognition method and a mobile terminal.
  • the mobile terminal in the related art can implement various applications through the face recognition technology, such as password unlocking and permission control through face recognition.
  • password unlocking may be lock-screen unlocking of the mobile terminal or unlocking of an application.
  • the mobile terminal in the related art generally has a screen protection function, and the user can actively control the mobile terminal to enter, or the mobile terminal can automatically enter, the lock screen state when not in use.
  • in the lock screen state, the user can unlock using various unlocking methods, among which face recognition unlocking is one of the most commonly used.
  • to unlock by face recognition, the user needs to first wake up the mobile terminal, then start the camera, and finally adjust the posture of the mobile terminal for face recognition, which is cumbersome to operate and makes face recognition take a long time.
  • the embodiment of the present disclosure provides a method for recognizing a face, which is applied to a mobile terminal, and includes:
  • detecting whether the mobile terminal changes from a stationary state to a raised state; if the mobile terminal changes from a stationary state to a raised state, starting the camera and collecting an image through the camera; determining whether the image matches a preset face template; and if the image matches the preset face template, determining that the recognition is successful.
  • the embodiment of the present disclosure further provides a mobile terminal, including:
  • a first detecting module configured to detect whether the mobile terminal changes from a stationary state to a raised state;
  • a startup module configured to start a camera and acquire an image through the camera if the mobile terminal changes from a stationary state to a raised state;
  • a judging module configured to determine whether the image matches a preset face template;
  • a determining module configured to determine that the recognition is successful if the image matches the preset face template.
  • the embodiment of the present disclosure further provides a mobile terminal, including:
  • One or more processors; a memory; and
  • One or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the steps of the face recognition method described above are implemented when the programs are executed.
  • Embodiments of the present disclosure also provide a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the face recognition method described above.
  • FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present disclosure
  • FIG. 2 is a second flowchart of a face recognition method according to an embodiment of the present disclosure
  • FIG. 3 is a third flowchart of a face recognition method according to an embodiment of the present disclosure.
  • FIG. 4 is a fourth flowchart of a face recognition method according to an embodiment of the present disclosure.
  • FIG. 5 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 6 is a structural diagram of a mobile terminal according to another embodiment of the present disclosure.
  • FIG. 7 is a structural diagram of a mobile terminal according to another embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a method for recognizing a face according to an embodiment of the present disclosure. As shown in FIG. 1 , the method includes the following steps:
  • Step 101 Detect whether the mobile terminal changes from a stationary state to a raised state.
  • the face recognition method provided by the embodiment of the present disclosure is mainly applied to a mobile terminal, to automatically perform a face recognition operation and carry out a corresponding operation after the recognized face is matched.
  • the current state of the mobile terminal can be detected by setting a sensor to determine whether the state of the mobile terminal changes.
  • Step 102 If the mobile terminal changes from a stationary state to a raised state, the camera is activated, and an image is acquired by the camera.
  • the camera of the mobile terminal is automatically activated to collect an image.
  • the camera may be a front camera or, of course, another camera, for example a rotatable camera.
  • Step 103 Determine whether the image matches a preset face template.
  • Step 104 If the image matches the preset face template, it is determined that the recognition is successful.
  • the face in the image is processed by a preset face recognition algorithm to extract the face features. The recognized face features are then matched against the preset face template; when the matching succeeds, the corresponding operation, such as unlocking or permission setting, can be performed.
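As an illustrative sketch (not part of the patent text), the matching in steps 103–104 can be modeled as comparing a feature vector extracted from the captured image with the enrolled template; the cosine-similarity measure and the 0.9 threshold below are hypothetical choices, since the patent does not fix a particular algorithm:

```python
import math

MATCH_THRESHOLD = 0.9  # hypothetical similarity threshold


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def matches_template(face_feature, template_feature):
    """Steps 103-104: the image matches the preset face template when the
    extracted feature is close enough to the enrolled one."""
    return cosine_similarity(face_feature, template_feature) >= MATCH_THRESHOLD


template = [0.12, 0.80, 0.55, 0.33]   # enrolled preset face template
candidate = [0.11, 0.79, 0.56, 0.35]  # feature extracted from captured image
print("recognition successful" if matches_template(candidate, template) else "no match")
```

In a real implementation the feature vectors would come from the face recognition algorithm mentioned in the text; only the thresholded comparison is sketched here.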
  • the embodiment of the present disclosure starts the camera to acquire an image and then performs face recognition, thereby shortening the operation time of face recognition, improving its convenience, and improving the degree of intelligence of the mobile terminal.
  • the face recognition method may be applied to different scenarios, such as unlocking, encryption, and permission setting.
  • the following embodiments will be described in detail in an unlocked application scenario.
  • the foregoing step 101 includes:
  • Step 1011: When the mobile terminal is in a preset state, detect whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock screen state.
  • the mobile terminal has a lock screen function and is configured with the face recognition unlock mode; that is, the user first sets the face unlock mode, during which the mobile terminal collects the user's face, performs face recognition, and extracts a preset face feature as the face feature that can unlock the mobile terminal.
  • the user can manually control the mobile terminal to enter the lock screen state or the screen-off state, or the mobile terminal automatically enters the lock screen state or the screen-off state after remaining unused for a certain period of time.
  • after the mobile terminal enters the lock screen state or the screen-off state, it can detect in real time whether it changes from the stationary state to the raised state. In this embodiment, when the face recognition succeeds, unlocking is performed.
  • the camera may collect images in the preset state; that is, without changing the state of the mobile terminal, the camera is directly started in the background for image acquisition and face recognition, and once the face recognition succeeds, the terminal is unlocked. For the user this is an imperceptible unlock, which improves the user experience and reduces the time needed for face unlocking.
  • if the collected image contains more than one face, the unlocking operation is not performed. That is, the above step 103 includes: determining whether there is only one face in the collected image; if there is only one face, determining whether the image matches the preset face template; and if there are two or more faces, maintaining the lock screen state or the screen-off state.
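The single-face gating just described can be sketched as a small decision function; the return values and the `matcher` callable are illustrative names, not from the patent:

```python
def unlock_decision(faces, matcher):
    """Proceed with template matching only when exactly one face is present;
    with zero or multiple faces, keep the lock screen / screen-off state."""
    if len(faces) != 1:
        return "stay_locked"
    return "unlock" if matcher(faces[0]) else "stay_locked"


# hypothetical matcher: accepts only a feature equal to the enrolled one
enrolled = (1, 2, 3)
matcher = lambda feature: feature == enrolled

print(unlock_decision([(1, 2, 3)], matcher))             # one matching face
print(unlock_decision([(1, 2, 3), (4, 5, 6)], matcher))  # two faces: stay locked
```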
  • a preview interface of the image captured by the camera may also be output so that the user can perform the face unlocking operation more easily.
  • the method further includes:
  • Step 105 Display a preview interface for loading the image.
  • the size and position of the preview interface can be set according to actual needs, for example, full-screen display, half-screen display, or small-window display.
  • the embodiment also takes into account the angle between the face and the mobile terminal. Specifically, referring to FIG. 4, before the foregoing step 103, the method further includes:
  • Step 106: Calculate an angle between a target human face in the image and the mobile terminal.
  • the above step 103 includes determining whether the image matches a preset face template if the included angle is within a preset range.
  • the above angle may include a horizontal angle and/or a vertical angle. For example, when the face directly faces the screen of the mobile terminal, the horizontal angle is 0°; as the face rotates to one side, the angle increases from 0° toward +90°, and as it rotates to the other side, the angle decreases from 0° toward −90°.
  • the method for calculating the angle between the target human face and the mobile terminal can be chosen according to actual needs; for example, the face rotation angle can be estimated using an Active Appearance Model (AAM) or Linear Discriminant Analysis (LDA).
  • the AAM algorithm builds a model from training data and then uses the model to perform matching operations on the face; it can use shape information together with statistical analysis of important facial texture information.
  • alternatively, a face detection algorithm based on an elastic model can be used. This method defines a face as a combination of its different parts, such as the nose, eyes, mouth, and ears, connected by "springs". It locates these parts and the 3D geometric relationships between them, and uses these models to calculate the angle between the face and the mobile terminal.
  • if the angle is within the preset range, the preview interface may be displayed so that the user can perform the face unlocking operation more easily; if the angle is not within the preset range, the preview interface may not be displayed, which prevents unnecessary display caused by a user's misoperation from degrading the user experience.
  • by analyzing the angle between the target face and the mobile terminal, the accuracy of determining that the user intends to unlock with the face can be improved.
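The angle gating before step 103 amounts to a simple range check; the ±30° preset range below is a hypothetical value chosen only for illustration, as the patent leaves the range open:

```python
# hypothetical preset range: the face must be within +/-30 degrees
# of the terminal in both the horizontal and the vertical direction
H_RANGE = (-30.0, 30.0)
V_RANGE = (-30.0, 30.0)


def angle_within_range(horizontal_deg, vertical_deg):
    """Step 106 gating: template matching (step 103) only proceeds when the
    face/terminal angle falls inside the preset range."""
    return (H_RANGE[0] <= horizontal_deg <= H_RANGE[1]
            and V_RANGE[0] <= vertical_deg <= V_RANGE[1])


print(angle_within_range(5.0, -10.0))   # face roughly facing the screen
print(angle_within_range(60.0, 0.0))    # face turned too far away
```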
  • the method for detecting whether the mobile terminal changes from the stationary state to the raised state may be set according to actual needs.
  • the foregoing step 101 includes:
  • Step 1012 The three-axis acceleration sensor is read every preset time period to detect the acceleration value of the mobile terminal in three-dimensional space;
  • Step 1013 Determine spatial angle information of the mobile terminal according to the acceleration value.
  • Step 1014 Determine, according to the amount of change of the spatial angle information in the preset time period, whether the mobile terminal changes from a stationary state to a raised state within the preset time period.
  • the preset duration may be set according to actual needs.
  • the triaxial acceleration sensor outputs the currently detected acceleration value every 0.1 seconds, that is, the acceleration values in the X-axis, the Y-axis, and the Z-axis direction. Therefore, the preset duration may be 0.1 second.
  • other values may be set according to the sensitivity, which is not further limited herein.
  • based on the acceleration values, the current spatial angle information of the mobile terminal can be calculated and continuously monitored to determine its state. For example, if the amount of change in the spatial angle information during a cycle exceeds a set value, the mobile terminal is determined to be in a raised state; if the amount of change is less than or equal to the set value, it is determined to be stationary. In this way, the action of the user lifting the wrist to pick up the mobile terminal can be recognized more reliably, improving the accuracy of determining that the user intends to unlock with the face.
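Steps 1012–1014 can be sketched as follows. Deriving tilt angles from the gravity components of a three-axis accelerometer is one standard way to obtain "spatial angle information"; the 25° change threshold is a hypothetical set value, and the 0.1 s sample period comes from the text:

```python
import math

SAMPLE_PERIOD_S = 0.1           # sensor output period from the text
ANGLE_CHANGE_THRESHOLD = 25.0   # hypothetical set value, in degrees


def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (degrees) of the terminal from the gravity
    components reported by the three-axis acceleration sensor."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll


def raised(prev_accel, curr_accel):
    """Steps 1012-1014: the terminal is considered raised when the spatial
    angle changes by more than the set value within one cycle."""
    p0, r0 = tilt_angles(*prev_accel)
    p1, r1 = tilt_angles(*curr_accel)
    return max(abs(p1 - p0), abs(r1 - r0)) > ANGLE_CHANGE_THRESHOLD


flat_on_table = (0.0, 0.0, 9.8)  # gravity along Z: lying flat, stationary
held_upright = (0.0, 9.8, 0.0)   # gravity along Y: screen lifted toward the user
print(raised(flat_on_table, held_upright))   # large angle change: raise gesture
print(raised(flat_on_table, flat_on_table))  # no change: still stationary
```

A production implementation would additionally smooth the sensor readings and require the change to persist across a few samples to avoid false triggers.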
  • FIG. 5 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the foregoing embodiment, and achieve the same effect.
  • the mobile terminal includes:
  • the detecting module 501 is configured to detect whether the mobile terminal changes from a stationary state to a raised state
  • the startup module 502 is configured to start a camera and acquire an image through the camera if the mobile terminal changes from a stationary state to a raised state;
  • the determining module 503 is configured to determine whether the image matches a preset face template.
  • the determining module 504 is configured to determine that the recognition is successful if the image matches the preset face template.
  • the detecting module 501 is specifically configured to: when the mobile terminal is in a preset state, detect whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock screen state.
  • the acquiring the image by using the camera includes: acquiring an image in the preset state by using the camera.
  • the mobile terminal further includes:
  • a calculation module configured to calculate an angle between the target human face in the image and the mobile terminal, and, if the angle is within a preset range, trigger the determining module to perform the operation of determining whether the image matches a preset face template.
  • the detecting module 501 includes:
  • a reading unit configured to read a three-axis acceleration sensor to detect an acceleration value of the mobile terminal in three-dimensional space every preset time period
  • An angle determining unit configured to determine spatial angle information of the mobile terminal according to the acceleration value
  • the determining unit is configured to determine, according to the amount of change of the spatial angle information in the preset time period, whether the mobile terminal changes from a stationary state to a raised state within the preset time period.
  • the embodiment of the present disclosure starts the camera to acquire an image and then performs face recognition, thereby shortening the operation time of face recognition, improving its convenience, and improving the degree of intelligence of the mobile terminal.
  • FIG. 6 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the foregoing embodiment, and achieve the same effect.
  • the mobile terminal 600 includes at least one processor 601, a memory 602, at least one network interface 604, and a user interface 603.
  • the various components in mobile terminal 600 are coupled together by a bus system 605.
  • the bus system 605 is used to implement connection communication between these components.
  • the bus system 605 includes a power bus, a control bus, and a status signal bus in addition to the data bus.
  • for the sake of clarity, the various buses are labeled as the bus system 605 in FIG. 6.
  • the user interface 603 may include a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen).
  • the memory 602 in an embodiment of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory.
  • the volatile memory can be a Random Access Memory (RAM) that acts as an external cache.
  • RAM Random Access Memory
  • many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • the memory 602 stores the following elements, executable modules or data structures, or a subset or an extended set thereof: an operating system 6021 and an application 6022.
  • the operating system 6021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks.
  • the application 6022 includes various applications, such as a media player (Media Player), a browser, and the like, for implementing various application services.
  • a program implementing the method of the embodiments of the present disclosure may be included in the application 6022.
  • the mobile terminal further includes a computer program stored in the memory 602 and executable on the processor 601 (specifically, it may be a computer program in the application 6022); when the computer program is executed by the processor 601, the steps of the face recognition method described above are implemented.
  • Processor 601 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 601 or an instruction in a form of software.
  • the processor 601 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software modules can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
  • the embodiments described herein can be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof.
  • the processing unit can be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
  • the techniques described herein can be implemented by modules (eg, procedures, functions, and so on) that perform the functions described herein.
  • the software code can be stored in memory and executed by the processor.
  • the memory can be implemented in the processor or external to the processor.
  • the following steps may be further implemented: when the mobile terminal is in a preset state, detecting whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock screen state.
  • the acquiring the image by using the camera includes: acquiring an image in the preset state by using the camera.
  • the following steps may be further implemented: calculating an angle between the target human face in the image and the mobile terminal; and, if the angle is within a preset range, executing the step of determining whether the image matches a preset face template.
  • the following steps may be performed: acquiring acceleration values of the mobile terminal in three-dimensional space every preset time period; determining spatial angle information of the mobile terminal according to the acceleration values; and determining, according to the amount of change of the spatial angle information within the preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
  • the embodiment of the present disclosure starts the camera to acquire an image and then performs face recognition, thereby shortening the operation time of face recognition, improving its convenience, and improving the degree of intelligence of the mobile terminal.
  • FIG. 7 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the foregoing embodiment, and achieve the same effect.
  • the mobile terminal 700 includes a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 750, an audio circuit 760, a communication module 770, and a power source 780, and further includes a camera (not shown in the figure).
  • the input unit 730 can be configured to receive numeric or character information input by the user, and generate signal input related to user settings and function control of the mobile terminal 700.
  • the input unit 730 may include a touch panel 731.
  • the touch panel 731, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on the touch panel 731 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 731 can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 750; it can also receive commands from the processor 750 and execute them.
  • the touch panel 731 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 730 may further include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch buttons), a trackball, a mouse, a joystick, and the like.
  • the display unit 740 can be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal 700.
  • the display unit 740 can include a display panel 741.
  • the display panel 741 can be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
  • the touch panel 731 can cover the display panel 741 to form a touch display screen; when the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 750 to determine the type of the touch event, and the processor 750 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
  • the processor 750 is the control center of the mobile terminal 700. It connects the various parts of the entire mobile phone through various interfaces and lines and, by running or executing software programs and/or modules stored in the first memory 721 and calling data stored in the second memory 722, performs the various functions of the mobile terminal 700 and processes data, thereby monitoring the mobile terminal 700 as a whole.
  • processor 750 can include one or more processing units.
  • the following steps may be implemented: detecting whether the mobile terminal changes from a stationary state to a raised state; if the mobile terminal changes from the stationary state to the raised state, starting the camera, capturing an image through the camera, and determining whether the image matches a preset face template; and if the image matches the preset face template, determining that recognition is successful.
  • the following steps may be further implemented: when the mobile terminal is in a preset state, detecting whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen blanking Status or lock screen status.
  • the acquiring the image by using the camera includes: acquiring an image in the preset state by using the camera.
  • the following steps may be further implemented: calculating an angle between the target face in the image and the mobile terminal; and if the angle is within a preset range, performing the step of determining whether the image matches a preset face template.
  • the following steps may be performed: acquiring acceleration values of the mobile terminal in three-dimensional space at preset time intervals; determining spatial angle information of the mobile terminal according to the acceleration values; and determining, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
  • in this way, when the mobile terminal changes from the stationary state to the raised state, the mobile terminal starts the camera to capture an image and then performs face recognition, thereby shortening the face recognition operation and improving the intelligence of the mobile terminal.
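The overall flow above can be illustrated with a minimal sketch; `detect_lift`, `capture_image`, `match_face`, and `unlock` are hypothetical stand-ins for the sensor, camera, recognition, and unlocking components, not APIs defined by this disclosure:

```python
def recognize_on_lift(detect_lift, capture_image, match_face, unlock):
    """Run the claimed flow once; returns True when recognition succeeded."""
    if not detect_lift():          # step 101: still -> raised transition?
        return False
    image = capture_image()        # step 102: start camera, capture an image
    if match_face(image):          # step 103: compare with preset face template
        unlock()                   # step 104: recognition successful
        return True
    return False
```

Each component is injected as a callable, which mirrors the modular structure (detection module, starting module, determination module, confirmation module) described later in the embodiments.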
  • the embodiment of the present disclosure further provides a computer readable storage medium having stored thereon a computer program, the computer program being executed by a processor to implement the steps in the face recognition method in any one of the above method embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments of the present disclosure.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure.
  • the foregoing storage medium includes various media that can store program codes, such as a USB flash drive, a mobile hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.


Abstract

The present disclosure provides a face recognition method and a mobile terminal. The method includes: detecting whether a mobile terminal changes from a stationary state to a raised state; if the mobile terminal changes from the stationary state to the raised state, starting a camera and capturing an image through the camera; determining whether the image matches a preset face template; and if the image matches the preset face template, determining that recognition is successful.

Description

Face recognition method and mobile terminal
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201710800895.0, filed in China on September 7, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a face recognition method and a mobile terminal.
Background
As is well known, mobile terminals in the related art can implement a variety of applications through face recognition technology, such as password unlocking and permission control. Password unlocking may be directed at unlocking the lock screen of the mobile terminal or unlocking the use of an application. For example, mobile terminals in the related art usually have a screen-saver function: the user may actively put the mobile terminal into a lock-screen state, or the mobile terminal may enter it automatically when left unused. In the lock-screen state, the user can unlock the terminal in a variety of ways, among which face recognition unlocking is one of the most common. With mobile terminals in the related art, the user first needs to wake up the mobile terminal, then start the camera, and finally adjust the attitude of the mobile terminal for face recognition. This procedure is cumbersome and makes face recognition time-consuming.
Summary
An embodiment of the present disclosure provides a face recognition method applied to a mobile terminal, including:
detecting whether the mobile terminal changes from a stationary state to a raised state;
if the mobile terminal changes from the stationary state to the raised state, starting a camera and capturing an image through the camera;
determining whether the image matches a preset face template; and
if the image matches the preset face template, determining that recognition is successful.
An embodiment of the present disclosure further provides a mobile terminal, including:
a first detection module, configured to detect whether the mobile terminal changes from a stationary state to a raised state;
a starting module, configured to start a camera and capture an image through the camera if the mobile terminal changes from the stationary state to the raised state;
a determination module, configured to determine whether the image matches a preset face template; and
a confirmation module, configured to determine that recognition is successful if the image matches the preset face template.
An embodiment of the present disclosure further provides a mobile terminal, including:
one or more processors;
a memory; and
one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed, the programs implement the steps of the above face recognition method.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the above face recognition method.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a first flowchart of a face recognition method according to an embodiment of the present disclosure;
FIG. 2 is a second flowchart of a face recognition method according to an embodiment of the present disclosure;
FIG. 3 is a third flowchart of a face recognition method according to an embodiment of the present disclosure;
FIG. 4 is a fourth flowchart of a face recognition method according to an embodiment of the present disclosure;
FIG. 5 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure;
FIG. 6 is a structural diagram of a mobile terminal according to another embodiment of the present disclosure;
FIG. 7 is a structural diagram of a mobile terminal according to yet another embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Referring to FIG. 1, FIG. 1 is a flowchart of a face recognition method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps.
Step 101: detect whether the mobile terminal changes from a stationary state to a raised state.
The face recognition method provided by the embodiments of the present disclosure is mainly applied to a mobile terminal, enabling the mobile terminal to perform the face recognition operation automatically and, after a face match succeeds, to perform a corresponding operation.
In this step, whether the mobile terminal changes from the stationary state to the raised state may be detected in real time. Specifically, a sensor may be provided to detect the current state of the mobile terminal, so as to determine whether the state of the mobile terminal has changed.
Step 102: if the mobile terminal changes from the stationary state to the raised state, start a camera and capture an image through the camera.
When the mobile terminal changes from the stationary state to the raised state, it can be determined that the mobile terminal has been picked up; at this point, the camera of the mobile terminal is started automatically to capture an image. In this embodiment, the camera may be a front camera, or another camera, such as a rotatable one.
Step 103: determine whether the image matches a preset face template.
Step 104: if the image matches the preset face template, determine that recognition is successful.
It should be understood that, ideally, only one face is present in the captured image, although multiple faces may also be present. During face recognition, the features of each recognized face are treated independently; the features of all recognized faces are not combined.
In this step, when a face is present in the captured image, face recognition is performed on the face in the image according to a preset face recognition algorithm to extract face features. The extracted face features are then matched against the preset face template, and when the matching succeeds, a corresponding operation, such as unlocking or setting permissions, may be performed.
In this way, in the embodiments of the present disclosure, when the mobile terminal changes from the stationary state to the raised state, the camera is started to capture an image and face recognition is then performed, which shortens the face recognition operation, makes face recognition more convenient, and improves the intelligence of the mobile terminal.
It should be understood that, in the embodiments of the present disclosure, the face recognition method may be applied in various scenarios, such as unlocking, encryption, and permission setting. The following embodiments take the unlocking scenario as an example. Specifically, referring to FIG. 2, in this embodiment, step 101 includes:
Step 1011: when the mobile terminal is in a preset state, detect whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock-screen state.
In this step, the mobile terminal has a lock-screen function and is configured with face recognition unlocking: the user first sets a face unlocking mode, during which the mobile terminal captures the user's face, performs face recognition, and extracts preset face features as the features that can unlock the mobile terminal. The user may manually put the mobile terminal into the lock-screen or screen-off state, or the mobile terminal may enter that state automatically after remaining unused for a certain period of time. Once the mobile terminal is in the lock-screen or screen-off state, whether it changes from the stationary state to the raised state may be detected in real time. In this embodiment, the terminal is unlocked after face recognition succeeds.
It should be noted that, in the embodiments of the present disclosure, the camera may capture the image while the terminal remains in the preset state; that is, without changing the state of the mobile terminal, the camera is started directly in the background to capture an image and perform face recognition, and the unlocking operation is performed once recognition succeeds. This makes unlocking imperceptible to the user, improves the user experience, and reduces the time taken to unlock by face.
It should also be noted that, in the embodiments of the present disclosure, to improve the security of the mobile terminal, no unlocking operation is performed when at least two faces are recognized in the image. That is, step 103 includes: determining whether only one face is present in the captured image; if only one face is present, determining whether the image matches the preset face template; and if two faces are present, keeping the lock-screen or screen-off state.
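The single-face safety check and template matching described above might be sketched as follows; the Euclidean-distance matcher and its 0.6 threshold are illustrative assumptions, since the disclosure does not fix a particular matching algorithm:

```python
import math

def match_face(features, template, threshold=0.6):
    # Hypothetical matcher: two faces are considered a match when the
    # Euclidean distance between their feature vectors is within threshold.
    return math.dist(features, template) <= threshold

def try_unlock(faces, template):
    # Per the embodiment, the terminal stays in the lock-screen or
    # screen-off state unless exactly one face is detected in the image
    # and that face matches the preset template.
    if len(faces) != 1:
        return False
    return match_face(faces[0], template)
```

Here `faces` is the list of feature vectors extracted from the captured image; the features of each detected face are compared independently, never combined.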
Further, to improve the accuracy of face recognition, in this embodiment a preview interface for the image captured by the camera may also be displayed, so that the user can perform the face unlocking operation more easily. Specifically, referring to FIG. 3, after step 102, the method further includes:
Step 105: display a preview interface, where the preview interface is used to load the image.
In this embodiment, the size and position of the preview interface may be set as needed; for example, it may be displayed full-screen, half-screen, or in a small window.
Further, to improve the accuracy of determining whether the mobile terminal has been picked up, this embodiment also recognizes the angle between the face and the mobile terminal. Specifically, referring to FIG. 4, before step 103, the method further includes:
Step 106: calculate the angle between the target face in the image and the mobile terminal.
Step 103 then includes: if the angle is within a preset range, determining whether the image matches the preset face template.
In this embodiment, the angle may include a horizontal angle and/or a vertical angle. For example, when the target face directly faces the camera of the mobile terminal, its horizontal angle is 0°; when the face turns right, the angle increases from 0° toward +90°; when the face turns left, the angle decreases from 0° toward -90°. When the face gradually tilts down or up, it is the vertical angle between the face and the mobile terminal that changes.
Specifically, the method for calculating the angle between the target face and the mobile terminal may be set as needed; for example, the face rotation angle may be estimated using an Active Appearance Model (AAM) together with Linear Discriminant Analysis (LDA). The AAM algorithm builds a model on training data and then matches the face against the model; it can use shape information to statistically analyze important facial texture information. Alternatively, a face detection algorithm based on an elastic deformable model may be used. Such methods define a face as a combination of parts, such as the nose, eyes, mouth, and ears, connected by "springs"; by locating these parts and their 3D geometric relationships, these models can be used jointly to calculate the angle between the face and the mobile terminal.
In this embodiment, if the angle is within the preset range, the preview interface may also be displayed so that the user can perform the face unlocking operation more easily; if the angle is not within the preset range, the preview interface may be omitted, so as to avoid unnecessary display caused by the user's misoperation, which would degrade the user experience.
Since this embodiment analyzes the angle between the target face and the mobile terminal, the accuracy of determining that the user intends to unlock by face can be improved.
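Once a pose estimator (e.g. AAM-based) has produced the horizontal and vertical angles, the angle gate of this embodiment reduces to a simple range check; the ±30° limits below are illustrative presets, not values fixed by the disclosure:

```python
def angle_within_preset_range(horizontal_deg, vertical_deg,
                              h_limit=30.0, v_limit=30.0):
    # 0 deg means the target face directly faces the camera; turning right
    # moves the horizontal angle toward +90, turning left toward -90, and
    # looking up or down changes the vertical angle. Template matching
    # (and the optional preview) proceeds only inside the preset range.
    return abs(horizontal_deg) <= h_limit and abs(vertical_deg) <= v_limit
```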
Further, the manner of detecting whether the mobile terminal changes from the stationary state to the raised state may be set as needed. Specifically, in this embodiment, step 101 includes:
Step 1012: read, at preset time intervals, the acceleration values of the mobile terminal in three-dimensional space detected by a three-axis accelerometer.
Step 1013: determine spatial angle information of the mobile terminal according to the acceleration values.
Step 1014: determine, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
In this embodiment, the preset interval may be set as needed. Typically, a three-axis accelerometer outputs the currently detected acceleration values, i.e., the acceleration values along the X, Y, and Z axes, once every 0.1 second, so the preset interval may be 0.1 second; other values may of course be set according to the desired sensitivity, which is not further limited here.
The current spatial angle information of the mobile terminal can be calculated from each set of acceleration values read, and the state of the mobile terminal can be determined by continuously monitoring the spatial angle information. For example, if the change of the spatial angle information within a period exceeds a set value, the mobile terminal is determined to be in the raised state; if the change within the period is less than or equal to the set value, the mobile terminal is determined to be stationary. This better recognizes the motion of the user raising the wrist to pick up the mobile terminal and improves the accuracy of determining that the user intends to unlock by face.
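Steps 1012 to 1014 can be sketched as follows, assuming one (ax, ay, az) sample arrives every 0.1 s from a three-axis accelerometer; the pitch/roll formulas and the 25° threshold are illustrative choices, since the disclosure only requires that the angle change within the window be compared against a set value:

```python
import math

def spatial_angles(ax, ay, az):
    # Pitch and roll of the terminal derived from a single 3-axis
    # accelerometer sample (gravity vector); units cancel, so raw
    # sensor counts work as well as m/s^2.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def became_raised(samples, threshold_deg=25.0):
    # samples: (ax, ay, az) tuples read at the preset interval over the
    # preset time period. The terminal is judged raised when the spatial
    # angle change across the window exceeds the threshold; otherwise
    # it is judged stationary.
    angles = [spatial_angles(*s) for s in samples]
    d_pitch = max(a[0] for a in angles) - min(a[0] for a in angles)
    d_roll = max(a[1] for a in angles) - min(a[1] for a in angles)
    return max(d_pitch, d_roll) > threshold_deg
```

A device lying flat reports roughly (0, 0, 9.8) and stays below the threshold; tilting it toward the user's face swings pitch or roll by tens of degrees within the window, which trips the raised-state decision.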
Referring to FIG. 5, FIG. 5 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the above embodiments and achieve the same effects. As shown in FIG. 5, the mobile terminal includes:
a detection module 501, configured to detect whether the mobile terminal changes from a stationary state to a raised state;
a starting module 502, configured to start a camera and capture an image through the camera if the mobile terminal changes from the stationary state to the raised state;
a determination module 503, configured to determine whether the image matches a preset face template; and
a confirmation module 504, configured to determine that recognition is successful if the image matches the preset face template.
Optionally, the detection module 501 is specifically configured to detect, when the mobile terminal is in a preset state, whether the mobile terminal changes from the stationary state to the raised state, where the preset state includes a screen-off state or a lock-screen state.
Optionally, the capturing an image through the camera includes: capturing an image through the camera in the preset state.
Optionally, the mobile terminal further includes:
a calculation module, configured to calculate the angle between the target face in the image and the mobile terminal;
where, if the angle is within a preset range, the determination module is triggered to perform the operation of determining whether the image matches the preset face template.
Optionally, the detection module 501 includes:
a reading unit, configured to read, at preset time intervals, the acceleration values of the mobile terminal in three-dimensional space detected by a three-axis accelerometer;
an angle determination unit, configured to determine spatial angle information of the mobile terminal according to the acceleration values; and
a judgment unit, configured to determine, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
In this way, in the embodiments of the present disclosure, when the mobile terminal changes from the stationary state to the raised state, the camera is started to capture an image and face recognition is then performed, which shortens the face recognition operation, makes face recognition more convenient, and improves the intelligence of the mobile terminal.
Referring to FIG. 6, FIG. 6 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the above embodiments and achieve the same effects. As shown in FIG. 6, the mobile terminal 600 includes: at least one processor 601, a memory 602, at least one network interface 604, and a user interface 603. The components of the mobile terminal 600 are coupled together by a bus system 605. It can be understood that the bus system 605 is used to implement connection and communication between these components. In addition to a data bus, the bus system 605 includes a power bus, a control bus, and a status signal bus. However, for clarity, the various buses are all labeled as the bus system 605 in FIG. 6.
The user interface 603 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen).
It can be understood that the memory 602 in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example rather than limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synch Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.
In some implementations, the memory 602 stores the following elements, executable modules or data structures, or a subset or extended set thereof: an operating system 6021 and application programs 6022.
The operating system 6021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 6022 contain various applications, such as a media player and a browser, for implementing various application services. A program implementing the method of the embodiments of the present disclosure may be contained in the application programs 6022.
In the embodiments of the present disclosure, the mobile terminal further includes a computer program stored in the memory 602 and executable on the processor 601; specifically, it may be a computer program in the application programs 6022. When executed by the processor 601, the computer program implements the following steps: detecting whether the mobile terminal changes from a stationary state to a raised state; if the mobile terminal changes from the stationary state to the raised state, starting a camera and capturing an image through the camera; determining whether the image matches a preset face template; and if the image matches the preset face template, determining that recognition is successful.
The method disclosed in the above embodiments of the present disclosure may be applied to, or implemented by, the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in the embodiments of the present disclosure may be directly embodied as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), DSP Devices (DSPD), Programmable Logic Devices (PLD), Field-Programmable Gate Arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or combinations thereof.
For software implementation, the techniques described herein may be implemented by modules (for example, procedures and functions) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
Optionally, when executed by the processor 601, the computer program may further implement the following step: when the mobile terminal is in a preset state, detecting whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock-screen state.
Optionally, the capturing an image through the camera includes: capturing an image through the camera in the preset state.
Optionally, when executed by the processor 601, the computer program may further implement the following steps: calculating the angle between the target face in the image and the mobile terminal; and if the angle is within a preset range, performing the step of determining whether the image matches the preset face template.
Optionally, when executed by the processor 601, the computer program may further implement the following steps: acquiring, at preset time intervals, the acceleration values of the mobile terminal in three-dimensional space; determining spatial angle information of the mobile terminal according to the acceleration values; and determining, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
In this way, in the embodiments of the present disclosure, when the mobile terminal changes from the stationary state to the raised state, the camera is started to capture an image and face recognition is then performed, which shortens the face recognition operation, makes face recognition more convenient, and improves the intelligence of the mobile terminal.
Referring to FIG. 7, FIG. 7 is a structural diagram of a mobile terminal according to an embodiment of the present disclosure, which can implement the details of the face recognition method in the above embodiments and achieve the same effects. As shown in FIG. 7, the mobile terminal 700 includes a Radio Frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 750, an audio circuit 760, a communication module 770, a power supply 780, and a camera (not shown in the figure).
The input unit 730 may be configured to receive digital or character information input by the user and generate signal inputs related to user settings and function control of the mobile terminal 700. Specifically, in the embodiments of the present disclosure, the input unit 730 may include a touch panel 731. The touch panel 731, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on the touch panel 731 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 750, and can receive and execute commands sent by the processor 750. In addition, the touch panel 731 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may further include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 740 may be configured to display information input by the user or provided to the user and various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD or an Organic Light-Emitting Diode (OLED).
It should be noted that the touch panel 731 may cover the display panel 741 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 750 to determine the type of the touch event, and the processor 750 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The processor 750 is the control center of the mobile terminal 700. It connects the various parts of the entire mobile phone through various interfaces and lines and, by running or executing software programs and/or modules stored in a first memory 721 and calling data stored in a second memory 722, performs the various functions of the mobile terminal 700 and processes data, so as to monitor the mobile terminal 700 as a whole. Optionally, the processor 750 may include one or more processing units.
In the embodiments of the present disclosure, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the computer program, when executed by the processor 750, can implement the following steps: detecting whether the mobile terminal changes from a stationary state to a raised state; if the mobile terminal changes from the stationary state to the raised state, starting a camera and capturing an image through the camera; determining whether the image matches a preset face template; and if the image matches the preset face template, determining that recognition is successful.
Optionally, when executed by the processor 750, the computer program may further implement the following step: when the mobile terminal is in a preset state, detecting whether the mobile terminal changes from a stationary state to a raised state, where the preset state includes a screen-off state or a lock-screen state.
Optionally, the capturing an image through the camera includes: capturing an image through the camera in the preset state.
Optionally, when executed by the processor 750, the computer program may further implement the following steps: calculating the angle between the target face in the image and the mobile terminal; and if the angle is within a preset range, performing the step of determining whether the image matches the preset face template.
Optionally, when executed by the processor 750, the computer program may further implement the following steps: acquiring, at preset time intervals, the acceleration values of the mobile terminal in three-dimensional space; determining spatial angle information of the mobile terminal according to the acceleration values; and determining, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
In this way, in the embodiments of the present disclosure, when the mobile terminal changes from the stationary state to the raised state, the camera is started to capture an image and face recognition is then performed, which shortens the face recognition operation and improves the intelligence of the mobile terminal.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the face recognition method in any one of the above method embodiments.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation shall not be regarded as going beyond the scope of the present disclosure.
A person skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present disclosure.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present disclosure, which shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

  1. A face recognition method, applied to a mobile terminal, comprising:
    detecting whether the mobile terminal changes from a stationary state to a raised state;
    if the mobile terminal changes from the stationary state to the raised state, starting a camera and capturing an image through the camera;
    determining whether the image matches a preset face template; and
    if the image matches the preset face template, determining that recognition is successful.
  2. The method according to claim 1, wherein the detecting whether the mobile terminal changes from a stationary state to a raised state comprises:
    when the mobile terminal is in a preset state, detecting whether the mobile terminal changes from the stationary state to the raised state, wherein the preset state comprises a screen-off state or a lock-screen state.
  3. The method according to claim 2, wherein the capturing an image through the camera comprises: capturing an image through the camera in the preset state.
  4. The method according to claim 1, wherein before the step of determining whether the image matches a preset face template, the method further comprises:
    calculating an angle between a target face in the image and the mobile terminal; and
    if the angle is within a preset range, performing the step of determining whether the image matches the preset face template.
  5. The method according to claim 1, wherein the step of detecting whether the mobile terminal changes from a stationary state to a raised state comprises:
    acquiring, at preset time intervals, acceleration values of the mobile terminal in three-dimensional space;
    determining spatial angle information of the mobile terminal according to the acceleration values; and
    determining, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
  6. A mobile terminal, comprising:
    a detection module, configured to detect whether the mobile terminal changes from a stationary state to a raised state;
    a starting module, configured to start a camera and capture an image through the camera if the mobile terminal changes from the stationary state to the raised state;
    a determination module, configured to determine whether the image matches a preset face template; and
    a confirmation module, configured to determine that recognition is successful if the image matches the preset face template.
  7. The mobile terminal according to claim 6, wherein the detection module is specifically configured to detect, when the mobile terminal is in a preset state, whether the mobile terminal changes from the stationary state to the raised state, wherein the preset state comprises a screen-off state or a lock-screen state.
  8. The mobile terminal according to claim 7, wherein the capturing an image through the camera comprises: capturing an image through the camera in the preset state.
  9. The mobile terminal according to claim 6, wherein the mobile terminal further comprises:
    a calculation module, configured to calculate an angle between a target face in the image and the mobile terminal;
    wherein, if the angle is within a preset range, the determination module is triggered to perform the operation of determining whether the image matches the preset face template.
  10. The mobile terminal according to claim 6, wherein the detection module comprises:
    a reading unit, configured to read, at preset time intervals, acceleration values of the mobile terminal in three-dimensional space detected by a three-axis accelerometer;
    an angle determination unit, configured to determine spatial angle information of the mobile terminal according to the acceleration values; and
    a judgment unit, configured to determine, according to the amount of change of the spatial angle information within a preset time period, whether the mobile terminal changes from the stationary state to the raised state within the preset time period.
  11. A mobile terminal, comprising:
    one or more processors;
    a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and when executed, the programs implement the steps of the face recognition method according to any one of claims 1 to 5.
  12. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the face recognition method according to any one of claims 1 to 5.
PCT/CN2018/100634 2017-09-07 2018-08-15 人脸识别方法及移动终端 WO2019047694A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/645,674 US11100312B2 (en) 2017-09-07 2018-08-15 Face recognition method and mobile terminal
EP18853709.6A EP3681136A4 (en) 2017-09-07 2018-08-15 METHOD OF HUMAN FACE DETECTION AND MOBILE TERMINAL DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710800895.0A CN107707738A (zh) 2017-09-07 2017-09-07 一种人脸识别方法及移动终端
CN201710800895.0 2017-09-07

Publications (1)

Publication Number Publication Date
WO2019047694A1 true WO2019047694A1 (zh) 2019-03-14

Family

ID=61172212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100634 WO2019047694A1 (zh) 2017-09-07 2018-08-15 人脸识别方法及移动终端

Country Status (4)

Country Link
US (1) US11100312B2 (zh)
EP (1) EP3681136A4 (zh)
CN (1) CN107707738A (zh)
WO (1) WO2019047694A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107707738A (zh) * 2017-09-07 2018-02-16 维沃移动通信有限公司 一种人脸识别方法及移动终端
CN110519443B (zh) * 2018-05-22 2021-06-01 维沃移动通信有限公司 一种亮屏方法及移动终端
CN110516515B (zh) * 2018-05-22 2021-06-01 维沃移动通信有限公司 一种解锁方法和移动终端
CN112929469B (zh) * 2018-06-12 2023-09-01 Oppo广东移动通信有限公司 滑动机构控制方法、装置、电子设备及存储介质
CN113794833B (zh) * 2021-08-16 2023-05-26 维沃移动通信(杭州)有限公司 拍摄方法、装置和电子设备
CN114025093A (zh) * 2021-11-09 2022-02-08 维沃移动通信有限公司 拍摄方法、装置、电子设备和可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090262078A1 (en) * 2008-04-21 2009-10-22 David Pizzi Cellular phone with special sensor functions
CN102111490A (zh) * 2009-12-23 2011-06-29 索尼爱立信移动通讯有限公司 移动终端的键盘自动解锁方法及装置
CN104284004A (zh) * 2013-07-02 2015-01-14 华为终端有限公司 一种屏幕解锁方法及移动终端
CN105468950A (zh) * 2014-09-03 2016-04-06 阿里巴巴集团控股有限公司 身份认证方法、装置、终端及服务器
CN107707738A (zh) * 2017-09-07 2018-02-16 维沃移动通信有限公司 一种人脸识别方法及移动终端

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101754462A (zh) 2009-12-24 2010-06-23 深圳华为通信技术有限公司 一种设置移动终端状态的方法和终端
US8994499B2 (en) 2011-03-16 2015-03-31 Apple Inc. Locking and unlocking a mobile device using facial recognition
US8560004B1 (en) * 2012-08-31 2013-10-15 Google Inc. Sensor-based activation of an input device
CN103227869B (zh) * 2013-04-28 2015-08-05 广东欧珀移动通信有限公司 一种移动终端及开启移动终端摄像头的方法
US9467403B2 (en) * 2013-11-28 2016-10-11 Tencent Technology (Shenzhen) Company Limited Method and mobile terminal for speech communication
CN104735337A (zh) * 2013-12-20 2015-06-24 深圳桑菲消费通信有限公司 一种拍摄方法、装置及移动终端
CN104869220A (zh) * 2014-02-25 2015-08-26 昆山研达电脑科技有限公司 手机安全接听方法
CN103885593B (zh) 2014-03-14 2016-04-06 努比亚技术有限公司 一种手持终端及其屏幕防抖方法和装置
KR101598771B1 (ko) 2014-06-11 2016-03-02 주식회사 슈프리마에이치큐 얼굴 인식 생체 인증 방법 및 장치
CN104539838A (zh) * 2014-12-02 2015-04-22 厦门美图移动科技有限公司 一种快速打开手机相机进行视频录制的方法和装置
CN104700017B (zh) 2015-03-18 2018-03-23 上海卓易科技股份有限公司 一种基于人脸识别自动解锁方法、系统及终端
CN104935823A (zh) * 2015-06-23 2015-09-23 上海卓易科技股份有限公司 进入拍摄状态方法、进入拍摄状态装置及智能终端
CN106713665A (zh) * 2017-02-08 2017-05-24 上海与德信息技术有限公司 一种快速开启相机的方法及装置
CN107015745B (zh) 2017-05-19 2020-03-24 广东小天才科技有限公司 屏幕操作方法、装置、终端设备及计算机可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090262078A1 (en) * 2008-04-21 2009-10-22 David Pizzi Cellular phone with special sensor functions
CN102111490A (zh) * 2009-12-23 2011-06-29 索尼爱立信移动通讯有限公司 移动终端的键盘自动解锁方法及装置
CN104284004A (zh) * 2013-07-02 2015-01-14 华为终端有限公司 一种屏幕解锁方法及移动终端
CN105468950A (zh) * 2014-09-03 2016-04-06 阿里巴巴集团控股有限公司 身份认证方法、装置、终端及服务器
CN107707738A (zh) * 2017-09-07 2018-02-16 维沃移动通信有限公司 一种人脸识别方法及移动终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3681136A4 *

Also Published As

Publication number Publication date
US20200272809A1 (en) 2020-08-27
US11100312B2 (en) 2021-08-24
EP3681136A4 (en) 2020-09-23
EP3681136A1 (en) 2020-07-15
CN107707738A (zh) 2018-02-16

Similar Documents

Publication Publication Date Title
WO2019047694A1 (zh) 人脸识别方法及移动终端
US10339402B2 (en) Method and apparatus for liveness detection
US11061480B2 (en) Apparatus, method and recording medium for controlling user interface using input image
KR102080183B1 (ko) 전자 장치 및 전자 장치에서 잠금 해제 방법
WO2018166399A1 (zh) 一种显示控制方法及移动终端
WO2016127437A1 (zh) 活体人脸验证方法及系统、计算机程序产品
WO2015081820A1 (en) Voice-activated shooting method and device
EP2605172A2 (en) Multi-person gestural authentication and authorization system and method of operation thereof
US20150286281A1 (en) Generating a screenshot
US20140118520A1 (en) Seamless authorized access to an electronic device
US8897490B2 (en) Vision-based user interface and related method
US10984082B2 (en) Electronic device and method for providing user information
CN105426730A (zh) 登录验证处理方法、装置及终端设备
WO2013046373A1 (ja) 情報処理装置、制御方法及びプログラム
US20190080065A1 (en) Dynamic interface for camera-based authentication
WO2013114806A1 (ja) 生体認証装置及び生体認証方法
WO2019101096A1 (zh) 安全验证的方法、装置及移动终端
CN106445328B (zh) 一种移动终端屏幕的解锁方法及移动终端
CA2955072C (en) Reflection-based control activation
WO2013178151A1 (zh) 屏幕翻转方法及装置、移动终端
US20160357301A1 (en) Method and system for performing an action based on number of hover events
WO2019000817A1 (zh) 手势识别控制方法和电子设备
WO2018177156A1 (zh) 一种移动终端的操作方法及移动终端
WO2019037257A1 (zh) 密码输入的控制设备、方法及计算机可读存储介质
JP5958319B2 (ja) 情報処理装置、プログラム、及び方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18853709

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018853709

Country of ref document: EP

Effective date: 20200407