WO2018223270A1 - Display processing method and apparatus - Google Patents

Display processing method and apparatus

Info

Publication number
WO2018223270A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
user
description data
fingerprint
fingerprint collection
Prior art date
Application number
PCT/CN2017/087217
Other languages
English (en)
French (fr)
Inventor
徐杰
周轩
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to MYPI2019006928A priority Critical patent/MY201311A/en
Priority to PCT/CN2017/087217 priority patent/WO2018223270A1/zh
Priority to US16/619,786 priority patent/US11868604B2/en
Priority to CN202010693617.1A priority patent/CN112015502A/zh
Priority to EP22164715.9A priority patent/EP4109218B1/en
Priority to CN201780007924.6A priority patent/CN108701043B/zh
Priority to EP17912689.1A priority patent/EP3637225B1/en
Publication of WO2018223270A1 publication Critical patent/WO2018223270A1/zh
Priority to HK18116136.1A priority patent/HK1257015A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F 1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/26 - Power supply means, e.g. regulation thereof
    • G06F 1/32 - Means for saving power
    • G06F 1/3203 - Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3206 - Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F 1/3209 - Monitoring remote activity, e.g. over telephone lines or network connections
    • G06F 1/3231 - Monitoring the presence, absence or movement of users
    • G06F 1/3234 - Power saving characterised by the action undertaken
    • G06F 1/325 - Power saving in peripheral device
    • G06F 1/3265 - Power saving in display device
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 - Fingerprints or palmprints
    • G06V 40/13 - Sensors therefor
    • G06V 40/60 - Static or dynamic means for assisting the user to position a body part for biometric acquisition
    • G06V 40/67 - Static or dynamic means for assisting the user to position a body part for biometric acquisition by interactive indications to the user
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the embodiments of the present invention relate to the field of communications technologies, and in particular, to a display processing method and apparatus.
  • the fingerprint recognition technology can be used for functions such as wake-up, unlocking, and mobile payment of the terminal.
  • the fingerprint collection device can be integrated with the touch screen to implement the fingerprint recognition function of the terminal.
  • the fingerprint collection device 102 can be integrated in a certain area of the touch screen, for example, the area 101, so that when the user touches the area 101, the fingerprint collection device 102 in the area 101 is triggered to capture the user's fingerprint, thereby completing fingerprint unlocking, fingerprint payment, and other functions.
  • a fingerprint pattern or an illuminated area is generally displayed in the area 101 to prompt the user to touch that area.
  • because the terminal does not know when the user will touch the area 101 to perform fingerprint unlocking, the terminal needs to keep the area 101 lit even in the black-screen state to prompt the user to touch it. The display screen at the area 101 is therefore always in a working state, which easily causes the device to age or even burn in, and increases the power consumption of the terminal.
  • the embodiments of the invention provide a display processing method and device so that, when the fingerprint collection device is integrated in the touch screen, the probability of burn-in on the touch screen is reduced and the power consumption of the terminal is reduced.
  • an embodiment of the present invention provides a display processing method, including: instructing a display screen of a terminal to display prompt information, where the prompt information is used to prompt a user to input a fingerprint in a target display area where the fingerprint collection device is disposed; acquiring first scene description data of the terminal, where the first scene description data is used to indicate the application scenario in which the terminal is currently located; determining, according to the scene description data, that the terminal is currently in a motion scene or a call scene; and instructing the target display area to stop displaying the prompt information.
  • in this way, when the terminal displays a prompt message for the user in the target display area where the fingerprint collection device is disposed, if the terminal determines according to the current application scenario that the user does not need fingerprint collection, the terminal may stop displaying the prompt information in the target display area. The target display area integrated with the fingerprint collection device therefore does not need to be always in the working state, which reduces the probability of burn-in occurring in that area and at the same time reduces the power consumption of the terminal, as illustrated by the sketch below.
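  • The following Java sketch illustrates this decision flow under stated assumptions: the Scene record, the Prompt and Collector interfaces, and all method names are illustrative placeholders, not an API defined by the patent.

```java
/**
 * Minimal sketch: hide the fingerprint prompt (and optionally deactivate the in-screen
 * collector) when the scene description data indicates a motion scene or a call scene.
 */
public final class DisplayPromptController {

    /** Simplified first scene description data. */
    public record Scene(boolean walkingOrRunning, boolean earpiecePlaying, boolean proximityShaded) {}

    public interface Prompt { void show(); void hide(); }
    public interface Collector { void standby(); void inactive(); }

    private final Prompt prompt;
    private final Collector collector;

    public DisplayPromptController(Prompt prompt, Collector collector) {
        this.prompt = prompt;
        this.collector = collector;
    }

    /** Evaluate the scene; stop prompting when no fingerprint input is expected. */
    public void onScene(Scene s) {
        boolean motionScene = s.walkingOrRunning();
        boolean callScene = s.earpiecePlaying() && s.proximityShaded();
        if (motionScene || callScene) {
            prompt.hide();        // stop displaying the prompt in the target display area
            collector.inactive(); // the sensor need not stay powered while idle
        }
    }
}
```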
  • the foregoing first scene description data includes: posture information when the user uses the terminal; and determining, according to the scene description data, that the terminal is currently in a motion scene includes: when the posture information indicates that the user is holding the terminal while walking or running, determining that the terminal is currently in a motion scene. That is to say, when the terminal is in a motion scene, it can be considered that the user does not need to use the fingerprint collection function, so the display of the prompt information can be stopped, thereby saving power consumption of the terminal and avoiding burn-in.
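  • One plausible way to obtain such posture information, not prescribed by the patent, is to look at the variation of accelerometer magnitudes over a short window; the thresholds below are invented for illustration.

```java
import java.util.List;

/** Illustrative only: classify "walking or running while holding the terminal". */
public final class MotionSceneDetector {

    private static final double MIN_SWING = 1.5;   // assumed lower bound of step-induced variation (m/s^2)
    private static final double MAX_SWING = 12.0;  // assumed upper bound, e.g. running (m/s^2)

    /** @param magnitudes recent |acceleration| samples, e.g. a two-second window */
    public static boolean isWalkingOrRunning(List<Double> magnitudes) {
        if (magnitudes.size() < 10) {
            return false;                           // not enough data to decide
        }
        double min = Double.MAX_VALUE;
        double max = -Double.MAX_VALUE;
        for (double m : magnitudes) {
            min = Math.min(min, m);
            max = Math.max(max, m);
        }
        double swing = max - min;                   // peak-to-peak variation around gravity
        return swing >= MIN_SWING && swing <= MAX_SWING;
    }

    public static void main(String[] args) {
        List<Double> window = List.of(9.0, 10.5, 8.6, 11.2, 9.1, 10.8, 8.9, 11.0, 9.3, 10.6);
        System.out.println("motion scene: " + isWalkingOrRunning(window));  // true
    }
}
```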
  • the first scene description data includes: an operating state of the earpiece and an operating state of the proximity light sensor; and determining, according to the scene description data, that the terminal is currently in a call scene includes: when it is determined that the earpiece is playing a sound and the proximity light sensor detects that a light-shielding object is present near the earpiece, determining that the terminal is currently in a call scene.
  • the foregoing first scene description data may further include: an operating state of the proximity light sensor and an operating state of the distance sensor; and determining, according to the scene description data, that the terminal is currently in a call scene includes: when the terminal is in a voice call state, the distance sensor detects that there is an object within a preset distance from the terminal, and the proximity light sensor detects that a light-shielding object is present near the earpiece, determining that the terminal is currently in a call scene.
  • that is to say, when the terminal is in a call in earpiece mode, it can be considered that the user does not need to use the fingerprint collection function at this time; therefore, the display of the prompt information can be stopped, thereby saving power consumption of the terminal and avoiding burn-in. Both checks are expressed in the sketch below.
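  • Expressed as code, the two call-scene checks could look like the following sketch; the parameters are assumed boolean and numeric inputs, since the patent does not prescribe a data model.

```java
/** Sketch of the two call-scene checks described above; names are illustrative. */
public final class CallSceneDetector {

    /** Variant 1: the earpiece is playing sound and the proximity light sensor is shaded. */
    public static boolean isCallScene(boolean earpiecePlaying, boolean proximityShaded) {
        return earpiecePlaying && proximityShaded;
    }

    /** Variant 2: a voice call is active, an object is within the preset distance, and the sensor is shaded. */
    public static boolean isCallScene(boolean inVoiceCall,
                                      double distanceToObjectCm,
                                      double presetDistanceCm,
                                      boolean proximityShaded) {
        return inVoiceCall && distanceToObjectCm <= presetDistanceCm && proximityShaded;
    }

    public static void main(String[] args) {
        System.out.println(isCallScene(true, true));            // true: earpiece-based check
        System.out.println(isCallScene(true, 2.0, 5.0, true));  // true: distance-based check
    }
}
```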
  • the method further includes: acquiring second scene description data of the terminal, where the second scene description data is used to indicate the application scenario in which the terminal is currently located; and when the second scene description data indicates that the user needs to use the fingerprint collection function, instructing the target display area to display the prompt information.
  • in this way, the terminal can determine, according to the second scene description data, that the user currently needs to use the fingerprint collection function, and output the prompt information in the target display area where the fingerprint collection device is disposed, so that the user can accurately know the specific location on the touch screen where the fingerprint should subsequently be entered, thereby improving the efficiency of in-screen fingerprint recognition.
  • the foregoing second scene description data includes: posture information when the user uses the terminal; and determining, according to the scene description data, that the user needs to use the fingerprint collection function includes: when the posture information indicates that the user is picking up the terminal, determining that the user needs to use the fingerprint collection function. That is to say, when the terminal is in a hand-raising scene, it can be considered that the user needs to use the fingerprint collection function at this time; therefore, the prompt information can be displayed in the target display area, so that the user can accurately know the specific location on the touch screen where the fingerprint should be entered.
  • the foregoing second scene description data includes: a trigger event performed by the user on the terminal; and determining, according to the scene description data, that the user needs to use the fingerprint collection function includes: when the trigger event is a preset operation for waking up the terminal screen, determining that the user needs to use the fingerprint collection function. That is to say, when the user intends to wake up the terminal screen, it can be considered that the user needs to use the fingerprint collection function at this time; therefore, the prompt information can be displayed in the target display area, so that the user can accurately know the specific location on the touch screen where the fingerprint should be entered.
  • the foregoing second scene description data includes: a real-time signal received by the terminal; and determining, according to the scene description data, that the user needs to use the fingerprint collection function includes: when the real-time signal is a new incoming call event or a message event, determining that the user needs to use the fingerprint collection function. That is to say, when the terminal receives a new incoming call or message, it can be considered that the user needs to use the fingerprint collection function at this time; therefore, the prompt information can be displayed in the target display area, so that the user can accurately know the specific location on the touch screen where the fingerprint should subsequently be entered. The three triggers are summarized in the sketch below.
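  • The three "user needs fingerprint collection" triggers described above can be captured in a small sketch; the enum values and method names are illustrative and not taken from the patent.

```java
/** Sketch of the decision: does the user currently need fingerprint collection? */
public final class FingerprintNeedDetector {

    public enum Trigger { PICKED_UP_TERMINAL, WAKE_SCREEN_OPERATION, NEW_CALL_OR_MESSAGE, NONE }

    public static boolean userNeedsFingerprint(Trigger trigger) {
        switch (trigger) {
            case PICKED_UP_TERMINAL:      // posture information indicates a hand-raising gesture
            case WAKE_SCREEN_OPERATION:   // e.g. hover gesture, power key press, double tap
            case NEW_CALL_OR_MESSAGE:     // a real-time incoming call or message event
                return true;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(userNeedsFingerprint(Trigger.PICKED_UP_TERMINAL)); // true
        System.out.println(userNeedsFingerprint(Trigger.NONE));               // false
    }
}
```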
  • indicating that the prompt information is displayed on the display screen of the terminal includes: indicating that the terminal lights the target display area; or indicating that the fingerprint pattern is displayed in the target display area.
  • when the user needs to use the fingerprint collection function, the method further includes: setting the fingerprint collection device to a standby state. In this way, once the user's finger is pressed on the fingerprint collection device, the fingerprint collection device can immediately start collecting fingerprint information, thereby improving the response speed of fingerprint collection by the terminal.
  • the method further includes: setting the fingerprint collection device to an inactive state. In this way, even if a finger is pressed on the fingerprint collection device, the fingerprint collection device does not perform fingerprint collection, thereby avoiding the misoperation caused by the user accidentally touching the fingerprint acquisition device while operating in the screen, and at the same time reducing the power consumption of the terminal.
  • the method further includes: detecting an operation event of the user in the target display area; and in response to the operation event meeting a preset condition, collecting the user's fingerprint by using the fingerprint collection device; where the preset condition includes at least one of the following: the pressing force of the operation event is greater than a first preset value, the duration of the operation event is greater than a second preset value, the movement displacement of the operation event is greater than a third preset value, and the number of touches of the operation event is greater than a fourth preset value.
  • that is to say, when the terminal detects that the user operates the target display area of the touch screen, the fingerprint collection device is triggered to collect the user's fingerprint only when the operation event satisfies the preset condition. This prevents the terminal from being triggered into collecting the user's fingerprint by an accidental touch of the target display area, as the sketch below illustrates.
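  • A sketch of the preset-condition gate follows; the four thresholds stand in for the first to fourth preset values, and the concrete numbers are invented purely for illustration.

```java
/** Sketch of the preset-condition check that gates fingerprint collection. */
public final class OperationEventFilter {

    public record OperationEvent(double pressForce, long durationMs, double displacementPx, int touchCount) {}

    private static final double FORCE_THRESHOLD = 2.0;       // first preset value (arbitrary unit)
    private static final long DURATION_THRESHOLD_MS = 300;   // second preset value
    private static final double DISPLACEMENT_THRESHOLD = 20; // third preset value, in pixels
    private static final int TOUCH_COUNT_THRESHOLD = 1;      // fourth preset value

    /** Returns true when at least one of the preset conditions is met. */
    public static boolean shouldCollectFingerprint(OperationEvent e) {
        return e.pressForce() > FORCE_THRESHOLD
                || e.durationMs() > DURATION_THRESHOLD_MS
                || e.displacementPx() > DISPLACEMENT_THRESHOLD
                || e.touchCount() > TOUCH_COUNT_THRESHOLD;
    }

    public static void main(String[] args) {
        OperationEvent lightTap = new OperationEvent(0.5, 80, 2, 1);
        OperationEvent longPress = new OperationEvent(1.0, 600, 3, 1);
        System.out.println(shouldCollectFingerprint(lightTap));   // false: likely an accidental touch
        System.out.println(shouldCollectFingerprint(longPress));  // true: duration exceeds threshold
    }
}
```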
  • an embodiment of the present invention provides a terminal, including: a display unit, configured to display prompt information on a display screen of the terminal, where the prompt information is used to prompt the user to enter a fingerprint in a target display area where the fingerprint collection device is disposed; an acquiring unit, configured to acquire first scene description data of the terminal, where the first scene description data is used to indicate the application scenario in which the terminal is currently located; a determining unit, configured to determine, according to the scene description data, that the terminal is currently in a motion scene or a call scene; and an execution unit, configured to stop displaying the prompt information in the target display area.
  • the first scene description data includes: posture information when the user uses the terminal; and the determining unit is specifically configured to: when the posture information indicates that the user holds the terminal while walking or running, determine that the terminal is currently in a motion scene.
  • the first scene description data includes: an operating state of the earpiece and an operating state of the proximity light sensor; and the determining unit is specifically configured to: when the earpiece is playing a sound and the proximity light sensor detects that a light-shielding object is placed around the earpiece, determine that the terminal is currently in a call scene.
  • the acquiring unit is further configured to: acquire second scenario description data of the terminal, where the second scenario description data is used to indicate an application scenario that the terminal is currently located; and the display unit is further configured to: When the second scene description data indicates that the user needs to use the fingerprint collection function, the prompt information is displayed in the target display area.
  • the second scene description data includes: posture information when the user uses the terminal; and the determining unit is further configured to: when the posture information indicates that the user picks up the terminal, determine that the user needs to use the fingerprint collection function.
  • the second scene description data includes: a trigger event performed by the user on the terminal; and the determining unit is further configured to: when the trigger event is a preset operation for waking up the terminal screen, determine that the user needs to use the fingerprint collection function.
  • the second scene description data includes: a real-time signal received by the terminal; and the determining unit is further configured to: when the real-time signal is a new incoming call event or a message event, determine that the user needs to use the fingerprint collection function.
  • the display unit is specifically configured to: illuminate the target display area; or display a fingerprint pattern in the target display area.
  • the execution unit is further configured to: when the user needs to use the fingerprint collection function, set the fingerprint collection device to a standby state.
  • the execution unit is further configured to: set the fingerprint collection device to an inoperative state.
  • the acquiring unit is further configured to: detect an operation event of the user in the target display area; and the execution unit is further configured to: in response to an operation event that meets a preset condition, collect the user's fingerprint by using the fingerprint collection device; where the preset condition includes at least one of the following: the pressing force of the operation event is greater than a first preset value, the duration of the operation event is greater than a second preset value, the movement displacement of the operation event is greater than a third preset value, and the number of touches of the operation event is greater than a fourth preset value.
  • an embodiment of the present invention provides a terminal, including: a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the terminal runs, the processor executes the computer-executable instructions stored in the memory to cause the terminal to perform any one of the foregoing display processing methods.
  • an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores an instruction that, when run on any one of the foregoing terminals, causes the terminal to perform any one of the foregoing display processing methods.
  • an embodiment of the present invention provides a computer program product including instructions, which, when run on any of the above terminals, causes the terminal to execute any of the above display processing methods.
  • the names of the foregoing terminals are not limited to the devices themselves, and in actual implementation, the devices may appear under other names. As long as the functions of the respective devices are similar to the embodiments of the present invention, they are within the scope of the claims and the equivalents thereof.
  • FIG. 1 is a schematic diagram of an application scenario of a fingerprint in a screen in the prior art
  • FIG. 2 is a schematic structural diagram 1 of a terminal according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a display screen according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a display processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an application scenario of a floating touch according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram 1 of an application scenario of a display processing method according to an embodiment of the present disclosure
  • FIG. 7 is a second schematic diagram of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram 3 of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram 4 of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram 5 of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram 6 of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram 7 of an application scenario of a display processing method according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic structural diagram 2 of a terminal according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic structural diagram 3 of a terminal according to an embodiment of the present invention.
  • first and second are used for descriptive purposes only, and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” and “second” may include one or more of the features either explicitly or implicitly.
  • the meaning of "a plurality" is two or more unless otherwise specified.
  • the embodiment of the present invention provides a display processing method.
  • the terminal can determine the application scenario in which it is currently located by acquiring the scene description data, for example, a call scenario, a running scenario, or a hovering touch scenario, and can then further determine whether the user needs to use the fingerprint collection function in the current application scenario, that is, whether the user intends to use the fingerprint recognition function. For example, in a fingerprint payment scenario, the user needs fingerprint collection to complete identity authentication and payment, whereas when the terminal is placed in a pocket or backpack by the user, the fingerprint collection function does not need to be enabled.
  • the terminal can display a prompt message to the user in the target display area (ie, an area on the touch screen) provided with the fingerprint collection device when the user has the requirement of using the fingerprint collection function, and prompt the user to input the fingerprint in the target display area.
  • the terminal may stop displaying the prompt information in the target display area.
  • when the terminal determines that the user needs to use the fingerprint collection function, a prompt message is displayed for the user in the target display area of the touch screen, so that the user can clearly know the specific location for entering the fingerprint on the touch screen. Compared with the prior art, in the embodiments of the present invention the target display area integrated with the fingerprint collection device does not need to be always in the working state, thereby reducing the probability of burn-in in that area while also reducing the power consumption of the terminal.
  • the foregoing display processing method provided by the embodiments of the present invention can be applied to any terminal such as a mobile phone, a wearable device, an AR (augmented reality)/VR (virtual reality) device, a tablet computer, a notebook computer, a UMPC (ultra-mobile personal computer), a netbook, or a PDA (personal digital assistant); this is not limited in the embodiments of the present invention.
  • the terminal in the embodiment of the present application may be the mobile phone 100.
  • the embodiment will be specifically described below by taking the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of a terminal, and the mobile phone 100 may have more or fewer components than those shown in the figures, two or more components may be combined, or may have Different component configurations.
  • the mobile phone 100 may specifically include: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a Wi-Fi device 107, a positioning device 108, Components such as audio circuit 109, peripheral interface 110, and power system 111. These components can communicate over one or more communication buses or signal lines (not shown in Figure 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation to the mobile phone, and the mobile phone 100 may include more or less components than those illustrated, or some components may be combined, or different component arrangements.
  • the processor 101 is the control center of the mobile phone 100 and connects various parts of the mobile phone 100 through various interfaces and lines. By running or executing an application (hereinafter referred to as an App) stored in the memory 103 and invoking data stored in the memory 103, the processor 101 performs various functions of the mobile phone 100 and processes data.
  • the processor 101 may include one or more processing units; for example, the processor 101 may be a Kirin 960 chip manufactured by Huawei Technologies Co., Ltd.
  • the radio frequency circuit 102 can be used to receive and transmit wireless signals during the sending or receiving of information or during a call.
  • in particular, the radio frequency circuit 102 can receive downlink data from the base station and deliver it to the processor 101 for processing, and can also send uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • the memory 103 mainly includes a program storage area and a data storage area. The program storage area can store an operating system and applications required by at least one function (such as a sound playing function or an image playing function); the data storage area can store data created during the use of the mobile phone 100 (such as audio data or a phone book).
  • in addition, the memory 103 may include a high-speed random access memory, and may also include a non-volatile memory such as a magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 103 can store various operating systems, such as an iOS operating system developed by Apple Inc., an Android operating system developed by Google Inc., and the like.
  • the touch screen 104 can include a touch panel 104-1 and a display 104-2.
  • the touch panel 104-1 can collect touch events performed by the user of the mobile phone 100 on or near it (for example, an operation performed by the user on or near the touch panel 104-1 with a finger, a stylus, or any other suitable object) and transmit the collected touch information to another device such as the processor 101.
  • the touch event of the user in the vicinity of the touch panel 104-1 may be referred to as a hovering touch; a hovering touch means that the user does not need to directly touch the touchpad in order to select, move, or drag a target (for example, an icon), but only needs to be located near the terminal in order to perform the desired function.
  • the touch panel 104-1 capable of floating touch can be implemented using a capacitive type, infrared sensing, ultrasonic waves, or the like. In addition, resistive, capacitive, infrared, and surface acoustic wave types can be used to implement the touch panel 104-1.
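  • As a rough illustration of how mutual-capacitance and self-capacitance readings could be combined to distinguish contact from hovering (the signal units and thresholds below are invented; a real touch controller does this in firmware):

```java
/** Rough sketch of hover-versus-contact classification from two capacitance signals. */
public final class HoverTouchClassifier {

    public enum State { CONTACT, HOVER, NONE }

    private static final double MUTUAL_CONTACT_THRESHOLD = 0.8; // strong mutual-capacitance change: finger on the glass
    private static final double SELF_HOVER_THRESHOLD = 0.3;     // weaker self-capacitance change: finger near the glass

    public static State classify(double mutualSignal, double selfSignal) {
        if (mutualSignal >= MUTUAL_CONTACT_THRESHOLD) {
            return State.CONTACT;   // normal (multi-)touch handled by mutual capacitance
        }
        if (selfSignal >= SELF_HOVER_THRESHOLD) {
            return State.HOVER;     // self-capacitance still sees the finger at a distance
        }
        return State.NONE;
    }

    public static void main(String[] args) {
        System.out.println(classify(0.9, 0.9)); // CONTACT
        System.out.println(classify(0.1, 0.5)); // HOVER
        System.out.println(classify(0.0, 0.0)); // NONE
    }
}
```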
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • although the touchpad 104-1 and the display 104-2 are described here as two separate components implementing the input and output functions of the handset 100, in some embodiments the touchpad 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It should be understood that the touch screen 104 is formed by stacking multiple layers of materials.
  • in addition, the touch panel 104-1 may be overlaid on the display 104-2, and the size of the touch panel 104-1 may be larger than that of the display 104-2, so that the display 104-2 is completely covered under the touch panel 104-1; alternatively, the touch panel 104-1 may be disposed on the front side of the mobile phone 100 as a full panel, that is, any touch by the user on the front of the mobile phone 100 can be perceived by the phone, achieving a full-touch experience on the front of the phone.
  • in other embodiments, the touch panel 104-1 is disposed on the front side of the mobile phone 100 as a full panel, and the display screen 104-2 may also be disposed on the front side of the mobile phone 100 as a full panel, so that a bezel-less structure can be realized on the front of the phone.
  • the mobile phone 100 may further have a fingerprint recognition function.
  • the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint capture device 112 is disposed in the touch screen 104 and may be part of the touch screen 104 or may be otherwise disposed in the touch screen 104.
  • the fingerprint acquisition device 112 can also be implemented as a full-board fingerprint acquisition device.
  • the touch screen 104 can be viewed as a panel that can be fingerprinted at any location.
  • the fingerprint collection device 112 can transmit the collected fingerprint to the processor 101 for the processor 101 to process the fingerprint (eg, fingerprint verification, etc.).
  • the main component of the fingerprint collection device 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • the above-mentioned fingerprint collection device 112 may be a capacitive collection device 112-1.
  • the touch screen 104 may specifically include a capacitive fingerprint collection device 112-1, a touch panel 104-1, and a display 104-2.
  • the display 104-2 is located at the lowest layer of the touch screen 104, the touch panel 104-1 is located at the uppermost layer of the touch screen 104, and the capacitive acquisition device 112-1 is located between the touch panel 104-1 and the display 104-2.
  • the position of the ridges and valleys of the fingerprint may be respectively determined according to the magnitude of the capacitance formed by the ridges and valleys of the fingerprint and the capacitance-inducing particles of the capacitive acquisition device 112-1, thereby acquiring fingerprint information.
  • specifically, the capacitance-sensing particles at each pixel in the screen may be charged in advance so that the capacitance-sensing particles reach a preset threshold capacitance value. When the user touches the touch screen 104, because there is a preset relationship between capacitance and distance, different capacitance values are formed at the positions of the ridges and the valleys of the fingerprint. The pixels are then discharged by a discharge current; because the capacitance values corresponding to the ridges and the valleys differ, the discharge speeds of the corresponding pixels also differ: the pixels corresponding to ridges discharge slowly, while the pixels corresponding to valleys discharge quickly. Therefore, the user's fingerprint information can be acquired by charging and discharging the pixels corresponding to the ridges and valleys.
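  • A toy sketch of this ridge/valley discrimination follows; the discharge-time threshold and units are invented solely for illustration.

```java
/** Toy sketch: pixels under ridges discharge slowly, pixels under valleys discharge quickly. */
public final class CapacitiveFingerprintImager {

    private static final double DISCHARGE_TIME_THRESHOLD_US = 5.0; // assumed split between fast and slow discharge

    /**
     * @param dischargeTimesUs per-pixel discharge times in microseconds
     * @return a binary map where true marks a ridge pixel and false marks a valley pixel
     */
    public static boolean[][] toRidgeMap(double[][] dischargeTimesUs) {
        boolean[][] ridge = new boolean[dischargeTimesUs.length][];
        for (int row = 0; row < dischargeTimesUs.length; row++) {
            ridge[row] = new boolean[dischargeTimesUs[row].length];
            for (int col = 0; col < dischargeTimesUs[row].length; col++) {
                // Slow discharge means higher capacitance, i.e. skin in contact, i.e. a ridge.
                ridge[row][col] = dischargeTimesUs[row][col] > DISCHARGE_TIME_THRESHOLD_US;
            }
        }
        return ridge;
    }

    public static void main(String[] args) {
        double[][] times = { {8.1, 2.3, 7.9}, {2.0, 8.4, 2.2} };
        for (boolean[] row : toRidgeMap(times)) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}
```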
  • the above-mentioned fingerprint collection device 112 may also be a radio frequency fingerprint collection device 112-2.
  • the touch screen 104 may include a radio frequency fingerprint collection device 112-2, a touch panel 104-1, and a display 104-2.
  • the radio frequency fingerprint collection device 112-2 is located at the lowest layer of the touch screen 104.
  • the touch panel 104-1 is located at the uppermost layer in the touch screen 104, and the display 104-2 is located between the touch panel 104-1 and the radio frequency fingerprint collection device 112-2.
  • the radio frequency fingerprint collection device 112-2 can acquire the fingerprint information by absorbing reflected light with a CCD (charge-coupled device). Specifically, because of the difference in depth between the ridges and valleys of the fingerprint pressed on the touch panel 104-1, and because of the grease and moisture between the skin and the touch panel 104-1, light passing through the touch panel 104-1 is totally reflected at the positions of the fingerprint valleys, whereas at the positions of the fingerprint ridges total reflection cannot occur and part of the light is absorbed by the touch panel 104-1 or diffused elsewhere, thereby forming the fingerprint information on the CCD.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range terminals (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually on three axes), and can detect the magnitude and direction of gravity when stationary. It can be used for applications that identify the posture of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping).
  • the mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, and details are not described herein again.
  • the Wi-Fi device 107 is configured to provide the mobile phone 100 with network access complying with Wi-Fi related standard protocols. The mobile phone 100 can connect to a Wi-Fi access point through the Wi-Fi device 107, helping the user to send and receive e-mails, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access.
  • the Wi-Fi device 107 can also function as a Wi-Fi wireless access point, and can provide Wi-Fi network access to other terminals.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 may specifically be a receiver of a positioning system such as the Global Positioning System (GPS), the Beidou satellite navigation system, or the Russian GLONASS. After receiving the geographic location sent by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage.
  • in some other embodiments, the positioning device 108 may also be a receiver of the Assisted Global Positioning System (AGPS), which assists the positioning device 108 in performing ranging and positioning services by acting as an assistance server; in this case, the assistance server communicates with the positioning device 108 (that is, the GPS receiver) of the handset 100 over a wireless communication network to provide positioning assistance.
  • in some other embodiments, the positioning device 108 may also use a positioning technology based on Wi-Fi access points. Because each Wi-Fi access point has a globally unique MAC address, the terminal can scan and collect the broadcast signals of the surrounding Wi-Fi access points when Wi-Fi is turned on, and thus obtain information identifying those access points; the geographic locations of the access points, combined with the strength of their Wi-Fi broadcast signals, are used to calculate the geographic location of the terminal, which is then sent to the positioning device 108 of the terminal.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • on one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, and the speaker 113 converts it into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data, and the audio data is then output to the RF circuit 102 to be sent to, for example, another mobile phone, or output to the memory 103 for further processing.
  • the peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, a subscriber identity module card, etc.).
  • for example, a mouse is connected through a universal serial bus (USB) interface, and a subscriber identity module (SIM) card provided by the telecommunications carrier is connected through metal contacts in the SIM card slot.
  • Peripheral interface 110 can be used to couple the external input/output peripherals described above to processor 101 and memory 103.
  • the mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components.
  • the battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are implemented through the power supply device 111.
  • the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
  • the method includes:
  • the terminal acquires scenario description data, where the scenario description data is used to indicate an application scenario where the terminal is currently located.
  • when the terminal is in a scenario in which fingerprint collection may be required during operation, for example, when the terminal runs an application with a fingerprint payment function, or when the terminal is in the lock-screen or black-screen state (the user may need to unlock the terminal screen through a fingerprint), the terminal can be triggered to obtain the current scene description data.
  • the scene description data may specifically be at least one of the following: the distance between the user's finger and the touch screen, touch data of the user outside the target display area, posture information when the user uses the terminal, the current ambient light intensity, the working state of the earpiece, and the working state of the proximity light sensor. These data reflect the application scenario in which the terminal is currently located, for example, a call scene or a hovering touch scene. In this way, the terminal can further determine, based on the scene description data, whether the user needs to use the fingerprint collection function in the current application scenario; one possible container for these data is sketched below.
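  • A sketch of a simple container for this scene description data, with field names and types chosen for illustration only (the patent lists the kinds of data involved but defines no concrete structure):

```java
/** Illustrative container for the scene description data enumerated above. */
public final class SceneDescriptionData {

    public final double fingerToScreenDistanceMm;   // distance between the user's finger and the touch screen
    public final boolean touchOutsideTargetArea;    // touch data outside the target display area
    public final float[] postureSample;             // e.g. gyroscope / acceleration readings
    public final float ambientLightLux;             // current ambient light intensity
    public final boolean earpiecePlaying;           // working state of the earpiece
    public final boolean proximitySensorShaded;     // working state of the proximity light sensor

    public SceneDescriptionData(double fingerToScreenDistanceMm,
                                boolean touchOutsideTargetArea,
                                float[] postureSample,
                                float ambientLightLux,
                                boolean earpiecePlaying,
                                boolean proximitySensorShaded) {
        this.fingerToScreenDistanceMm = fingerToScreenDistanceMm;
        this.touchOutsideTargetArea = touchOutsideTargetArea;
        this.postureSample = postureSample.clone();
        this.ambientLightLux = ambientLightLux;
        this.earpiecePlaying = earpiecePlaying;
        this.proximitySensorShaded = proximitySensorShaded;
    }
}
```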
  • a distance recognition sensor may be disposed in the terminal, so that the terminal can obtain the distance between the user's finger and the touch screen through the distance recognition sensor; or, the touch screen of the terminal may have a floating (hovering) touch function, in which case the capacitive devices in the touch screen can determine the distance between the user's finger and the touch screen according to changes in the capacitance signal, and the terminal can determine, based on this distance, whether it is currently in a floating touch scene.
  • the touch event of the user near the touch screen may be referred to as a hovering touch; a hovering touch means that the user does not need to directly touch the touchpad in order to select, move, or drag a target (for example, an icon), but only needs to be located near the terminal in order to perform the desired function.
  • two types of capacitive sensors that is, a mutual capacitance sensor and a self-capacitance sensor, may be disposed in the touch panel of the terminal 100, and the two types of capacitance sensors may be alternately arranged on the touch panel.
  • the mutual capacitance sensor is used to realize normal multi-touch; the self-capacitance sensor can generate a stronger signal than the mutual capacitance sensor, and can therefore detect a finger farther away from the touch panel. Therefore, as shown in FIG. 5, when the user's finger hovers over the screen, the signal generated by the self-capacitance sensor is larger than the signal generated by the mutual capacitance sensor, so that the terminal 100 can detect the user's finger above the screen, for example, hovering over the touch panel.
  • when the user touches an area other than the target display area, for example, an area of the touch screen outside the target display area or the frame of the terminal, the terminal can use the capacitance signal generated during the touch as the touch data, and the terminal can determine the user's gesture according to that touch data.
  • the terminal may obtain the value of the gyroscope and/or the acceleration sensor set in the terminal, and use the value as the posture information when the user uses the terminal.
  • the posture information may reflect the posture of the user currently using the terminal, for example, the running posture, the raising posture of the terminal, and the like.
  • the proximity light sensor may be disposed in the terminal.
  • the scene description data includes a detection value of the proximity light sensor.
  • in this way, the terminal can determine, through the proximity light sensor, the intensity of the current ambient light and whether the terminal is currently occluded. For example, when the user puts the terminal into a pocket, the ambient light intensity detected by the proximity light sensor approaches 0, and the terminal can detect that a light-shielding object is present around the terminal.
  • the scene description data may further include an operating state of the earpiece. Then, when it is detected that the handset is playing a sound, it indicates that the user is making a call using the terminal, that is, the terminal is currently in a call state.
  • the working state of the fingerprint collection device can be directly adjusted to the inactive state. For example, turn off the power of the fingerprint capture device, or switch the fingerprint capture device to sleep. In this way, even if a finger is pressed on the fingerprint collection device, the fingerprint collection device does not perform fingerprint collection, thereby avoiding the misoperation caused by the user accidentally touching the fingerprint acquisition device while operating in the screen, and at the same time reducing the power consumption of the terminal.
  • the terminal determines, according to the scenario description data, whether the user needs to use a fingerprint collection function.
  • for example, when the terminal is in a black screen state, if a trigger event performed by the user on the terminal is detected, and the trigger event is determined to be a preset operation for waking up the terminal screen (for example, a hovering gesture, pressing the power button, or a double-click operation), this indicates that the user intends to wake up or unlock the terminal screen. Therefore, the terminal can determine that the user needs to use the fingerprint collection function at this time.
  • alternatively, when the terminal is in a black screen state and the touch data in the scene description data is a preset touch gesture, for example, a sliding, long-press, press, or click gesture, this indicates that the user intends to wake up or unlock the terminal screen, so the terminal can determine that the user needs to use the fingerprint collection function at this time.
  • alternatively, when the acceleration sensor of the terminal detects that the current acceleration changes within the threshold range of a preset pick-up state, it indicates that the user is picking up the terminal in order to use it; at this time the terminal screen needs to be woken up or unlocked through fingerprint recognition, so the terminal can determine that the user needs to use the fingerprint collection function.
  • the scenario description data may also be a real-time signal received by the terminal. Then, if the real-time signal received by the terminal is a new incoming call event or a message event, it can also be determined that the user needs to use the fingerprint collecting function at this time.
  • that is to say, when at least one of the following holds: the touch data is a preset touch gesture, the posture information indicates that the user picks up the terminal, or the real-time signal received by the terminal is a new incoming call event or a message event, the terminal can determine that the user needs to use the fingerprint collection function at this time.
  • the following steps 403b-404b may be continued.
  • when the gesture information in the scene description data indicates that the user is holding the terminal while walking or running, it may be determined that the user does not need to use the fingerprint collection function at this time.
  • alternatively, when the proximity light sensor detects that the intensity of the ambient light is within a second threshold range, for example, approaches 0, and a shading object is present around the terminal, the terminal has most likely been put into a pocket or backpack by the user; therefore, the terminal can determine that the user does not need to use the fingerprint collection function at this time.
  • similarly, when the terminal is in a voice call state, if the distance sensor detects an object within a preset distance from the terminal and the proximity light sensor detects that a light-shielding object is present near the earpiece, this may also indicate that the user is making a call with the terminal in earpiece mode; therefore, the terminal can determine that the user does not need to use the fingerprint collection function at this time. These three checks are combined in the sketch below.
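  • A sketch combining the three checks above (walking or running, pocket mode, and an earpiece-mode call); the light-intensity threshold is an invented stand-in for the "second threshold range".

```java
/** Sketch of the "user does not need fingerprint collection" checks described above. */
public final class FingerprintNotNeededDetector {

    private static final float POCKET_LUX_THRESHOLD = 1.0f;   // stand-in for "ambient light close to 0"

    public static boolean pocketMode(float ambientLightLux, boolean shadingObjectNearby) {
        return ambientLightLux <= POCKET_LUX_THRESHOLD && shadingObjectNearby;
    }

    public static boolean earpieceCall(boolean inVoiceCall, boolean objectWithinPresetDistance,
                                       boolean proximitySensorShaded) {
        return inVoiceCall && objectWithinPresetDistance && proximitySensorShaded;
    }

    public static boolean userDoesNotNeedFingerprint(boolean walkingOrRunning,
                                                     float ambientLightLux,
                                                     boolean shadingObjectNearby,
                                                     boolean inVoiceCall,
                                                     boolean objectWithinPresetDistance,
                                                     boolean proximitySensorShaded) {
        return walkingOrRunning
                || pocketMode(ambientLightLux, shadingObjectNearby)
                || earpieceCall(inVoiceCall, objectWithinPresetDistance, proximitySensorShaded);
    }

    public static void main(String[] args) {
        // Terminal in a pocket: almost no ambient light and a shading object around the terminal.
        System.out.println(userDoesNotNeedFingerprint(false, 0.0f, true, false, false, false)); // true
    }
}
```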
  • the following steps 403a-404a may continue to be performed.
  • the terminal stops displaying prompt information in the target display area where the fingerprint collection device is disposed, and the prompt information is used to prompt the user to input a fingerprint in the target display area.
  • in step 403a, as shown in FIG. 6, when the posture information indicates that the user is holding the terminal while walking or running (shown in (a) of FIG. 6), or the terminal is in pocket mode (shown in (b) of FIG. 6), or the terminal is making a call or playing voice in earpiece mode (shown in (c) of FIG. 6), the terminal can determine that the user does not need to use the fingerprint collection function at this time. Then, if the display screen of the terminal was originally showing the prompt information in the target display area 400 where the fingerprint collection device is disposed, the terminal can stop displaying the prompt information, for example, by hiding the fingerprint pattern being displayed in the target display area 400.
  • when the terminal does not display the prompt information in the target display area, the terminal still maintains the black screen state, and there is no need to prompt the user about the specific location for fingerprint collection on the display screen, thereby saving power consumption of the terminal and avoiding burn-in.
  • the terminal adjusts the working state of the fingerprint collection device to an inactive state.
  • in step 404a, when it is determined that the user does not need to use the fingerprint collection function at this time, the terminal may also adjust the working state of the fingerprint collection device to an inoperative state, for example, by turning off the power of the fingerprint collection device, putting the fingerprint collection device into a sleep state, putting the fingerprint collection device into a low power consumption state, or reducing the scanning frequency of the fingerprint collection device.
  • in this way, even if a finger is pressed on the fingerprint collection device, the fingerprint collection device does not perform fingerprint collection, thereby avoiding the misoperation caused by the user accidentally touching the fingerprint collection device while operating on the screen, and at the same time reducing the power consumption of the terminal.
  • step 403a may be performed before step 404a is performed, or step 404a is performed first, and then step 403a is performed. Or, the steps 403a and 404a are performed at the same time, and the embodiment of the present invention does not impose any limitation.
  • Correspondingly, when it is determined in step 402 that the user needs to use the fingerprint collection function, the following steps 403b-404b may be performed:
  • 403b. The terminal displays prompt information in the target display area where the fingerprint collection device is disposed; the prompt information is used to prompt the user to enter a fingerprint in the target display area.
  • In step 403b, as shown in FIG. 7, when the terminal detects that the trigger event performed by the user is a preset operation for waking up the terminal screen (the hover gesture shown in (a) of FIG. 7), or when the gesture information indicates that the user picks up the terminal (shown in (b) of FIG. 7), or when the terminal receives a new incoming call event or message event (shown in (c) of FIG. 7), the terminal may determine that the user needs to use the fingerprint collection function at this time. Then, if the terminal was originally in the black-screen state, the terminal outputs prompt information in the target display area 400 where the fingerprint collection device is disposed, to prompt the user to enter a fingerprint in the target display area.
  • For example, a fingerprint pattern may be displayed in the target display area 400 of the terminal, or the target display area 400 or its edge may be illuminated in a different color; of course, the user may also be prompted, in the form of an animation or text, to enter a fingerprint in the target display area, which is not limited in this embodiment of the present invention.
  • In this way, the terminal can timely and accurately determine, according to the scene description data, that the current user needs to use the fingerprint collection function, and then output the prompt information in the target display area 400 where the fingerprint collection device is disposed, so that the user can accurately learn the specific location at which to subsequently enter a fingerprint on the touchscreen, thereby improving the recognition efficiency of in-screen fingerprint recognition.
  • 404b. The terminal adjusts the working state of the fingerprint collection device to a standby state.
  • In step 404b, when the terminal determines that the user needs to use the fingerprint collection function, the working state of the fingerprint collection device may also be adjusted to a standby state, for example by powering up the fingerprint collection device and waiting for the user's finger to press. In this way, once the user's finger presses on the fingerprint collection device, the fingerprint collection device can immediately start collecting fingerprint information, which improves the response speed of fingerprint collection by the terminal. (A controller sketch combining steps 403a-404a and 403b-404b appears after this list.)
  • Certainly, this embodiment of the present invention does not limit the execution order of steps 403b and 404b either: step 403b may be performed before step 404b, step 404b may be performed before step 403b, or steps 403b and 404b may be performed at the same time.
  • Optionally, still as shown in FIG. 4, after displaying the prompt information in the target display area, the terminal may further perform the following steps 405-406.
  • 405. The terminal detects an operation event of the user in the target display area.
  • 406. When the operation event satisfies a preset condition, the terminal collects the user's fingerprint through the fingerprint collection device in the target display area.
  • The preset condition includes at least one of the following: the pressing force of the operation event is greater than a first preset value, the duration of the operation event is greater than a second preset value, the movement displacement of the operation event is greater than a third preset value, and the number of touches of the operation event is greater than a fourth preset value.
  • That is, when it is detected that the user operates the target display area of the touchscreen, the fingerprint collection device is triggered to collect the user's fingerprint only when the operation event satisfies the preset condition. This prevents the terminal from being triggered to collect the user's fingerprint by a misoperation after the user accidentally touches the target display area. (A sketch of this preset-condition check appears after this list.)
  • For example, as shown in FIG. 8, in the black-screen state, when the user performs an operation event in the target display area, if the terminal detects that the pressing force is greater than a preset pressure threshold (i.e., the first preset value), this indicates that the user intends to trigger the fingerprint collection function. The terminal then collects the user's fingerprint through the fingerprint collection device in the target display area and authenticates the user's identity according to the collected fingerprint; after the authentication succeeds, the terminal screen is unlocked.
  • Alternatively, as shown in FIG. 9, in the black-screen state, if the terminal detects that the user's finger stays in the target display area for longer than a preset time threshold (i.e., the second preset value), this indicates that the user intends to trigger the fingerprint collection function. The terminal then collects the user's fingerprint through the fingerprint collection device in the target display area and authenticates the user's identity according to the collected fingerprint; after the authentication succeeds, the terminal screen is unlocked.
  • Alternatively, as shown in FIG. 10, in the black-screen state, if the terminal detects that the number of times the user touches the target display area is greater than a preset touch count threshold (i.e., the fourth preset value), this indicates that the user intends to trigger the fingerprint collection function. The terminal then collects the user's fingerprint through the fingerprint collection device in the target display area and authenticates the user's identity according to the collected fingerprint; after the authentication succeeds, the terminal screen is unlocked.
  • Alternatively, as shown in FIG. 11, in the black-screen state, the terminal may have preset a correspondence between opening an application (for example, the camera) and moving the finger in a preset direction. When the user performs an operation event in the target display area, if the terminal detects that the moving distance of the user's finger in that preset direction (i.e., the movement displacement) is greater than the third preset value, the terminal collects the user's fingerprint through the fingerprint collection device in the target display area and authenticates the user's identity according to the collected fingerprint; after the authentication succeeds, the camera application can be opened and entered directly, so that the user can enter the relevant application directly through the in-screen fingerprint collection function.
  • In addition, as shown in FIG. 12, the user can also enter the settings interface to manually configure the in-screen fingerprint functions involved in the above embodiments, for example turning the in-screen fingerprint function on or off. Scene management options may also be provided to set whether prompt information is output in each application scenario and the specific parameters for outputting it, and anti-mistouch management options may be provided to set the specific touch parameters with which the user triggers the fingerprint collection device to work. In this way, the user can tailor the in-screen fingerprint function to his or her own usage habits, improving the efficiency of in-screen fingerprint collection.
  • It can be understood that, in order to implement the foregoing functions, the terminal and the like include corresponding hardware structures and/or software modules for performing each function.
  • Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of the present invention can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the embodiments of the present invention.
  • In the embodiments of the present invention, the terminal and the like may be divided into function modules according to the foregoing method examples. For example, each function module may be obtained through division for each corresponding function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software function module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; there may be another division manner in actual implementation.
  • When each function module is obtained through division for each corresponding function, FIG. 13 is a schematic diagram of a possible structure of the terminal involved in the foregoing embodiments. The terminal is configured to implement the foregoing method embodiments and includes: an obtaining unit 1101, a determining unit 1102, a display unit 1103, and an executing unit 1104. (A brief interface sketch of this division appears after this list.)
  • The obtaining unit 1101 is configured to support the terminal in performing processes 401 and 405 in FIG. 4; the determining unit 1102 is configured to support the terminal in performing process 402 in FIG. 4; the display unit 1103 is configured to support the terminal in performing process 403b in FIG. 4; and the executing unit 1104 is configured to support the terminal in performing processes 403a-404a, 404b, and 406 in FIG. 4. For all related content of the steps involved in the foregoing method embodiments, refer to the function descriptions of the corresponding function modules; details are not described herein again.
  • FIG. 14 shows a possible structural diagram of the terminal involved in the above embodiment.
  • the terminal is used to implement the foregoing method embodiments, and includes: a processing module 1302 and a communication module 1303.
  • the processing module 1302 is configured to control and manage the actions of the terminal.
  • the communication module 1303 is configured to support communication between the terminal and other network entities.
  • the terminal may further include a storage module 1301 for storing program codes and data of the terminal.
  • The processing module 1302 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • The processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • the communication module 1303 may be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1301 may be a memory.
  • When the processing module 1302 is a processor, the communication module 1303 is an RF transceiver circuit, and the storage module 1301 is a memory, the terminal provided in this embodiment of the present invention may be the mobile phone 100 shown in FIG. 2.
  • The computer program product includes one or more computer instructions. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).
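
The step-402 decision described in the bullets above can be summarized as two predicates over the scene description data. The following is a minimal, illustrative Java sketch; the SceneData fields, the enum values, and the near-zero light threshold are assumptions chosen for readability, not structures defined in the patent.

    // Illustrative sketch of the step-402 decision from the scene description data.
    // All field names and the POCKET_LIGHT_THRESHOLD_LUX value are assumptions.
    enum Gesture { PICK_UP, WALKING_OR_RUNNING, NONE }
    enum RealtimeEvent { INCOMING_CALL, NEW_MESSAGE, NONE }

    final class SceneData {
        Gesture gesture = Gesture.NONE;
        RealtimeEvent realtimeEvent = RealtimeEvent.NONE;
        boolean presetTouchGesture;          // touch data matches a preset gesture (slide, long press, ...)
        boolean fingerWithinFirstThreshold;  // finger-to-screen distance within the first threshold range
        float ambientLightLux;               // reported by the proximity light sensor
        boolean shieldingObjectDetected;     // light-shielding object around the terminal / earpiece
        boolean earpiecePlaying;             // earpiece is currently playing sound
        boolean inVoiceCall;                 // terminal is in a voice call state
        boolean objectWithinPresetDistance;  // distance sensor detects an object near the terminal
    }

    final class FingerprintNeedDecider {
        private static final float POCKET_LIGHT_THRESHOLD_LUX = 1.0f; // "second threshold range", close to 0

        /** True if the scene description data indicates the user needs the fingerprint collection function. */
        static boolean userNeedsFingerprint(SceneData s) {
            return s.fingerWithinFirstThreshold
                    || s.presetTouchGesture
                    || s.gesture == Gesture.PICK_UP
                    || s.realtimeEvent == RealtimeEvent.INCOMING_CALL
                    || s.realtimeEvent == RealtimeEvent.NEW_MESSAGE;
        }

        /** True if the scene description data indicates the user does not need the fingerprint collection function. */
        static boolean userDoesNotNeedFingerprint(SceneData s) {
            boolean pocketMode = s.ambientLightLux <= POCKET_LIGHT_THRESHOLD_LUX && s.shieldingObjectDetected;
            boolean earpieceMode = (s.earpiecePlaying && s.shieldingObjectDetected)
                    || (s.inVoiceCall && s.objectWithinPresetDistance && s.shieldingObjectDetected);
            return s.gesture == Gesture.WALKING_OR_RUNNING || pocketMode || earpieceMode;
        }
    }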
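
Steps 403a-404a and 403b-404b can then be driven by that decision. The sketch below reuses SceneData and FingerprintNeedDecider from the previous sketch; PromptView and FingerprintCollector are hypothetical interfaces standing in for the terminal's display driver and in-display fingerprint device, since the patent does not name such APIs.

    // Sketch of steps 403a-404a (hide prompt, collector inactive) and 403b-404b
    // (show prompt, collector standby). PromptView and FingerprintCollector are hypothetical.
    interface PromptView {
        void showFingerprintPrompt();   // e.g. draw a fingerprint pattern in target display area 400 (step 403b)
        void hideFingerprintPrompt();   // keep the area dark to save power and avoid burn-in (step 403a)
    }

    interface FingerprintCollector {
        void setStandby();    // powered and waiting for a finger press (step 404b)
        void setInactive();   // powered down, sleeping, or scanning at a reduced rate (step 404a)
    }

    final class InDisplayFingerprintController {
        private final PromptView promptView;
        private final FingerprintCollector collector;

        InDisplayFingerprintController(PromptView promptView, FingerprintCollector collector) {
            this.promptView = promptView;
            this.collector = collector;
        }

        /** Applies steps 403a-404a or 403b-404b according to the step-402 decision. */
        void onSceneData(SceneData sceneData) {
            if (FingerprintNeedDecider.userNeedsFingerprint(sceneData)) {
                promptView.showFingerprintPrompt();  // step 403b
                collector.setStandby();              // step 404b
            } else if (FingerprintNeedDecider.userDoesNotNeedFingerprint(sceneData)) {
                promptView.hideFingerprintPrompt();  // step 403a
                collector.setInactive();             // step 404a
            }
            // Otherwise leave the current prompt and collector state unchanged.
        }
    }

The two calls in each branch are interchangeable, matching the statement that steps 403a/404a (or 403b/404b) may be performed in either order or at the same time.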
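
The preset condition of step 406 is a simple disjunction of four threshold comparisons. The sketch below is illustrative only; the concrete threshold values are placeholders, since the patent names them only as the first to fourth preset values.

    // Sketch of the step-406 preset condition on an operation event in the target display area.
    // Field names and threshold values are placeholders, not values taken from the patent.
    final class OperationEvent {
        float pressForce;       // pressing force reported by the touch panel
        long durationMillis;    // how long the finger stayed in the target display area
        float displacementPx;   // movement displacement of the finger
        int touchCount;         // number of touches in the target display area
    }

    final class PresetCondition {
        float firstPresetValue = 2.0f;   // pressure threshold
        long secondPresetValue = 500;    // duration threshold, in milliseconds
        float thirdPresetValue = 100f;   // displacement threshold, in pixels
        int fourthPresetValue = 2;       // touch count threshold

        /** True when the operation event should trigger the fingerprint collection device (step 406). */
        boolean isSatisfiedBy(OperationEvent e) {
            return e.pressForce > firstPresetValue
                    || e.durationMillis > secondPresetValue
                    || e.displacementPx > thirdPresetValue
                    || e.touchCount > fourthPresetValue;
        }
    }

When the condition is satisfied, the terminal would collect the fingerprint, authenticate the user, and then unlock the screen or open the preset application, as in the FIG. 8 to FIG. 11 examples; an accidental light touch fails all four comparisons and is ignored.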
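
Finally, the FIG. 13 module division can be read as four cooperating interfaces. The method names below are assumptions and reuse the SceneData and OperationEvent types from the earlier sketches; the patent only states which FIG. 4 processes each unit supports.

    // Sketch of the FIG. 13 function-module division. Method names are illustrative only.
    interface ObtainingUnit {    // unit 1101: supports processes 401 and 405
        SceneData obtainSceneData();
        OperationEvent detectOperationEvent();
    }

    interface DeterminingUnit {  // unit 1102: supports process 402
        boolean userNeedsFingerprint(SceneData sceneData);
    }

    interface DisplayUnit {      // unit 1103: supports process 403b
        void displayPromptInTargetArea();
    }

    interface ExecutingUnit {    // unit 1104: supports processes 403a-404a, 404b and 406
        void stopDisplayingPrompt();   // 403a
        void setCollectorInactive();   // 404a
        void setCollectorStandby();    // 404b
        void collectFingerprint();     // 406
    }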

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display processing method and apparatus, relating to the field of communications technologies. When a fingerprint collection device is integrated in a touchscreen, the probability of screen burn-in on the touchscreen can be reduced, and the power consumption of the terminal can be reduced at the same time. The method includes: displaying prompt information on a display of a terminal, where the prompt information is used to prompt a user to enter a fingerprint in a target display area in which the fingerprint collection device is disposed; obtaining first scene description data of the terminal, where the first scene description data is used to indicate an application scenario in which the terminal is currently located; determining, according to the scene description data, that the terminal is currently in a motion scenario or a call scenario; and stopping displaying the prompt information in the target display area.

Description

一种显示的处理方法及装置 技术领域
本发明实施例涉及通信技术领域,尤其涉及一种显示处理方法及装置。
背景技术
指纹识别技术可以用于终端的屏幕唤醒,解锁,以及移动支付等功能,随着终端触摸屏的尺寸越来越大,指纹采集器件可以与触摸屏集成在一起来实现终端的指纹识别功能。
如图1所示,可以在触摸屏的某个区域,例如区域101内集成指纹采集器件102,这样,当用户手指触摸区域101后,可触发区域101内的指纹采集器件102采集用户指纹,从而完成指纹解锁或者指纹支付等功能。
为了使得用户能够准确触摸到区域101,当终端进入与采集指纹相关的应用场景(例如,锁屏状态,黑屏状态或者指纹支付状态)时,一般会在区域101内显示一个指纹图案或者点亮区域101以提示用户触摸该区域。以黑屏状态为例,由于终端并不知道用户何时会触摸区域101进行指纹解锁,因此,终端在黑屏状态下需要一直点亮区域101以提示用户触摸该区域,那么,区域101处的显示屏一直处于工作状态,容易导致器件老化甚至烧屏问题,并且增加了终端的功耗。
发明内容
本发明的实施例提供一种显示处理方法及装置,当指纹采集器件集成在触摸屏内时,可降低触摸屏发生烧屏现象的几率,同时降低终端的功耗。
为达到上述目的,本发明的实施例采用如下技术方案:
第一方面,本发明的实施例提供一种显示处理方法,包括:指示在终端的显示屏上显示提示信息,该提示信息用于提示用户在设置有指纹采集器件的目标显示区域录入指纹;获取终端的第一场景描述数据,第一场景描述数据用于指示终端当前所处的应用场景;根据该场景描述数据确定终端当前处于运动场景或通话场景;指示在该目标显示区域停止显示该提示信息。
也就是说,当终端在设置有指纹采集器件的目标显示区域内为用户显示一个提示信息时,如果终端根据当前的应用场景确定出用户此时没有进行指纹采集的需求时,可以停止在该目标显示区域显示该提示信息。这样,相比于现有技术中一直点亮触摸屏内的显示区域而言,在本发明实施例中,集成有指纹采集器件的目标显示区域无需一直处于工作状态,从而降低了该区域发生烧屏的几率,同时可降低终端的功耗。
在一种可能的设计方法中,上述第一场景描述数据包括:用户使用终端时的姿态信息;其中,根据该场景描述数据确定终端当前处于运动场景,包括:当该姿态信息用于指示用户握持终端在行走或跑步时,确定终端当前处于运动场景。也就是说,当终端处于运动场景时,可认为用户此时不需要使 用指纹采集功能,因此,可以停止显示该提示信息,从而节省终端功耗,避免烧屏现象。
在一种可能的设计方法中,上述第一场景描述数据包括:听筒的工作状态和接近光传感器的工作状态;其中,根据该场景描述数据确定终端当前处于通话场景,包括:当确定该听筒正在播放声音,且该接近光传感器检测到听筒周围设置有遮光物体时,确定终端当前处于通话场景。或者,上述第一场景描述数据也可以包括:接近光传感器的工作状态和距离传感器的工作状态;其中,根据该场景描述数据确定终端当前处于通话场景,包括:当该终端处于语音通话状态,且距离传感器检测到距离该终端预设距离内有物体,且该接近光传感器检测到听筒周围设置有遮光物体时,确定终端当前处于通话场景。也就是说,当终端处于听筒模式的通话状态时,可认为用户此时不需要使用指纹采集功能,因此,可以停止显示该提示信息,从而节省终端功耗,避免烧屏现象。
在一种可能的设计方法中,指示在该目标显示区域停止显示该提示信息之后,还包括:获取终端的第二场景描述数据,第二场景描述数据用于指示终端当前所处的应用场景;当第二场景描述数据指示用户需要使用指纹采集功能时,指示在该目标显示区域显示该提示信息。
这样一来,终端可以根据第二场景描述数据及时准确的判断出当前用户有使用指纹采集功能需求,进而在设置有指纹采集器件的目标显示区域输出提示信息,使得用户可以准确获知后续在触摸屏上录入指纹的具体位置,从而提高了屏幕内指纹识别的识别效率。
在一种可能的设计方法中,上述第二场景描述数据包括:用户使用终端时的姿态信息;其中,根据该场景描述数据确定用户需要使用指纹采集功能,包括:当该姿态信息用于指示用户拿起终端时,确定用户需要使用该指纹采集功能。也就是说,当终端处于抬手场景时,可认为用户此时需要使用指纹采集功能,因此,可以在目标显示区域显示该提示信息,使得用户可以准确获知后续在触摸屏上录入指纹的具体位置。
在一种可能的设计方法中,上述第二场景描述数据包括:用户对终端执行的触发事件;其中,根据该场景描述数据确定用户需要使用指纹采集功能,包括:当该触发事件为唤醒终端屏幕的预设操作时,确定用户需要使用该指纹采集功能。也就是说,当用户有意图唤醒终端屏幕时,可认为用户此时需要使用指纹采集功能,因此,可以在目标显示区域显示该提示信息,使得用户可以准确获知后续在触摸屏上录入指纹的具体位置。
在一种可能的设计方法中,上述第二场景描述数据包括:终端接收到的实时信号;其中,根据该场景描述数据确定用户需要使用指纹采集功能,包括:当该实时信号为新的来电事件或消息事件时,确定用户需要使用该指纹采集功能。也就是说,当终端接收到新的来电或消息时,可认为用户此时需要使用指纹采集功能,因此,可以在目标显示区域显示该提示信息,使得用户可以准确获知后续在触摸屏上录入指纹的具体位置
在一种可能的设计方法中,指示在终端的显示屏上显示提示信息,包括:指示终端点亮该目标显示区域;或者,指示在该目标显示区域显示指纹图案。
在一种可能的设计方法中,当用户需要使用指纹采集功能时,该方法还包括:将该指纹采集器件设置为待命状态。这样,一旦用户手指按压在指纹采集器件上,指纹采集器件可以立即开始采集指纹信息,从而提高了终端采集指纹的响应速度。
在一种可能的设计方法中,在根据第一场景描述数据确定终端当前处于运动场景或通话场景之后,还包括:将该指纹采集器件设置为不工作状态。这样,即使有手指按压在指纹采集器件上,指纹采集器件也不会进行指纹采集,从而避免用户在屏幕内操作时误触指纹采集器件导致的误操作现象,同时可以降低终端的功耗。
在一种可能的设计方法中,指示在终端的显示屏上显示提示信息之后,还包括:在该目标显示区域检测用户的操作事件;响应于满足预设条件的操作事件,通过该指纹采集器件采集用户指纹;其中,该预设条件包括:该操作事件的按压力度大于第一预设值,该操作事件的持续时间大于第二预设值,该操作事件的移动位移大于第三预设值,以及该操作事件的触摸次数大于第四预设值中的至少一个。
也就是说,当检测到用户操作触摸屏的目标显示区域时,只有当这一操作事件满足上述预设条件时,才会触发指纹采集器件采集用户指纹。这样可以避免用户误触目标显示区域后触发终端采集用户指纹的误操作现象。
第二方面,本发明的实施例提供一种终端,包括:显示单元,用于:在终端的显示屏上显示提示信息,该提示信息用于提示用户在设置有指纹采集器件的目标显示区域录入指纹;获取单元,用于:获取终端的第一场景描述数据,第一场景描述数据用于指示终端当前所处的应用场景;确定单元,用于:根据该场景描述数据确定终端当前处于运动场景或通话场景;执行单元,用于:指示在该目标显示区域停止显示该提示信息。
在一种可能的设计方法中,第一场景描述数据包括:用户使用终端时的姿态信息;该确定单元,具体用于:当该姿态信息用于指示用户握持终端在行走或跑步时,确定终端当前处于运动场景。
在一种可能的设计方法中,第一场景描述数据包括:听筒的工作状态和接近光传感器的工作状态;该确定单元,具体用于:当该听筒正在播放声音,且该接近光传感器检测到听筒周围设置有遮光物体时,确定终端当前处于通话场景。
在一种可能的设计方法中,该获取单元,还用于:获取终端的第二场景描述数据,第二场景描述数据用于指示终端当前所处的应用场景;该显示单元,还用于:当第二场景描述数据指示用户需要使用指纹采集功能时,在该目标显示区域显示该提示信息。
在一种可能的设计方法中,第二场景描述数据包括:用户使用终端时的姿态信息;该确定单元,还用于:当该触发事件为唤醒终端屏幕的预设操作 时,确定用户需要使用该指纹采集功能。
在一种可能的设计方法中,第二场景描述数据包括:用户对终端执行的触发事件;该确定单元,还用于:当该触发事件为唤醒终端屏幕的预设操作时,确定用户需要使用该指纹采集功能。
在一种可能的设计方法中,第二场景描述数据包括:终端接收到的实时信号;该确定单元,还用于:当该实时信号为新的来电事件或消息事件时,确定用户需要使用该指纹采集功能。
在一种可能的设计方法中,该显示单元,具体用于:点亮该目标显示区域;或者,在该目标显示区域显示指纹图案。
在一种可能的设计方法中,该执行单元,还用于:当用户需要使用指纹采集功能时,将该指纹采集器件设置为待命状态。
在一种可能的设计方法中,该执行单元,还用于:将该指纹采集器件设置为不工作状态。
在一种可能的设计方法中,该获取单元,还用于:在该目标显示区域检测用户的操作事件;该执行单元,还用于:响应于满足预设条件的操作事件,通过该指纹采集器件采集用户指纹;其中,该预设条件包括:该操作事件的按压力度大于第一预设值,该操作事件的持续时间大于第二预设值,该操作事件的移动位移大于第三预设值,以及该操作事件的触摸次数大于第四预设值中的至少一个。
第三方面,本发明的实施例提供一种终端,包括:处理器、存储器、总线和通信接口;该存储器用于存储计算机执行指令,该处理器与该存储器通过该总线连接,当终端运行时,该处理器执行该存储器存储的该计算机执行指令,以使终端执行上述任一项显示处理方法。
第四方面,本发明实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当该指令在上述任一项终端上运行时,使得终端执行上述任一项显示处理方法。
第五方面,本发明实施例提供一种包含指令的计算机程序产品,当其在上述任一项终端上运行时,使得终端执行上述任一项显示处理方法。
本发明的实施例中,上述终端的名字对设备本身不构成限定,在实际实现中,这些设备可以以其他名称出现。只要各个设备的功能和本发明的实施例类似,即属于本发明权利要求及其等同技术的范围之内。
另外,第二方面至第五方面中任一种设计方式所带来的技术效果可参见上述第一方面中不同设计方法所带来的技术效果,此处不再赘述。
附图说明
图1为现有技术中屏幕内指纹的应用场景示意图;
图2为本发明实施例提供的一种终端的结构示意图一;
图3为本发明实施例提供的一种显示屏的结构示意图;
图4为本发明实施例提供的一种显示处理方法的流程示意图;
图5为本发明实施例提供的一种悬浮触控的应用场景示意图;
图6为本发明实施例提供的一种显示处理方法的应用场景示意图一;
图7为本发明实施例提供的一种显示处理方法的应用场景示意图二;
图8为本发明实施例提供的一种显示处理方法的应用场景示意图三;
图9为本发明实施例提供的一种显示处理方法的应用场景示意图四;
图10为本发明实施例提供的一种显示处理方法的应用场景示意图五;
图11为本发明实施例提供的一种显示处理方法的应用场景示意图六;
图12为本发明实施例提供的一种显示处理方法的应用场景示意图七;
图13为本发明实施例提供的一种终端的结构示意图二;
图14为本发明实施例提供的一种终端的结构示意图三。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本发明实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本发明的实施例提供一种显示处理方法,终端可以通过获取其所处的场景描述数据判断当前终端所处的应用场景,例如,当前处于通话场景、跑步场景或者悬浮触控场景等。进而,可进一步确定在当前所处的应用场景下用户是否需要使用指纹采集功能,即用户是否有使用指纹识别功能的意图。例如,在指纹支付场景下,用户需要采集指纹来完成身份鉴权和支付过程,而当终端被用户放进口袋或背包时,则无需开启指纹采集功能。
这样,终端可以在用户有使用指纹采集功能的需求时,在设置有指纹采集器件的目标显示区域(即触摸屏上的某个区域)向用户显示提示信息,提示用户在这个目标显示区域录入指纹。
相应的,在用户没有使用指纹采集功能的需求时,终端可停止在上述目标显示区域显示上述提示信息。
可以看出,当终端确定出用户需要使用指纹采集功能时,才会在触摸屏的目标显示区域内为用户显示一个提示信息,使用户可以明确在触摸屏上录入指纹的具体位置,相比于现有技术中一直点亮触摸屏内集成有指纹采集器件的显示区域而言,在本发明实施例中,集成有指纹采集器件的目标显示区域无需一直处于工作状态,从而降低了该区域发生烧屏的几率,同时可降低终端的功耗。
另外,本发明实施例提供的上述显示处理方法,可应用于手机、可穿戴设备、AR(增强现实)\VR(虚拟现实)设备、平板电脑、笔记本电脑、UMPC(超级移动个人计算机)、上网本、PDA(个人数字助理)等任意终端,本发明实施例对此不作任何限制。
如图2所示,本申请实施例中的终端可以为手机100。下面以手机100为例对实施例进行具体说明。应该理解的是,图示手机100仅是终端的一个范例,并且手机100可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。
如图2所示,手机100具体可以包括:处理器101、射频(RF)电路102、存储器103、触摸屏104、蓝牙装置105、一个或多个传感器106、Wi-Fi装置107、定位装置108、音频电路109、外设接口110以及电源系统111等部件。这些部件可通过一根或多根通信总线或信号线(图2中未示出)进行通信。本领域技术人员可以理解,图2中示出的硬件结构并不构成对手机的限定,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图2对手机100的各个部件进行具体的介绍:
处理器101是手机100的控制中心,利用各种接口和线路连接手机100的各个部分,通过运行或执行存储在存储器103内的应用程序(以下可以简称App),以及调用存储在存储器103内的数据,执行手机100的各种功能和处理数据。在一些实施例中,处理器101可包括一个或多个处理单元;举例来说,处理器101可以是华为技术有限公司制造的麒麟960芯片。
射频电路102可用于在收发信息或通话过程中,无线信号的接收和发送。特别地,射频电路102可以将基站的下行数据接收后,给处理器101处理;另外,将涉及上行的数据发送给基站。通常,射频电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频电路102还可以通过无线通信和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统、通用分组无线服务、码分多址、宽带码分多址、长期演进、电子邮件、短消息服务等。
存储器103用于存储应用程序以及数据,处理器101通过运行存储在存储器103的应用程序以及数据,执行手机100的各种功能以及数据处理。存储器103主要包括存储程序区以及存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等);存储数据区可以存储根据使用手机100时所创建的数据(比如音频数据、电话本等)。此外,存储器103可以包括高速随机存取存储器,还可以包括非易失存储器,例如磁盘存储器件、闪存器件或其他易失性固态存储器件等。存储器103可以存储各种操作系统,例如,苹果公司所开发的iOS操作系统,谷歌公司所开发的Android操作系统等。
触摸屏104可以包括触控板104-1和显示器104-2。其中,触控板104-1可采集手机100的用户在其上或附近的触摸事件(比如用户使用手指、触控笔等任何适合的物体在触控板104-1上或在触控板104-1附近的操作),并将采集到的触摸信息发送给其他器件例如处理器101。其中,用户在触控板104-1附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动或拖动目标(例如图标等)而直接接触触控板,而只需用户位于终端附近以便执行所想要的功能。在悬浮触控的应用场景下,术语“触摸”、“接触”等不会暗示用于直接接触触摸屏,而是附近或接近的接触。能够进行悬浮触控的触控板104-1可以采用电容式、红外光感以及超声波等实现。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型来实现触控板 104-1。显示器(也称为显示屏)104-2可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。可以采用液晶显示器、有机发光二极管等形式来配置显示器104-2。触控板104-1可以覆盖在显示器104-2之上,当触控板104-1检测到在其上或附近的触摸事件后,传送给处理器101以确定触摸事件的类型,随后处理器101可以根据触摸事件的类型在显示器104-2上提供相应的视觉输出。虽然在图2中,触控板104-1与显示屏104-2是作为两个独立的部件来实现手机100的输入和输出功能,但是在某些实施例中,可以将触控板104-1与显示屏104-2集成而实现手机100的输入和输出功能。可以理解的是,触摸屏104是由多层的材料堆叠而成,本申请实施例中只展示出了触控板(层)和显示屏(层),其他层在本申请实施例中不予记载。另外,在本申请其他一些实施例中,触控板104-1可以覆盖在显示器104-2之上,并且触控板104-1的尺寸大于显示屏104-2的尺寸,使得显示屏104-2全部覆盖在触控板104-1下面,或者,上述触控板104-1可以以全面板的形式配置在手机100的正面,也即用户在手机100正面的触摸均能被手机感知,这样就可以实现手机正面的全触控体验。在其他一些实施例中,触控板104-1以全面板的形式配置在手机100的正面,显示屏104-2也可以以全面板的形式配置在手机100的正面,这样在手机的正面就能够实现无边框(Bezel)的结构。
在本申请实施例中,手机100还可以具有指纹识别功能。例如,可以在触摸屏104中配置指纹采集器件112来实现指纹识别功能,即指纹采集器件112可以与触摸屏104集成在一起来实现手机100的指纹识别功能。在这种情况下,该指纹采集器件112配置在触摸屏104中,可以是触摸屏104的一部分,也可以以其他方式配置在触摸屏104中。另外,该指纹采集器件112还可以被实现为全面板指纹采集器件。因此,可以把触摸屏104看成是任何位置都可以进行指纹识别的一个面板。该指纹采集器件112可以将采集到的指纹发送给处理器101,以便处理器101对该指纹进行处理(例如指纹验证等)。本申请实施例中的指纹采集器件112的主要部件是指纹传感器,该指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。
可选的,如图3中的(a)所示,上述纹采集器件112可以为电容式采集器件112-1。此时,触摸屏104具体可以包括电容式指纹采集器件112-1、触控板104-1以及显示器104-2,该显示器104-2位于触摸屏104中的最下层,触控板104-1位于触摸屏104中的最上层,所述电容式采集器件112-1位于触控板104-1与显示器104-2之间。
具体实现中,可以根据指纹的脊和谷与电容式采集器件112-1的电容感应颗粒形成的电容值大小不同,分别判断指纹的脊和谷的位置,从而获取指纹信息。进一步的,可以预先对屏幕中每个像素点上的电容感应颗粒进行充电,使电容感应颗粒达到预设阈值,当用户手指接触到触摸屏104时,由于电容值与距离之间存在预设关系,因此,会在脊和谷的位置形成不同的电容值,然后通过放电电流进行放电,因为脊和谷所对应的电容值不同,脊和谷 对应的像素点的放电速度也不同,脊对应的像素点放电慢,谷对应的像素点放电速度快。因此,通过脊和谷对应的像素点的充电与放电,可以获取用户的指纹信息。
可选的,如图3中的(b)所示,上述纹采集器件112还可以为射频式指纹采集器件112-2。此时,触摸屏104可以包括射频式指纹采集器件112-2、触控板104-1以及显示器104-2,所述射频式指纹采集器件112-2位于所述触摸屏104中的最下层,所述触控板104-1位于触摸屏104中的最上层,所述显示器104-2位于触控板104-1与射频式指纹采集器件112-2之间。
具体实现中,当光线照射到压有指纹的触控板104-1的表面时,射频式指纹采集器件112-2可通过CCD(电荷耦合器件)吸收反射光从而获取指纹信息。进一步的,由于触控板104-1上的指纹的脊和谷的深度不同以及皮肤与触控板104-1之间的油脂和水分,光线经过触控板104-1照射到指纹的谷的位置发生全反射,而照射到指纹的脊的位置不能发生全反射,一部分光线被触控板104-1吸收或者漫反射到其他地方,从而在CCD上形成指纹信息。
另外,关于本申请实施例中在触摸屏中集成指纹采集器件的具体技术方案,可以参见美国专利与商标局公告的申请号为US 2015/0036065 A1,名称为“在电子设备中的指纹传感器”的专利申请,其全部内容通过引用结合在本申请各个实施例中。
手机100还可以包括蓝牙装置105,用于实现手机100与其他短距离的终端(例如手机、智能手表等)之间的数据交换。本申请实施例中的蓝牙装置可以是集成电路或者蓝牙芯片等。
手机100还可以包括至少一种传感器106,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触摸屏104的显示器的亮度,接近传感器可在手机100移动到耳边时,关闭显示器的电源。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
Wi-Fi装置107,用于为手机100提供遵循Wi-Fi相关标准协议的网络接入,手机100可以通过Wi-Fi装置107接入到Wi-Fi接入点,进而帮助用户收发电子邮件、浏览网页和访问流媒体等,它为用户提供了无线的宽带互联网访问。在其他一些实施例中,该Wi-Fi装置107也可以作为Wi-Fi无线接入点,可以为其他终端提供Wi-Fi网络接入。
定位装置108,用于为手机100提供地理位置。可以理解的是,该定位装置108具体可以是全球定位系统(GPS)或北斗卫星导航系统、俄罗斯GLONASS等定位系统的接收器。定位装置108在接收到上述定位系统发送的地理位置后,将该信息发送给处理器101进行处理,或者发送给存储器103 进行保存。在另外的一些实施例中,该定位装置108还可以是辅助全球卫星定位系统(AGPS)的接收器,AGPS系统通过作为辅助服务器来协助定位装置108完成测距和定位服务,在这种情况下,辅助定位服务器通过无线通信网络与终端例如手机100的定位装置108(即GPS接收器)通信而提供定位协助。在另外的一些实施例中,该定位装置108也可以是基于Wi-Fi接入点的定位技术。由于每一个Wi-Fi接入点都有一个全球唯一的MAC地址,终端在开启Wi-Fi的情况下即可扫描并收集周围的Wi-Fi接入点的广播信号,因此可以获取到Wi-Fi接入点广播出来的MAC地址;终端将这些能够标示Wi-Fi接入点的数据(例如MAC地址)通过无线通信网络发送给位置服务器,由位置服务器检索出每一个Wi-Fi接入点的地理位置,并结合Wi-Fi广播信号的强弱程度,计算出该终端的地理位置并发送到该终端的定位装置108中。
音频电路109、扬声器113、麦克风114可提供用户与手机100之间的音频接口。音频电路109可将接收到的音频数据转换后的电信号,传输到扬声器113,由扬声器113转换为声音信号输出;另一方面,麦克风114将收集的声音信号转换为电信号,由音频电路109接收后转换为音频数据,再将音频数据输出至RF电路102以发送给比如另一手机,或者将音频数据输出至存储器103以便进一步处理。
外设接口110,用于为外部的输入/输出设备(例如键盘、鼠标、外接显示器、外部存储器、用户识别模块卡等)提供各种接口。例如通过通用串行总线(USB)接口与鼠标连接,通过用户识别模块卡卡槽上的金属触点与电信运营商提供的用户识别模块卡(SIM)卡进行连接。外设接口110可以被用来将上述外部的输入/输出外围设备耦接到处理器101和存储器103。
手机100还可以包括给各个部件供电的电源装置111(比如电池和电源管理芯片),电池可以通过电源管理芯片与处理器101逻辑相连,从而通过电源装置111实现管理充电、放电、以及功耗管理等功能。
尽管图2未示出,手机100还可以包括摄像头(前置摄像头和/或后置摄像头)、闪光灯、微型投影装置、近场通信(NFC)装置等,在此不再赘述。
以下,将结合具体实施例详细阐述本发明实施例提供的一种显示处理方法,如图4所示,该方法包括:
401、终端获取场景描述数据,该场景描述数据用于指示终端当前所处的应用场景。
具体的,当终端在运行过程中出现采集指纹需求的场景时,例如,终端运行具有指纹支付功能的应用时,或者,终端在锁屏或黑屏状态时(用户可能需要通过指纹解锁终端屏幕)等,可触发终端获取当前的场景描述数据。
其中,上述场景描述数据具体可以为用户手指与触摸屏之间的距离,用户在目标显示区域之外的触摸数据,用户使用终端时的姿态信息,当前环境光的强度,以及听筒的工作状态接近光传感器的工作状态等数据中的至少一项,这些数据反映出了终端当前所处的应用场景,例如,通话场景、悬浮触控场景等。这样,终端可以基于上述场景描述数据进一步判断在当前终端所 处的应用场景下,用户是否有使用指纹采集功能的需求。
示例性的,当上述场景描述数据包括用户手指与触摸屏之间的距离时,可在终端内设置距离识别传感器,这样,终端可通过该距离识别传感器获取到用户手指与触摸屏之间的距离;或者,终端的触摸屏具有悬浮触控功能,那么,当用户手指靠近触摸屏时,触摸屏内的电容器件可以根据电容信号的变化情况确定出用户手指与触摸屏之间的距离,终端可以根据该距离确定当前是否为悬浮触控场景。
其中,用户在触控屏附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动或拖动目标(例如图标等)而直接接触触控板,而只需用户位于终端附近以便执行所想要的功能。
例如,可以在终端100的触控板内设置两种电容式传感器,即互电容传感器和自电容传感器,这两种电容传感器可以交替地阵列排布在触控板上。其中,互电容传感器用于实现正常传统的多点触控;而自电容传感器能够产生比互电容更为强大的信号,从而检测到距离触控板更远的手指感应。因此,如图5所示,当用户的手指在屏幕上悬停时,由于自电容传感器产生的信号要比互电容传感器产生的信号大,使得终端100可以检测到在屏幕上方,例如,距离触控板上方20mm处用户的手势。
又或者,当上述场景描述数据包括用户在目标显示区域之外的触摸数据时,一旦检测到用户手指触摸到目标显示区域之外的区域,例如,触摸屏中除目标显示区域的区域或者终端的边框等,终端便可将触摸过程中产生的电容信号等作为上述触摸数据,终端可根据该触摸数据确定出用户的手势。
又或者,当上述场景描述数据包括用户使用终端时的姿态信息时,终端可以获取其内设置的陀螺仪和/或加速度传感器的取值,并将该取值作为用户使用终端时的姿态信息,该姿态信息可以反映出用户当前使用终端的姿态,例如,跑步姿态、拿起终端的抬手姿态等。
又或者,可在终端内设置接近光传感器,此时,上述场景描述数据包括接近光传感器的检测值。这样,终端可通过该接近光传感器确定出当前环境光的强度,以及终端当前是否被遮挡。例如,用户将终端放入口袋时,其接近光传感器获取到的环境光的强度趋近于0,且终端可检测出终端周围设置有遮光物体。
又或者,上述场景描述数据还可以包括听筒的工作状态。那么,当检测到听筒正在播放声音时,说明用户在使用终端打电话,即终端当前处于通话状态。
需要说明的是,当终端没有在与指纹相关的场景运行时,则说明终端并没有采集指纹的需求,因此,可直接将指纹采集器件的工作状态调整为不工作状态。例如,关闭指纹采集器件的电源,或者,将指纹采集器件切换为休眠状态。这样,即使有手指按压在指纹采集器件上,指纹采集器件也不会进行指纹采集,从而避免用户在屏幕内操作时误触指纹采集器件导致的误操作现象,同时可以降低终端的功耗。
402、终端根据上述场景描述数据确定用户是否需要使用指纹采集功能。
在本发明的一些实施例中,当终端处于黑屏状态时,如果检测到用户对终端执行的触发事件,且判断出该触发事件为唤醒终端屏幕的预设操作(例如,悬浮手势、按压电源按钮或者双击操作等)时,说明用户有意唤醒或解锁终端屏幕,因此,终端可确定用户此时需要使用指纹采集功能。
例如,终端处于黑屏状态,如果上述用户手指与触摸屏之间的距离在第一阈值范围内,例如,1厘米以内时,说明用户有意向触摸触摸屏,此时,需要通过指纹识别先唤醒或解锁终端屏幕,因此,终端可确定用户此时需要使用指纹采集功能。
又例如,终端处于黑屏状态,当上述场景描述数据中的触摸数据为预设的触摸手势时,例如,滑动、长按、压力按、点击等手势,说明用户有意向唤醒或解锁终端屏幕,因此,终端可确定此时用户需要使用指纹采集功能。
在本发明的另一些实施例中,当上述场景描述数据中的姿态信息用于指示用户拿起终端时,例如,终端的加速度传感器检测到当前的加速度的变化量转处于预设的手机被拿起状态的阈值范围内,则说明用户此时拿起终端准备使用终端,此时,需要通过指纹识别先唤醒或解锁终端屏幕,因此,终端可确定用户此时需要使用指纹采集功能。
在本发明的另一些实施例中,上述场景描述数据还可以为终端接收到的实时信号。那么,如果终端接收到的实时信号是一个新的来电事件或消息事件时,也可确定用户此时需要使用指纹采集功能。
可以理解的是,当满足上述用户手指与触摸屏之间的距离在第一阈值范围内,上述触摸数据为预设的触摸手势,上述姿态信息用于指示用户拿起终端,以及终端接收到的实时信号是一个新的来电事件或消息事件中的至少一项时,终端便可确定用户此时需要使用指纹采集功能。当确定用户需要使用指纹采集功能时,可继续执行下述步骤403b-404b。
相应的,如果上述场景描述数据中的姿态信息用于指示用户正在握持该终端行走或跑步时,则可确定用户此时不需要使用指纹采集功能。
又或者,当上述场景描述数据中接近光传感器检测到环境光的强度在第二阈值范围内时,例如,趋近于0,且终端周围设置有遮光物体时,说明终端此时很可能被用户放入口袋或者背包中,因此,终端可确定用户此时不需要使用指纹采集功能。
又或者,当上述场景描述数据中听筒正在播放声音,且终端内的接近光传感器检测到听筒周围设置有遮光物体时,说明用户正在听筒模式下使用终端打电话或者播放语音,因此,终端可确定用户此时不需要使用指纹采集功能。
或者,在终端处于语音通话状态时,如果距离传感器检测到距离该终端预设距离内有物体,且该接近光传感器检测到听筒周围设置有遮光物体时,也可以说明用户正在听筒模式下使用终端打电话或者播放语音,此时,终端可确定用户不需要使用指纹采集功能。
可以理解的是,当满足上述姿态信息用于指示用户正在握持该终端行走或跑步,终端处于口袋模式,以及终端在听筒模式下打电话或者播放语音中的至少一项时,终端便可确定用户此时不需要使用指纹采集功能。当确定用户不需要使用指纹采集功能时,可继续执行下述步骤403a-404a。
403a、终端在设置有指纹采集器件的目标显示区域停止显示提示信息,该提示信息用于提示用户在目标显示区域录入指纹。
在步骤403a中,如图6所示,当上述姿态信息用于指示用户正在握持该终端行走或跑步(图6中的(a)所示),或者,终端处于口袋模式(图6中的(b)所示),或者,终端在听筒模式下打电话或者播放语音(图6中的(c)所示)时,终端可确定用户此时不需要使用指纹采集功能。那么,如果终端的显示屏中原本在设置有指纹采集器件的目标显示区域400显示有提示信息,则此时终端可以停止显示该提示信息,例如,隐藏目标显示区域400内正在显示的指纹图案。
当然,如果终端原本就没有在目标显示区域显示提示信息,则此时终端仍保持黑屏状态,无需在显示屏上提示用户进行指纹采集的具体位置,从而节省终端功耗,避免烧屏现象。
404a、终端将指纹采集器件的工作状态调整为不工作状态。
在步骤404a中,当确定用户此时不需要使用指纹采集功能时,终端还可以将指纹采集器件的工作状态调整为不工作状态。
例如,关闭指纹采集器件的电源,将指纹采集器件调整为休眠状态,将指纹采集器件调整为低功耗状态,或者降低扫描指纹采集器件的扫描频率等。
这样,即使有手指按压在指纹采集器件上,指纹采集器件也不会进行指纹采集,从而避免用户在屏幕内操作时误触指纹采集器件导致的误操作现象,同时可以降低终端的功耗。
另外,本发明实施例不限制上述步骤403a与404a之间的执行顺序,当终端确定用户需要使用指纹采集功能时,可以先执行步骤403a再执行步骤404a,或者,先执行步骤404a再执行步骤403a,或者,同时执行步骤403a和404a,本发明实施例对此不作任何限制。
相应的,当步骤402中确定用户需要使用指纹采集功能时,可继续执行下述步骤403b-404b:
403b、终端在设置有指纹采集器件的目标显示区域显示提示信息,该提示信息用于提示用户在目标显示区域录入指纹。
在步骤403b中,如图7所示,当终端检测到用户执行的触发事件为唤醒终端屏幕的预设操作(图7中的(a)所示的悬浮手势),或者,当上述姿态信息用于指示用户拿起终端(图7中的(b)所示),或者,当终端接收到新的来电事件或消息事件(图7中的(c)所示)时,终端可确定用户此时需要使用指纹采集功能。那么,如果终端原本处于黑屏状态,则此时终端在设置有指纹采集器件的目标显示区域400输出一个提示信息,以提示用户在目标显示区域录入指纹。
例如,如图7所示,可以在终端的目标显示区域400内显示一个指纹图案,或者,还可以使用不同的颜色点亮目标显示区域400或目标显示区域400的边缘,当然,还可以以动画或文字的形式提示用户在目标显示区域录入指纹,本发明实施例对此不作任何限制。
这样一来,终端可以根据场景描述数据及时准确的判断出当前用户有使用指纹采集功能需求,进而在设置有指纹采集器件的目标显示区域400输出提示信息,使得用户可以准确获知后续在触摸屏上录入指纹的具体位置,从而提高了屏幕内指纹识别的识别效率。
404b、终端将指纹采集器件的工作状态调整为待命状态。
在步骤404b中,当终端确定用户需要使用指纹采集功能时,还可以将指纹采集器件的工作状态调整为待命状态。例如,为指纹采集器件上电,等待用户手指按压。这样,一旦用户手指按压在指纹采集器件上,指纹采集器件可以立即开始采集指纹信息,从而提高了终端采集指纹的响应速度。
当然,本发明实施例不限制上述步骤403b与404b之间的执行顺序,当终端确定用户需要使用指纹采集功能时,可以先执行步骤403b再执行步骤404b,或者,先执行步骤404b再执行步骤403b,或者,同时执行步骤403b和404b,本发明实施例对此不作任何限制。
可选的,仍如图4所示,终端在目标显示区域显示上述提示信息之后,还可以继续执行下述步骤405-406。
405、终端在目标显示区域检测用户的操作事件。
406、当上述操作事件满足预设条件时,终端通过目标显示区域内的指纹采集器件采集用户指纹。
其中,上述预设条件包括:操作事件的按压力度大于第一预设值,操作事件的持续时间大于第二预设值,操作事件的移动位移大于第三预设值,以及操作事件的触摸次数大于第四预设值中的至少一个。
也就是说,当检测到用户操作触摸屏的目标显示区域时,只有当这一操作事件满足上述预设条件时,才会触发指纹采集器件采集用户指纹。这样可以避免用户误触目标显示区域后触发终端采集用户指纹的误操作现象。
示例性的,如图8所示,在黑屏状态下,当用户在目标显示区域执行一个操作事件时,如果终端检测到按压的力度大于预设的压力阈值(即上述第一预设值),说明用户此时有意触发指纹采集功能,进而,终端通过目标显示区域内的指纹采集器件采集用户指纹,并根据采集到的指纹对用户身份进行鉴权,当鉴权通过后,解锁终端屏幕。
又或者,如图9所示,在黑屏状态下,当用户在目标显示区域执行一个操作事件时,如果终端检测到用户手指在目标显示区域停留的时间大于预设的时间阈值(即上述第二预设值),说明用户此时有意触发指纹采集功能,进而,终端通过目标显示区域内的指纹采集器件采集用户指纹,并根据采集到的指纹对用户身份进行鉴权,当鉴权通过后,解锁终端屏幕。
又或者,如图10所示,在黑屏状态下,当用户在目标显示区域执行一个 操作事件时,如果终端检测到用户触摸目标显示区域的触摸次数大于预设的触摸次数阈值(即第四预设值),说明用户此时有意触发指纹采集功能,进而,终端通过目标显示区域内的指纹采集器件采集用户指纹,并根据采集到的指纹对用户身份进行鉴权,当鉴权通过后,解锁终端屏幕。
又或者,如图11所示,在黑屏状态下,当用户在目标显示区域执行一个操作事件时,如果终端检测到用户手指沿某个预设方向的移动距离(即移动位移)大于第三预设值时,终端可能预先设置了打开某一应用(例如,相机)与用户手指沿该预设方向移动这一操作之间的对应关系,那么,终端此时可以通过目标显示区域内的指纹采集器件采集用户指纹,并根据采集到的指纹对用户身份进行鉴权,当鉴权通过后,可直接打开并进入相机应用,使得用户可以通过屏内指纹采集功能直接进入相关应用。
另外,如图12所示,用户还可以进入设置界面对上述实施例中涉及的相关屏内指纹功能进行手动设置。例如,开启或关闭上述屏内指纹功能;又或者,还可以提供场景管理选项,设置每一种应用场景下是否输出提示信息,如何输出提示信息的具体参数;又或者,还可以提供防误触管理选项,设置用户触发指纹采集器件工作时的具体触摸参数,这样,用户可以根据自己的使用习惯对上述屏内指纹功能进行完善,提高户在屏幕内执行指纹采集时的效率。
可以理解的是,上述终端等为了实现上述方法功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本发明实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明实施例的范围。
本发明实施例可以根据上述方法示例对上述终端等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图13示出了上述实施例中所涉及的终端的一种可能的结构示意图,该终端用于实现前述方法实施例,包括:获取单元1101、确定单元1102、显示单元1103以及执行单元1104。
获取单元1101用于支持终端执行图4中的过程401和405;确定单元1102用于支持终端执行图4中的过程402;显示单元1103用于支持终端执行图4中的过程403b;执行单元1104用于支持终端执行图4中的过程403a-404a、404b以及406。其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在采用集成的单元的情况下,图14示出了上述实施例中所涉及的终端的一种可能的结构示意图。该终端用于实现前述方法实施例,包括:处理模块1302和通信模块1303。处理模块1302用于对终端的动作进行控制管理。通信模块1303用于支持终端与其他网络实体的通信。该终端还可以包括存储模块1301,用于存终端的程序代码和数据。
其中,处理模块1302可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块1303可以是收发器、收发电路或通信接口等。存储模块1301可以是存储器。
当处理模块1302为处理器,通信模块1303为RF收发电路,存储模块1301为存储器时,本发明实施例所提供的终端可以为图2所示的手机100。
在上述实施例中,可以全部或部分的通过软件,硬件,固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘,硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (25)

  1. 一种显示处理方法,其特征在于,包括:
    指示在终端的显示屏上显示提示信息,所述提示信息用于提示用户在设置有指纹采集器件的目标显示区域录入指纹;
    获取所述终端的第一场景描述数据,所述第一场景描述数据用于指示所述终端当前所处的应用场景;
    根据所述场景描述数据确定所述终端当前处于运动场景或通话场景;
    指示在所述目标显示区域停止显示所述提示信息。
  2. 根据权利要求1所述的方法,其特征在于,所述第一场景描述数据包括:用户使用所述终端时的姿态信息;
    其中,根据所述场景描述数据确定所述终端当前处于运动场景,包括:
    当所述姿态信息用于指示用户握持所述终端在行走或跑步时,确定所述终端当前处于运动场景。
  3. 根据权利要求1所述的方法,其特征在于,所述第一场景描述数据包括:听筒的工作状态和接近光传感器的工作状态;
    其中,根据所述场景描述数据确定所述终端当前处于通话场景,包括:
    当确定所述听筒正在播放声音,且所述接近光传感器检测到听筒周围有遮光物体时,确定所述终端当前处于通话场景。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,指示在所述目标显示区域停止显示所述提示信息之后,还包括:
    获取所述终端的第二场景描述数据,所述第二场景描述数据用于指示所述终端当前所处的应用场景;
    当所述第二场景描述数据指示用户需要使用指纹采集功能时,指示在所述目标显示区域显示所述提示信息。
  5. 根据权利要4项所述的方法,其特征在于,所述第二场景描述数据包括:用户使用所述终端时的姿态信息;
    其中,根据所述场景描述数据确定用户需要使用指纹采集功能,包括:
    当所述姿态信息用于指示用户拿起所述终端时,确定用户需要使用所述指纹采集功能。
  6. 根据权利要4项所述的方法,其特征在于,所述第二场景描述数据包括:用户对所述终端执行的触发事件;
    其中,根据所述场景描述数据确定用户需要使用指纹采集功能,包括:
    当所述触发事件为唤醒终端屏幕的预设操作时,确定用户需要使用所述指纹采集功能。
  7. 根据权利要4项所述的方法,其特征在于,所述第二场景描述数据包括:所述终端接收到的实时信号;
    其中,根据所述场景描述数据确定用户需要使用指纹采集功能,包括:
    当所述实时信号为新的来电事件或消息事件时,确定用户需要使用所述指纹采集功能。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,指示在终端的显示屏上显示提示信息,包括:
    指示点亮所述目标显示区域;或者,
    指示在所述目标显示区域显示指纹图案。
  9. 根据权利要求4-8中任一项所述的方法,其特征在于,当用户需要使用指纹采集功能时,所述方法还包括:
    将所述指纹采集器件设置为待命状态。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,在根据所述第一场景描述数据确定所述终端当前处于运动场景或通话场景之后,还包括:
    将所述指纹采集器件设置为不工作状态。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,指示在终端的显示屏上显示提示信息之后,还包括:
    在所述目标显示区域检测用户的操作事件;
    响应于满足预设条件的操作事件,通过所述指纹采集器件采集用户指纹;
    其中,所述预设条件包括:所述操作事件的按压力度大于第一预设值,所述操作事件的持续时间大于第二预设值,所述操作事件的移动位移大于第三预设值,以及所述操作事件的触摸次数大于第四预设值中的至少一个。
  12. 一种终端,其特征在于,包括:
    显示单元,用于:在所述终端的显示屏上显示提示信息,所述提示信息用于提示用户在设置有指纹采集器件的目标显示区域录入指纹;
    获取单元,用于:获取所述终端的第一场景描述数据,所述第一场景描述数据用于指示所述终端当前所处的应用场景;
    确定单元,用于:根据所述场景描述数据确定所述终端当前处于运动场景或通话场景;
    执行单元,用于:指示在所述目标显示区域停止显示所述提示信息。
  13. 根据权利要求12所述的终端,其特征在于,所述第一场景描述数据包括:用户使用所述终端时的姿态信息;
    所述确定单元,具体用于:当所述姿态信息用于指示用户握持所述终端在行走或跑步时,确定所述终端当前处于运动场景。
  14. 根据权利要求12所述的终端,其特征在于,所述第一场景描述数据包括:听筒的工作状态和接近光传感器的工作状态;
    所述确定单元,具体用于:当所述听筒正在播放声音,且所述接近光传感器检测到听筒周围设置有遮光物体时,确定所述终端当前处于通话场景。
  15. 根据权利要求12-14中任一项所述的终端,其特征在于,
    所述获取单元,还用于:获取所述终端的第二场景描述数据,所述第二场景描述数据用于指示所述终端当前所处的应用场景;
    所述显示单元,还用于:当所述第二场景描述数据指示用户需要使用指纹采集功能时,在所述目标显示区域显示所述提示信息。
  16. 根据权利要求15所述的终端,其特征在于,所述第二场景描述数据 包括:用户使用所述终端时的姿态信息;
    所述确定单元,还用于:当所述触发事件为唤醒终端屏幕的预设操作时,确定用户需要使用所述指纹采集功能。
  17. 根据权利要求15所述的终端,其特征在于,所述第二场景描述数据包括:用户对所述终端执行的触发事件;
    所述确定单元,还用于:当所述触发事件为唤醒终端屏幕的预设操作时,确定用户需要使用所述指纹采集功能。
  18. 根据权利要求15所述的终端,其特征在于,所述第二场景描述数据包括:所述终端接收到的实时信号;
    所述确定单元,还用于:当所述实时信号为新的来电事件或消息事件时,确定用户需要使用所述指纹采集功能。
  19. 根据权利要求12-18中任一项所述的终端,其特征在于,
    所述显示单元,具体用于:点亮所述目标显示区域;或者,在所述目标显示区域显示指纹图案。
  20. 根据权利要求15-19中任一项所述的终端,其特征在于,
    所述执行单元,还用于:当用户需要使用指纹采集功能时,将所述指纹采集器件设置为待命状态。
  21. 根据权利要求12-20中任一项所述的终端,其特征在于,
    所述执行单元,还用于:将所述指纹采集器件设置为不工作状态。
  22. 根据权利要求12-21中任一项所述的终端,其特征在于,
    所述获取单元,还用于:在所述目标显示区域检测用户的操作事件;
    所述执行单元,还用于:响应于满足预设条件的操作事件,通过所述指纹采集器件采集用户指纹;
    其中,所述预设条件包括:所述操作事件的按压力度大于第一预设值,所述操作事件的持续时间大于第二预设值,所述操作事件的移动位移大于第三预设值,以及所述操作事件的触摸次数大于第四预设值中的至少一个。
  23. 一种终端,其特征在于,包括:处理器、存储器、总线和通信接口;
    所述存储器用于存储计算机执行指令,所述处理器与所述存储器通过所述总线连接,当所述终端运行时,所述处理器执行所述存储器存储的所述计算机执行指令,以使所述终端执行如权利要求1-11中任一项所述的显示处理方法。
  24. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在终端上运行时,使得所述终端执行如权利要求1-11中任一项所述的显示处理方法。
  25. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在终端上运行时,使得所述终端执行如权利要求1-11中任一项所述的显示处理方法。
PCT/CN2017/087217 2017-06-05 2017-06-05 一种显示的处理方法及装置 WO2018223270A1 (zh)

Priority Applications (8)

Application Number Priority Date Filing Date Title
MYPI2019006928A MY201311A (en) 2017-06-05 2017-06-05 Display processing method, and apparatus
PCT/CN2017/087217 WO2018223270A1 (zh) 2017-06-05 2017-06-05 一种显示的处理方法及装置
US16/619,786 US11868604B2 (en) 2017-06-05 2017-06-05 Display processing method and apparatus
CN202010693617.1A CN112015502A (zh) 2017-06-05 2017-06-05 一种显示的处理方法及装置
EP22164715.9A EP4109218B1 (en) 2017-06-05 2017-06-05 Mobile phone comprising a touch screen with an in-display fingerprint sensor
CN201780007924.6A CN108701043B (zh) 2017-06-05 2017-06-05 一种显示的处理方法及装置
EP17912689.1A EP3637225B1 (en) 2017-06-05 2017-06-05 Display processing method and apparatus
HK18116136.1A HK1257015A1 (zh) 2017-06-05 2018-12-17 一種顯示的處理方法及裝置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/087217 WO2018223270A1 (zh) 2017-06-05 2017-06-05 一种显示的处理方法及装置

Publications (1)

Publication Number Publication Date
WO2018223270A1 true WO2018223270A1 (zh) 2018-12-13

Family

ID=63843801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087217 WO2018223270A1 (zh) 2017-06-05 2017-06-05 一种显示的处理方法及装置

Country Status (6)

Country Link
US (1) US11868604B2 (zh)
EP (2) EP3637225B1 (zh)
CN (2) CN108701043B (zh)
HK (1) HK1257015A1 (zh)
MY (1) MY201311A (zh)
WO (1) WO2018223270A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112596598A (zh) * 2020-12-18 2021-04-02 维沃移动通信有限公司 显示控制方法、显示控制装置和电子设备
WO2021205476A1 (en) * 2020-04-10 2021-10-14 Saint-Gobain Glass France A touch activated display system for a vehicle

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108255369B (zh) * 2018-01-05 2021-07-27 北京小米移动软件有限公司 屏内指纹图标的显示方法、装置及计算机可读存储介质
CN116257836B (zh) 2018-03-26 2023-11-28 华为技术有限公司 一种指纹解锁方法及终端
CN110554815B (zh) * 2018-05-30 2021-12-28 北京小米移动软件有限公司 图标唤醒方法、电子设备和存储介质
CN109582416A (zh) * 2018-11-19 2019-04-05 Oppo广东移动通信有限公司 指纹采集方法、装置、存储介质及电子设备
CN109857241B (zh) * 2019-02-27 2021-04-23 维沃移动通信有限公司 一种显示控制方法、终端设备及计算机可读存储介质
CN111796701A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 模型训练方法、操作处理方法、装置、存储介质及设备
CN110309752B (zh) * 2019-06-24 2021-09-10 Oppo广东移动通信有限公司 超声波处理方法、装置、存储介质及电子设备
CN110308771B (zh) * 2019-07-09 2022-05-10 京东方科技集团股份有限公司 电子装置面板、光电处理单元和电子装置及其处理方法
CN111651082B (zh) * 2020-05-22 2024-03-26 北京小米移动软件有限公司 一种触控屏解锁方法、装置、电子设备及存储介质
CN111949159A (zh) * 2020-08-12 2020-11-17 Oppo(重庆)智能科技有限公司 触摸屏的校准方法及装置、设备、存储介质
KR20220046748A (ko) * 2020-10-07 2022-04-15 삼성디스플레이 주식회사 터치 패널을 포함하는 표시 장치 및 터치 패널을 포함 하는 표시 장치의 구동 방법
CN114302194B (zh) * 2021-01-14 2023-05-05 海信视像科技股份有限公司 一种显示设备及多设备切换时的播放方法
CN112948042B (zh) * 2021-03-01 2024-04-30 Oppo广东移动通信有限公司 显示控制方法及装置、计算机可读介质和电子设备
CN115033091B (zh) * 2021-03-03 2023-08-11 上海艾为电子技术股份有限公司 一种降低手持移动设备功耗的方法及装置
US11928949B2 (en) * 2021-12-07 2024-03-12 Prox Devices, Inc. Phone reminder devices, systems and methods

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8826178B1 (en) * 2012-11-06 2014-09-02 Google Inc. Element repositioning-based input assistance for presence-sensitive input devices
CN104182028A (zh) * 2014-08-25 2014-12-03 联想(北京)有限公司 一种信息处理方法、装置及电子设备
US20150036065A1 (en) 2013-08-05 2015-02-05 Apple Inc. Fingerprint Sensor in an Electronic Device
CN105388992A (zh) * 2015-11-03 2016-03-09 广东欧珀移动通信有限公司 指纹识别方法、装置以及终端
CN106200878A (zh) * 2016-07-19 2016-12-07 深圳市万普拉斯科技有限公司 指纹控制方法、装置及移动终端
CN106527668A (zh) * 2016-11-30 2017-03-22 深圳天珑无线科技有限公司 一种指纹识别功能的控制方法、装置及终端

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7957762B2 (en) * 2007-01-07 2011-06-07 Apple Inc. Using ambient light sensor to augment proximity sensor output
US20090150993A1 (en) 2007-12-10 2009-06-11 Symbol Technologies, Inc. Mobile Device with Frequently Operated Biometric Sensors
WO2010007765A1 (ja) * 2008-07-15 2010-01-21 パナソニック株式会社 携帯端末及びその位置特定方法
US8539382B2 (en) * 2009-04-03 2013-09-17 Palm, Inc. Preventing unintentional activation and/or input in an electronic device
US9032511B2 (en) * 2011-03-11 2015-05-12 Ntt Docomo, Inc. Mobile information terminal and gripping-feature authentication method
US20130342672A1 (en) 2012-06-25 2013-12-26 Amazon Technologies, Inc. Using gaze determination with device input
US9898642B2 (en) * 2013-09-09 2018-02-20 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs
US9967100B2 (en) * 2013-11-05 2018-05-08 Samsung Electronics Co., Ltd Method of controlling power supply for fingerprint sensor, fingerprint processing device, and electronic device performing the same
CN104077518A (zh) 2014-07-03 2014-10-01 南昌欧菲生物识别技术有限公司 解锁并执行应用程序的装置及方法
TWI557649B (zh) * 2014-08-01 2016-11-11 神盾股份有限公司 電子裝置及指紋辨識裝置控制方法
CN105446451B (zh) * 2015-02-13 2019-09-13 比亚迪股份有限公司 指纹识别装置、移动终端和指纹识别装置的唤醒方法
CN105353965B (zh) 2015-09-25 2019-05-17 维沃移动通信有限公司 一种电子设备的屏幕解锁方法及电子设备
CN105407191A (zh) 2015-12-08 2016-03-16 广东欧珀移动通信有限公司 前面板及移动终端
CN108268852A (zh) * 2016-03-17 2018-07-10 广东欧珀移动通信有限公司 屏幕组件、指纹采集和识别方法、移动终端及电子设备
CN105843358B (zh) * 2016-03-18 2019-12-03 Oppo广东移动通信有限公司 一种信息处理的方法及终端
KR102468191B1 (ko) * 2016-04-27 2022-11-18 삼성전자주식회사 지문 인증 방법 및 이를 수행하는 전자 장치
CN106774803B (zh) 2016-12-12 2021-02-09 北京小米移动软件有限公司 指纹识别方法及装置
CN106716431A (zh) * 2016-12-26 2017-05-24 深圳市汇顶科技股份有限公司 一种指纹识别引导方法及装置
CN106650384A (zh) * 2016-12-30 2017-05-10 深圳市万普拉斯科技有限公司 一种指纹防误触的方法及终端

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8826178B1 (en) * 2012-11-06 2014-09-02 Google Inc. Element repositioning-based input assistance for presence-sensitive input devices
US20150036065A1 (en) 2013-08-05 2015-02-05 Apple Inc. Fingerprint Sensor in an Electronic Device
CN104182028A (zh) * 2014-08-25 2014-12-03 联想(北京)有限公司 一种信息处理方法、装置及电子设备
CN105388992A (zh) * 2015-11-03 2016-03-09 广东欧珀移动通信有限公司 指纹识别方法、装置以及终端
CN106200878A (zh) * 2016-07-19 2016-12-07 深圳市万普拉斯科技有限公司 指纹控制方法、装置及移动终端
CN106527668A (zh) * 2016-11-30 2017-03-22 深圳天珑无线科技有限公司 一种指纹识别功能的控制方法、装置及终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3637225A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021205476A1 (en) * 2020-04-10 2021-10-14 Saint-Gobain Glass France A touch activated display system for a vehicle
EP4132786A4 (en) * 2020-04-10 2024-05-15 Saint-Gobain Glass France TOUCH ACTIVATED DISPLAY SYSTEM FOR A VEHICLE
CN112596598A (zh) * 2020-12-18 2021-04-02 维沃移动通信有限公司 显示控制方法、显示控制装置和电子设备
CN112596598B (zh) * 2020-12-18 2024-06-11 维沃移动通信有限公司 显示控制方法、显示控制装置和电子设备

Also Published As

Publication number Publication date
CN108701043A (zh) 2018-10-23
EP3637225A1 (en) 2020-04-15
CN112015502A (zh) 2020-12-01
CN108701043B (zh) 2020-08-07
EP4109218C0 (en) 2024-05-01
US20200125229A1 (en) 2020-04-23
EP3637225A4 (en) 2020-05-13
US11868604B2 (en) 2024-01-09
EP3637225B1 (en) 2022-06-08
HK1257015A1 (zh) 2019-10-11
EP4109218A2 (en) 2022-12-28
EP4109218A3 (en) 2023-04-12
EP4109218B1 (en) 2024-05-01
MY201311A (en) 2024-02-16

Similar Documents

Publication Publication Date Title
WO2018223270A1 (zh) 一种显示的处理方法及装置
CN109891379B (zh) 一种防误触方法及终端
TWI679585B (zh) 指紋識別區域顯示方法及移動終端
CN108701178B (zh) 认证方法及使用认证方法的电子设备
US11243657B2 (en) Icon display method, and apparatus
CN109828688B (zh) 屏幕损坏处理方法、移动终端及计算机可读存储介质
EP3358455A1 (en) Apparatus and method for controlling fingerprint sensor
US10667218B2 (en) Method for controlling unlocking and related products
CN106778175B (zh) 一种界面锁定方法、装置和终端设备
CN107193455B (zh) 一种信息处理方法及移动终端
CN107193471B (zh) 解锁控制方法及相关产品
WO2018196660A1 (zh) 指纹识别区域显示方法及相关产品
CN103389863B (zh) 一种显示控制方法和装置
WO2019033385A1 (zh) 一种显示方法及终端
EP3499853B1 (en) Ppg authentication method and device
CN103677633B (zh) 屏幕解锁方法、装置和终端
WO2020107401A1 (zh) 控制屏幕开闭的方法、控制屏幕开闭的装置和电子设备
US20200252502A1 (en) Method for responding to incoming call by means of fingerprint recognition, storage medium, and mobile terminal
CN110531915A (zh) 屏幕操作方法及终端设备
WO2019061512A1 (zh) 一种任务切换方法及终端
CN110753155A (zh) 一种接近检测方法及终端设备
CN107944361A (zh) 一种用于移动终端的指纹扫描方法和装置
CN113099378B (zh) 定位方法、装置、设备及存储介质
CN107943406B (zh) 一种触摸屏触控点确定方法及终端
KR102553573B1 (ko) 전자 장치 및 전자 장치의 터치 입력 감지 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17912689

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017912689

Country of ref document: EP

Effective date: 20191209