CN112817512A - False touch prevention method, mobile device and computer readable storage medium - Google Patents

False touch prevention method, mobile device and computer readable storage medium

Info

Publication number
CN112817512A
Authority
CN
China
Prior art keywords
mobile device
event
preset
curve
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911128761.4A
Other languages
Chinese (zh)
Inventor
翟海鹏
肖啸
李�杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN201911128761.4A priority Critical patent/CN112817512A/en
Priority to PCT/CN2020/129080 priority patent/WO2021098644A1/en
Publication of CN112817512A publication Critical patent/CN112817512A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Abstract

The application relates to the field of intelligent control, and in particular to an anti-false-touch control technology for mobile devices within that field. In the false touch prevention method applied to a mobile device, when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state, the mobile device does not respond, within a preset duration after receiving a first event, to touch operations on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; then, when the mobile device receives a second event, the mobile device starts a screen-off process, and the mobile device completes the screen-off process within the preset duration. The technical solution provided by the application can reduce or even avoid false touches on the touch screen caused by untimely screen-off or an irregular holding gesture when the user holds the mobile device close to the side of the head to make a call or listen to a voice message, thereby improving the user experience.

Description

False touch prevention method, mobile device and computer readable storage medium
Technical Field
The application relates to the field of intelligent control, in particular to a mobile device false touch prevention control technology in the field of intelligent control.
Background
After a user picks up a mobile device to make an outgoing call, answer an incoming call, or listen to a voice message, the mobile device automatically detects whether it is close to the side of the user's head. When proximity is detected, the screen is turned off so that the side of the user's head does not falsely touch the screen and trigger unintended operations. When the device moves away from the head, the screen lights up again and the device returns to its normal usable state. In practice, however, there are situations in which the mobile device is already close to the side of the user's head but the screen has not yet been turned off, and the side of the user's head then falsely touches the touch screen of the mobile device. A better anti-false-touch experience is therefore needed.
Disclosure of Invention
The application provides a false touch prevention method, a mobile device, and a computer-readable storage medium, so as to solve the technical problem that the mobile device's screen-off processing is not timely when a user uses the mobile device in the above scenarios.
In a first aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes the following steps: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state, the mobile device does not respond, within a preset duration after receiving a first event, to a touch operation on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second event, the mobile device starts a screen-off process; and the screen-off process is completed within the preset duration. The first event includes: after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear. The second event includes: determining an event generated by the mobile device approaching the side of the user's head when it is detected that the distance between the mobile device and the side of the user's head is less than a preset threshold; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of a reflected signal, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of the reflected signal of a self-sent signal received by the mobile device at a first position is greater than or equal to the intensity value of the reflected signal at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the area of capacitance change on the capacitive touch screen of the mobile device is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold.
Therefore, by setting in advance that the mobile device does not respond to touch operations on any area or a partial area of its touch screen, the non-response processing of the touch screen can be made timely; and because the screen is turned off within the preset duration after the first event is received, the screen is guaranteed to be off by the time the non-response setting ends, which reduces or even avoids false touches on the mobile device before the screen is turned off.
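The timed non-response window and screen-off flow described in the first aspect can be illustrated with a minimal state-machine sketch. All names, the 1.5 s default duration, and the method signatures below are illustrative assumptions for exposition, not part of the claimed method:

```python
import time

class AntiFalseTouchController:
    """Hypothetical sketch of the first-aspect flow: after a 'first event'
    (device moving toward the head), ignore touches for a preset duration;
    if a 'second event' (device near the head) arrives inside that window,
    turn the screen off before the window expires."""

    def __init__(self, preset_duration_s=1.5, now=time.monotonic):
        self.preset_duration_s = preset_duration_s  # claimed preferred range: [1, 2] s
        self.now = now                              # injectable clock for testing
        self.window_start = None                    # when the first event arrived
        self.screen_on = True

    def on_first_event(self):
        # Start the non-response window for touch operations.
        self.window_start = self.now()

    def in_window(self):
        return (self.window_start is not None and
                self.now() - self.window_start < self.preset_duration_s)

    def handle_touch(self, x, y):
        # Within the window, touches (in a partial area or anywhere) are ignored.
        if self.in_window():
            return "ignored"
        return "dispatched"

    def on_second_event(self):
        # Device is close to the head: start and complete screen-off
        # while still inside the preset duration.
        if self.in_window():
            self.screen_on = False
            return "screen_off"
        return "no_action"
```

A fake clock makes the window behavior easy to exercise without real delays, which is also how the timing guarantee (screen off before the non-response setting expires) can be checked.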
According to the first aspect, the mobile device detecting that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold includes the following: the acceleration sensor has an X axis, a Y axis, and a Z axis that are perpendicular to one another; the movement curve and the preset movement curve each include an X-axis curve, a Y-axis curve, and a Z-axis curve; the features of the movement curve are determined from the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, the number of peaks and troughs contained in each curve, and the changes of those peaks and troughs; and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve. This provides a concrete way of obtaining the similarity used to decide whether the first event is generated: after the similarity is obtained in this way, when it is greater than or equal to the preset similarity threshold, it is determined that the mobile device is moving toward the side of the user's head, generating the first event.
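One plausible reading of this feature-based comparison can be sketched as follows. The feature set (trend sign plus peak/trough counts per axis), the match-fraction similarity, and the 0.8 threshold are all illustrative assumptions; the patent does not specify these details:

```python
def curve_features(samples):
    """Extract simple features of one axis curve: the overall monotonic
    trend sign and the number of interior peaks and troughs (a rough
    reading of the 'monotonicity change, peaks and troughs' in the claim)."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    trend = (samples[-1] > samples[0]) - (samples[-1] < samples[0])  # -1, 0, or +1
    peaks = troughs = 0
    for prev, nxt in zip(diffs, diffs[1:]):
        if prev > 0 and nxt < 0:
            peaks += 1          # rising then falling: a peak
        elif prev < 0 and nxt > 0:
            troughs += 1        # falling then rising: a trough
    return (trend, peaks, troughs)

def similarity(curve_xyz, preset_xyz):
    """Fraction of matching per-axis features between the measured X/Y/Z
    curves and the preset curves; a stand-in for the claimed similarity."""
    matches = 0
    for measured, preset in zip(curve_xyz, preset_xyz):
        fm, fp = curve_features(measured), curve_features(preset)
        matches += sum(m == p for m, p in zip(fm, fp))
    return matches / 9.0  # 3 axes x 3 features each

def is_first_event(curve_xyz, preset_xyz, threshold=0.8):
    # First event: similarity meets or exceeds the preset threshold.
    return similarity(curve_xyz, preset_xyz) >= threshold
```

A curve identical to the preset yields similarity 1.0, while a curve that differs in trend or oscillation on some axes falls below the threshold, so no first event is generated.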
According to the first aspect or any one of the above implementations of the first aspect, the partial area includes: a status bar at the top of the touch screen, and a dial pad and a navigation bar at the middle-lower part of the touch screen. In this way, false touches can be prevented with particular emphasis on the areas of the touch screen most prone to them, such as the status bar at the top and the dial pad and navigation bar at the middle-lower part.
According to the first aspect, or any implementation of the first aspect, the steps of not responding to touch operations within the preset duration after the first event, starting the screen-off process upon receiving the second event, and completing the screen-off process within the preset duration specifically include: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state and receives a first event, the mobile device uploads the first event to the kernel layer of its application processor; the kernel layer of the application processor does not upload the first event to the hardware abstraction layer or to the framework layer above the hardware abstraction layer; within the preset duration after the mobile device receives the first event, the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second event, it uploads the second event in sequence to the kernel layer, the hardware abstraction layer, and the framework layer of the application processor; the screen-off process is started in the framework layer, and is then delivered downward in sequence from the framework layer to the hardware abstraction layer and the kernel layer of the application processor; and the screen-off process is completed within the preset duration. In this way, the non-response flow after the first event and the screen-off flow after the second event are further specified: from a more concrete perspective, the non-response flow is not only triggered by earlier detection but also takes less time to process, and the screen-off process completes within the preset duration, which reduces or even avoids false touches on the mobile device before the screen is turned off.
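The layered routing above (the kernel layer handles the first event locally, while the second event travels kernel → HAL → framework) can be sketched with toy classes. The class and method names are hypothetical stand-ins for an Android-like stack, not identifiers from the patent or from Android itself:

```python
class FrameworkLayer:
    """Topmost layer: starts the screen-off flow when the second event
    reaches it; the resulting command would then be delivered back down."""
    def receive(self, event):
        if event == "second_event":
            return "screen_off_started"
        return "unhandled"

class HardwareAbstractionLayer:
    """Middle layer: simply forwards events upward to the framework."""
    def __init__(self, framework):
        self.framework = framework
    def receive(self, event):
        return self.framework.receive(event)

class KernelLayer:
    """Bottom layer of the application processor. Per the claim, the first
    event is NOT uploaded to the HAL or framework; it is handled here by
    blocking touch input, which is why the non-response path is faster
    than a round trip through all layers."""
    def __init__(self, hal):
        self.hal = hal
        self.touch_blocked = False
    def receive(self, event):
        if event == "first_event":
            self.touch_blocked = True   # suppress touches locally
            return "kernel_handled"
        if event == "second_event":
            return self.hal.receive(event)  # forward upward in sequence
        return "unknown"
```

The design point the claim makes is latency: a first event short-circuits in the kernel layer, so touch suppression takes effect without waiting for the full upward and downward traversal that the screen-off flow requires.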
According to the first aspect, or any implementation of the first aspect, the preset duration takes a value in the range [1, 2], in seconds (that is, preset duration ∈ [1, 2] s). This gives a preferred range for the preset duration.
According to the first aspect, or any implementation of the first aspect, the outgoing-call state and the answering state respectively include a voice outgoing-call state or voice answering state carried out through a voice communication application, and a voice outgoing-call state or voice answering state carried out through a telecommunications carrier network. This further defines the scope of the outgoing-call state and the answering state.
According to the first aspect, or any implementation of the first aspect, the voice communication application includes, but is not limited to, WeChat and similar applications (the remaining application names appear only as trademark images in the original document). In this way, the voice communication application is further specified.
In a second aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes the following steps: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state, after receiving a first event, the mobile device does not respond to a touch operation on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile device does not respond to a second event; after the outgoing-call state, the answering state, or the voice-message-listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds again to touch operations on the partial area or any area of its touch screen. The first event includes: after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear. The second event includes: determining an event generated by the mobile device approaching the side of the user's head when it is detected that the distance between the mobile device and the side of the user's head is less than a preset threshold; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of a reflected signal, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of the reflected signal of a self-sent signal received by the mobile device at a first position is greater than or equal to the intensity value of the reflected signal at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the area of capacitance change on the capacitive touch screen of the mobile device is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold.
Therefore, the mobile device is set in advance not to respond to touch operations on any area or a partial area of its touch screen until the corresponding state ends or a specific operation is triggered, and the screen-off process is no longer executed; the non-response processing of the touch screen can thus be made timely, which reduces or even avoids false touches on the mobile device.
According to the second aspect, or any implementation of the second aspect, the steps of not responding to touch operations after receiving the first event and not responding to the second event include: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state and receives a first event, the mobile device uploads the first event to the kernel layer of its application processor; the kernel layer of the application processor does not upload the first event to the hardware abstraction layer or to the framework layer above the hardware abstraction layer; after receiving the first event, the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile device does not respond to the second event. In this way, the non-response flow after the first event is further specified: from a more concrete perspective, it is not only triggered by earlier detection but also takes less time to process, which reduces or even avoids false touches on the mobile device.
According to the second aspect, or any implementation of the second aspect, while the mobile device does not respond to touch operations on all or part of its screen, the screen remains lit. The original state of the screen is thus maintained.
For additional implementation manners and corresponding technical effects in the second aspect, reference may be made to the corresponding implementation manners and corresponding technical effects in the first aspect, and details are not described here.
In a third aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes the following steps: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state, the mobile device does not respond to the first event; after the mobile device receives the second event, the mobile device does not respond to a touch operation on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; after the outgoing-call state, the answering state, or the voice-message-listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds again to touch operations on the partial area or any area of its touch screen. The first event includes: after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear. The second event includes: determining an event generated by the mobile device approaching the side of the user's head when it is detected that the distance between the mobile device and the side of the user's head is less than a preset threshold; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of a reflected signal, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of the reflected signal of a self-sent signal received by the mobile device at a first position is greater than or equal to the intensity value of the reflected signal at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the area of capacitance change on the capacitive touch screen of the mobile device is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold.
In this way, although the mobile device is not set in advance to ignore touch operations on any area or a partial area of its touch screen, the processing time of the non-response flow triggered by the second event is significantly shorter than that of the existing screen-off flow triggered by the second event, which reduces or even avoids false touches on the mobile device.
According to the third aspect, the step in which, after receiving the second event, the mobile device does not respond to touch operations on its touch screen includes: when the mobile device receives the second event, it uploads the second event to the kernel layer of its application processor; the kernel layer of the application processor does not upload the second event to the hardware abstraction layer or to the framework layer above the hardware abstraction layer; and the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen. In this way, the non-response flow after the second event is further specified, and from a more concrete perspective its processing time is shorter, which reduces or even avoids false touches on the mobile device.
For additional implementation manners and corresponding technical effects in the third aspect, reference may be made to the corresponding implementation manners and corresponding technical effects in the first aspect and the second aspect, and details are not described here.
In a fourth aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes the following steps: when the mobile device is in an outgoing-call state, an answering state, or a voice-message-listening state, after receiving a first event, the mobile device starts and completes a screen-off process of its touch screen, and the mobile device does not respond to a second event. The first event includes: after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, determining an event generated by the mobile device moving toward the side of the user's head when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear. The second event includes: determining an event generated by the mobile device approaching the side of the user's head when it is detected that the distance between the mobile device and the side of the user's head is less than a preset threshold; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of a reflected signal, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the intensity value of the reflected signal of a self-sent signal received by the mobile device at a first position is greater than or equal to the intensity value of the reflected signal at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic-wave signal or an audio signal; or, determining an event generated by the mobile device approaching the side of the user's head when it is detected that the area of capacitance change on the capacitive touch screen of the mobile device is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold. In this way, although the non-response flow is no longer executed, the screen-off processing of the touch screen is made timely by setting the mobile device in advance to execute the screen-off flow upon the first event, which reduces or even avoids false touches on the mobile device.
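The second-event criteria that recur across the aspects (proximity distance, reflected-signal intensity, and capacitive contact area/value) amount to an "any one is sufficient" predicate. The sketch below illustrates that structure; every threshold value and parameter name is an illustrative assumption, since the patent leaves the concrete thresholds as preset values:

```python
def is_second_event(distance_m=None, reflect_intensity=None,
                    cap_area=None, cap_value=None,
                    distance_thresh=0.03,      # hypothetical: 3 cm
                    intensity_thresh=0.7,      # hypothetical normalized intensity
                    area_thresh=400.0,         # hypothetical: mm^2 of contact
                    value_thresh=0.5):         # hypothetical capacitance level
    """Hypothetical check for the 'second event' (device near the side of
    the head). Any one of the claimed criteria suffices; sensors that did
    not report a reading are passed as None and skipped."""
    if distance_m is not None and distance_m < distance_thresh:
        return True   # proximity distance below the preset threshold
    if reflect_intensity is not None and reflect_intensity >= intensity_thresh:
        return True   # reflection of a self-sent signal is strong enough
    if cap_area is not None and cap_area >= area_thresh:
        return True   # large capacitive contact area (e.g. an ear or cheek)
    if cap_value is not None and cap_value >= value_thresh:
        return True   # capacitance value above the second preset threshold
    return False
```

The two-position reflected-intensity comparison in the claim (intensity at a farther first position at least that at a nearer second position) would need a short history of readings and is omitted here for brevity.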
According to the fourth aspect, the step in which, after receiving the first event, the mobile device starts and completes the screen-off process of its touch screen includes: when the mobile device receives the first event, it uploads the first event in sequence to the kernel layer, the hardware abstraction layer, and the framework layer of its application processor; the screen-off process is started in the framework layer, and is then delivered downward in sequence from the framework layer to the hardware abstraction layer and the kernel layer of the application processor; and the screen-off process is completed. In this way, the screen-off flow after the first event is further specified: from a more concrete perspective, its execution is moved earlier, which reduces or even avoids false touches on the mobile device.
For additional implementations and corresponding technical effects of the fourth aspect, reference may be made to the corresponding implementations and technical effects of the first, second, and third aspects, which are not described herein again.
In a fifth aspect, a false touch prevention method is provided. The false touch prevention method is applied to a mobile device and includes the following steps: when the mobile device is in a calling state, an answering state, or a voice message listening state, the mobile device does not respond, within a preset duration after a first preset event is received, to a touch operation on the touch screen of the mobile device, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second preset event, the mobile device starts the screen-off process; the screen-off process is completed within the preset duration; the first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
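The timing of the fifth aspect, a no-response window opened by the first preset event with the screen-off triggered by a second preset event inside that window, can be sketched as a small state holder. This is a toy model with an abstract time counter; the class and method names are assumptions:

```python
class AntiFalseTouchController:
    # Toy model of the fifth-aspect timing. Time is an abstract counter
    # supplied by the caller; `window` stands for the preset duration.
    def __init__(self, window):
        self.window = window
        self.first_event_time = None
        self.screen_on = True

    def on_first_event(self, now):
        self.first_event_time = now

    def _in_window(self, now):
        return (self.first_event_time is not None
                and now - self.first_event_time < self.window)

    def on_touch(self, now):
        # Inside the window, touches anywhere on the screen are dropped;
        # with the screen already off they are dropped as well.
        return self.screen_on and not self._in_window(now)

    def on_second_event(self, now):
        # The screen-off process starts (and, in this toy model,
        # completes immediately) only inside the window.
        if self._in_window(now):
            self.screen_on = False
            return True
        return False
```

A touch arriving inside the window is ignored, and a second preset event inside the window turns the screen off, after which touches remain without effect.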
According to the fifth aspect, the first preset event is determined in the following manner: after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that an image acquired through the front camera of the mobile device includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects both that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold and that an image acquired through the front camera of the mobile device includes an image of a human ear.
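The curve-comparison alternatives above can be sketched as follows. The similarity metric is left unspecified in the text, so the range-scaled mean absolute difference used here, and all function names, are illustrative assumptions:

```python
def curve_similarity(measured, preset):
    # Similarity in [0, 1] between two equal-length acceleration curves;
    # 1.0 means the curves are identical. The metric (mean absolute
    # difference scaled by the curves' value range) is one possible
    # choice; the text does not fix a particular metric.
    if len(measured) != len(preset) or not measured:
        raise ValueError("curves must be non-empty and of equal length")
    lo = min(min(measured), min(preset))
    hi = max(max(measured), max(preset))
    if hi == lo:
        return 1.0
    mad = sum(abs(m - p) for m, p in zip(measured, preset)) / len(measured)
    return 1.0 - min(mad / (hi - lo), 1.0)


def first_preset_event(measured, preset, threshold, ear_detected, mode="curve"):
    # mode selects one of the three alternatives described in the text:
    # "curve" (similarity alone), "ear" (ear image alone),
    # or "both" (similarity and ear image combined).
    curve_ok = curve_similarity(measured, preset) >= threshold
    if mode == "curve":
        return curve_ok
    if mode == "ear":
        return ear_detected
    return curve_ok and ear_detected
```

In practice the measured curve would be sampled from the three-axis acceleration components (as in Fig. 6 of this application) and compared against a stored reference curve.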
According to the fifth aspect, the second preset event is determined in the following manner: the second preset event is determined when it is detected that the distance between the mobile device and the side of the user's head is smaller than a preset threshold; or, the second preset event is determined when it is detected that the strength of a reflection, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the reflected-signal strength received by the mobile device at a first position is greater than or equal to the reflected-signal strength received at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the area over which the capacitance of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or that the capacitance is greater than or equal to a second preset threshold.
For additional implementations and corresponding technical effects of the fifth aspect, reference may be made to the corresponding implementations and technical effects of the first, second, third, and fourth aspects, which are not described herein again.
In a sixth aspect, a false touch prevention method is provided. The false touch prevention method is applied to a mobile device and includes the following steps: when the mobile device is in a calling state, an answering state, or a voice message listening state, after a first preset event is received, the mobile device does not respond to a touch operation on the touch screen of the mobile device, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile device does not respond to a second preset event; after the calling state, the answering state, or the voice message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds to a touch operation on a partial area or any area of the touch screen of the mobile device; the first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
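The response gate of the sixth aspect, disabled after the first preset event and restored when the state ends or a preset gesture or side-key operation is detected, can be sketched as a single predicate. The names and argument conventions are assumptions:

```python
def touch_enabled(state_active, first_event_seen,
                  gesture_detected=False, side_key_detected=False):
    # Gate for the sixth aspect: once the first preset event is seen
    # during the calling/answering/listening state, touch stays
    # disabled until that state ends or a preset gesture and/or
    # side-key operation is detected.
    if not first_event_seen:
        return True
    if not state_active:
        return True  # the calling/answering/listening state has ended
    return gesture_detected or side_key_detected
```

The gesture and side-key escape hatches model the explicit user actions that restore touch response before the call ends.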
According to the sixth aspect, the first preset event is determined in the following manner: after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that an image acquired through the front camera of the mobile device includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects both that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold and that an image acquired through the front camera of the mobile device includes an image of a human ear.
According to the sixth aspect, the second preset event is determined in the following manner: the second preset event is determined when it is detected that the distance between the mobile device and the side of the user's head is smaller than a preset threshold; or, the second preset event is determined when it is detected that the strength of a reflection, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the reflected-signal strength received by the mobile device at a first position is greater than or equal to the reflected-signal strength received at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the area over which the capacitance of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or that the capacitance is greater than or equal to a second preset threshold.
For additional implementations and corresponding technical effects of the sixth aspect, reference may be made to the corresponding implementations and technical effects of the first through fifth aspects, which are not described herein again.
In a seventh aspect, a false touch prevention method is provided. The false touch prevention method is applied to a mobile device and includes the following steps: when the mobile device is in a calling state, an answering state, or a voice message listening state, the mobile device does not respond to a first preset event; after the mobile device receives a second preset event, the mobile device does not respond to a touch operation on the touch screen of the mobile device, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; after the calling state, the answering state, or the voice message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds to a touch operation on a partial area or any area of the touch screen of the mobile device; the first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
According to the seventh aspect, the first preset event is determined in the following manner: after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that an image acquired through the front camera of the mobile device includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects both that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold and that an image acquired through the front camera of the mobile device includes an image of a human ear.
According to the seventh aspect, the second preset event is determined in the following manner: the second preset event is determined when it is detected that the distance between the mobile device and the side of the user's head is smaller than a preset threshold; or, the second preset event is determined when it is detected that the strength of a reflection, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the reflected-signal strength received by the mobile device at a first position is greater than or equal to the reflected-signal strength received at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the area over which the capacitance of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or that the capacitance is greater than or equal to a second preset threshold.
For further implementations and corresponding technical effects of the seventh aspect, reference may be made to the corresponding implementations and technical effects of the first through sixth aspects, which are not described herein again.
In an eighth aspect, a false touch prevention method is provided. The false touch prevention method is applied to a mobile device and includes the following steps: when the mobile device is in a calling state, an answering state, or a voice message listening state, after a first preset event is received, the mobile device starts and completes the screen-off process of the touch screen of the mobile device, and the mobile device does not respond to a second preset event; the first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
According to the eighth aspect, the first preset event is determined in the following manner: after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects that an image acquired through the front camera of the mobile device includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, the first preset event is determined when the mobile device detects both that the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold and that an image acquired through the front camera of the mobile device includes an image of a human ear.
According to the eighth aspect, the second preset event is determined in the following manner: the second preset event is determined when it is detected that the distance between the mobile device and the side of the user's head is smaller than a preset threshold; or, the second preset event is determined when it is detected that the strength of a reflection, received by the mobile device, of a signal sent by the mobile device itself is greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the reflected-signal strength received by the mobile device at a first position is greater than or equal to the reflected-signal strength received at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when it is detected that the area over which the capacitance of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or that the capacitance is greater than or equal to a second preset threshold.
For additional implementations and corresponding technical effects of the eighth aspect, reference may be made to the corresponding implementations and technical effects of the first through seventh aspects, which are not described herein again.
In a ninth aspect, a mobile device is provided. The mobile device includes at least: a memory, one or more processors, one or more applications, and one or more computer programs, where the one or more computer programs are stored in the memory; and when the one or more processors execute the one or more computer programs, the mobile device is caused to implement the false touch prevention method according to any one of the first through eighth aspects and their possible implementations.
In addition, for any implementation and the corresponding technical effects of the ninth aspect, reference may be made to the implementations and corresponding technical effects of the first through eighth aspects, which are not described herein again.
In a tenth aspect, a computer-readable storage medium is provided. The computer-readable storage medium includes instructions that, when run on the mobile device according to the ninth aspect, cause the mobile device to perform the false touch prevention method according to any one of the first through eighth aspects and their possible implementations.
In addition, for any implementation and the corresponding technical effects of the tenth aspect, reference may be made to the implementations and corresponding technical effects of the first through eighth aspects, which are not described herein again.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below relate to only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a mobile device according to an embodiment of the present application.
Fig. 2(a) -2(b) are schematic diagrams of two scenarios in which a user makes a false touch while using a mobile device.
Fig. 3 is a schematic diagram of the screen-off process performed when the mobile device receives a second event of approaching the side of the user's head.
Fig. 4(a) -4(c) are schematic scene diagrams of a false touch prevention method in which the mobile device has a starting position 30cm right in front of the chest of the user according to an embodiment of the present application.
Fig. 4(d) is a schematic view of a scene in which the starting position of the mobile device is 30cm in front of the right of the chest of the user in the first embodiment of the present application.
Fig. 5(a) -5(b) are a schematic diagram and a flowchart of a false touch prevention method provided in a first embodiment of the present application, respectively.
Fig. 5(c) is a schematic diagram of the time relationship between the first event and the second event and the correspondingly triggered flows in the false touch prevention method provided in the first embodiment of the present application; Fig. 5(d) is a simplified version of the time relationship shown in Fig. 5(c).
Fig. 6(a) is a schematic diagram of the time-varying components of the measured acceleration along the X, Y, and Z axes while the mobile device moves from a starting position 30 cm directly in front of the user's chest toward the front of the user's head.
Fig. 6(b)-6(d) are schematic diagrams of the time-varying components of the acceleration measured along the X, Y, and Z axes while the mobile device moves from its starting position to the side of the user's head in the first, second, and fourth embodiments of the present application, where the starting position of the mobile device is 30 cm directly in front of, 30 cm to the front left of, and 30 cm to the front right of the user's chest, respectively.
Fig. 7 is a schematic diagram of the areas on the touch screen of the mobile device in the false touch prevention methods provided in the first, second, and third embodiments and in the mobile device provided in the fifth embodiment of the present application.
Fig. 8(a) -8(b) are a schematic diagram and a flowchart of a false touch prevention method provided in the second embodiment of the present application, respectively.
Fig. 9(a) -9(b) are a schematic diagram and a flowchart of a false touch prevention method provided in a third embodiment of the present application, respectively.
Fig. 10(a) -10(b) are a schematic diagram and a flowchart of a false touch prevention method provided in the fourth embodiment of the present application, respectively.
Fig. 11 is a block diagram of a mobile device according to an eighth embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application rather than all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided herein without creative effort fall within the protection scope of the present application.
The method provided by the embodiment of the application can be applied to the mobile device 100 shown in fig. 1. Fig. 1 shows a schematic structural diagram of a mobile device 100.
The mobile device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present application does not constitute a specific limitation on the mobile device 100. In other embodiments of the present application, the mobile device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented by hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the mobile device 100.
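The addressed read/write pattern of an I2C exchange, such as the one between the processor 110 and the touch sensor 180K, can be illustrated with a toy in-memory bus. The device address and register map below are invented; on a real device this traffic is handled by a kernel driver over the SDA and SCL lines:

```python
class SimulatedI2CBus:
    # Toy I2C master: each peripheral is a register map addressed by a
    # 7-bit device address. Real transfers happen bit by bit on SDA,
    # clocked by SCL; here a dict stands in for the wire protocol.
    def __init__(self, devices):
        self.devices = devices  # {device_addr: {register: value}}

    def write_byte(self, device_addr, register, value):
        self.devices[device_addr][register] = value

    def read_byte(self, device_addr, register):
        return self.devices[device_addr][register]


# Hypothetical touch controller at address 0x38 with a control register.
bus = SimulatedI2CBus({0x38: {0x00: 0x00}})
bus.write_byte(0x38, 0x00, 0x01)   # e.g. enable touch reporting
status = bus.read_byte(0x38, 0x00)
```

The same master-addresses-slave pattern underlies the processor's coupling to the charger, flash, and camera 193 over separate I2C bus interfaces.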
The I2S interface may be used for audio communication. The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. The UART interface is a universal serial data bus used for asynchronous communications.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate over a CSI interface to implement the camera functionality of mobile device 100. The processor 110 and the display screen 194 communicate via the DSI interface to implement the display functionality of the mobile device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. It should be understood that the interface connection relationships between the modules illustrated in this embodiment of the present application are merely illustrative and do not constitute a limitation on the structure of the mobile device 100. In other embodiments of the present application, the mobile device 100 may also adopt an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The wireless communication function of the mobile device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in mobile device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the mobile device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the mobile device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the mobile device 100 is coupled to the mobile communication module 150, and antenna 2 is coupled to the wireless communication module 160, so that the mobile device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The mobile device 100 implements display functions via the GPU, the display screen 194, and the application processor, among other things. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The mobile device 100 may implement a camera function via the ISP, camera 193, video codec, GPU, display screen 194, application processor, etc.
The ISP is used to process the data fed back by the camera 193. The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the mobile device 100 is in frequency bin selection, the digital signal processor is used to perform a fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The mobile device 100 may support one or more video codecs. In this way, the mobile device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information rapidly and can also continuously self-learn.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the mobile device 100 (e.g., audio data, a phone book, etc.). In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like. The processor 110 performs various functional applications and data processing of the mobile device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The gyro sensor 180B may be used to determine the motion attitude of the mobile device 100. The air pressure sensor 180C is used to measure air pressure. In some embodiments, the mobile device 100 calculates altitude from the barometric pressure value measured by the air pressure sensor 180C to aid positioning and navigation. The magnetic sensor 180D includes a Hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the mobile device 100 in various directions (typically three axes). The distance sensor 180F is used to measure distance. The mobile device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the mobile device 100 may utilize the distance sensor 180F to measure distance for fast focusing. The proximity sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense the ambient light level. The fingerprint sensor 180H is used to collect fingerprints. The mobile device 100 can utilize the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint incoming-call answering, and the like. The temperature sensor 180J is used to detect temperature.
The proximity sensor 180G mainly has the following proximity detection modes:
1. The proximity sensor 180G sends out a signal in real time and receives the reflected signal. When no reflected signal is received, the detection result is "not proximate"; when a reflected signal is received, the detection result is "proximate". More precisely, the distance between the obstruction and the mobile device 100 is calculated from the propagation speed and duration of the signal, and the distance is compared with a preset distance threshold; if the distance is less than the preset distance threshold, the detection result is "proximate"; otherwise, the detection result is "not proximate". The signal includes an audio signal, an ultrasonic signal, an infrared signal, or a visible light signal.
2. The proximity sensor 180G transmits a signal and receives the reflected signal, and compares the received reflected-signal intensity with a preset signal intensity threshold; when the intensity is greater than or equal to the preset signal intensity threshold, the detection result is "proximate"; otherwise, the detection result is "not proximate". More precisely, the proximity sensor 180G determines proximity based on the relative intensities of the reflected signal received at different receiving positions. Specifically, if the intensity of the reflected signal received at a first receiving position of the proximity sensor 180G is greater than or equal to that received at a second receiving position, the detection result is "proximate"; otherwise, the detection result is "not proximate". The first receiving position is farther from the signal-transmitting site of the proximity sensor 180G than the second receiving position. The signal includes an audio signal, an ultrasonic signal, an infrared signal, or a visible light signal.
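The two proximity-detection modes above can be sketched as simple decision functions. This is only an illustrative sketch: the speed-of-sound value, the thresholds, and all function names are assumptions for the example, not values fixed by the present application.

```python
# Hedged sketch of the two proximity-detection modes described above.

SPEED = 343.0             # m/s, assuming an ultrasonic/audio signal
DIST_THRESHOLD = 0.05     # preset distance threshold in metres (assumed)
STRENGTH_THRESHOLD = 0.7  # preset signal-strength threshold (assumed)

def near_by_distance(round_trip_s):
    """Mode 1: distance computed from propagation speed and duration."""
    if round_trip_s is None:               # no reflected signal received
        return False
    distance = SPEED * round_trip_s / 2.0  # one-way distance to obstruction
    return distance < DIST_THRESHOLD

def near_by_strength(strength):
    """Mode 2: reflected-signal strength vs. a preset threshold."""
    return strength >= STRENGTH_THRESHOLD

def near_by_two_positions(strength_far, strength_near):
    """Mode 2 refinement: compare strengths at two receiving positions,
    the first being farther from the emitter than the second."""
    return strength_far >= strength_near
```

For example, a 0.2 ms round trip yields a one-way distance of about 3.4 cm, below the assumed 5 cm threshold, so the result is "proximate".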
The acceleration sensor 180E detects the acceleration of the mobile device 100 in real time. The mobile device 100 is provided with X, Y and Z axes along its width, length and height directions, respectively. The X, Y and Z axes may also be set along other directions of the mobile device 100, as long as they are mutually perpendicular. The acceleration sensor 180E detects and acquires, in real time, the acceleration and its components along the X, Y and Z axes, and from the acquired data plots a movement curve of the acceleration over time, as well as an X-axis curve, a Y-axis curve and a Z-axis curve of the acceleration components over time.
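One possible way to keep the per-axis samples and derive the movement curve is shown below; the class name and the use of the magnitude as the movement curve are illustrative assumptions, not requirements of the application.

```python
# Minimal sketch: store 3-axis acceleration samples; the magnitude over time
# forms the movement curve, the per-axis lists form the X/Y/Z-axis curves.
import math
from dataclasses import dataclass, field

@dataclass
class MotionRecorder:
    t: list = field(default_factory=list)
    x: list = field(default_factory=list)
    y: list = field(default_factory=list)
    z: list = field(default_factory=list)

    def sample(self, timestamp, ax, ay, az):
        """Store one 3-axis acceleration sample."""
        self.t.append(timestamp)
        self.x.append(ax)
        self.y.append(ay)
        self.z.append(az)

    def movement_curve(self):
        """Magnitude of acceleration at each time point."""
        return [math.sqrt(a * a + b * b + c * c)
                for a, b, c in zip(self.x, self.y, self.z)]
```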
Fig. 2(a)-2(b) are schematic diagrams of two scenarios in which a user makes a false touch while using a mobile device. As shown in fig. 2(a), when a user uses a mobile device to answer a call, listen to a voice message, or make an outgoing call, and the mobile device is close to the side of the user's head while its touch screen is not yet turned off, the user's head may falsely touch the screen due to its twisting motion. This affects normal use. Sometimes the posture in which the user places the mobile device against the side of the head is atypical, such as the posture shown in fig. 2(b). The proximity sensor 180G is generally disposed near the front camera (not shown) on the upper portion of the mobile device. Therefore, when the mobile device is held as shown in fig. 2(b), the proximity sensor 180G cannot acquire sensing data that reflects the actual situation, the screen-off flow is not triggered, and a false touch is likely to occur. In addition, when the mobile device determines whether it is close to the side of the user's head through changes of capacitance values on a capacitive screen, false touches caused by untimely screen-off processing of the touch screen also occur easily. It should be noted that fig. 2(a)-2(b) only show the user holding the mobile device with the right hand; they are used for illustration only, and the technical problem mentioned above also exists when the user holds the mobile device with the left hand.
The inventors studied and analyzed the reasons behind the false touches. As shown in fig. 3, the screen-off flow of the mobile device involves a framework layer (FWK), a hardware abstraction layer (HAL), a kernel layer (kernel), and an intelligent sensing hub. The framework layer is the API framework used by core applications; it provides various interface APIs for the application layer, including various components and services, to support developers' Android development. The hardware abstraction layer is an abstract interface over device kernel drivers and provides the higher-level Java API framework with application programming interfaces for accessing the underlying devices. The HAL contains a plurality of library modules, each of which implements an interface for a particular type of hardware component. When the framework API requires access to the device hardware,
the Android system loads the library module for that hardware component. The kernel layer is the foundation of the Android system; the final functions of the Android system are implemented through the kernel. The intelligent sensing hub is a solution based on a combination of software and hardware on a low-power microcontroller (MCU) running a lightweight real-time operating system (RTOS); its main function is to connect and process data from various sensor devices. The HAL interface definition language (HIDL) is an interface description language that specifies the interface between the HAL and the FWK.
FIG. 3 is a schematic diagram of the screen-off flow when the mobile device receives a second event upon approaching the side of the user's head. In the screen-off flow shown in fig. 3, the mobile device generates a second event when it approaches the side of the user's head; the second event is transmitted to the framework layer sequentially through the intelligent sensing hub, the kernel layer of the application processor (AP), and the hardware abstraction layer; the framework layer then starts the execution instruction of the screen-off flow, the execution instruction is transmitted to the kernel layer of the application processor through the hardware abstraction layer, and the kernel layer of the application processor specifically executes the screen-off. Measurements and experiments show that the whole process, from the moment the mobile device approaches the side of the user's head to the moment the touch screen is turned off, generally takes 200 ms-800 ms. The whole process takes a long time, so the mobile device's screen-off processing of the touch screen is not timely.
Embodiment One
A false touch prevention method provided in the first embodiment of the present application relates to fig. 4(a)-4(d) and fig. 5(c)-5(d). In the first embodiment, the starting position of the mobile device may be any position. For convenience of explanation, and considering the most common scenarios when a user actually uses a mobile device, the description proceeds, in conjunction with fig. 4(a)-4(d), from several typical starting positions of the mobile device: directly in front of the user's chest (fig. 4(a)), to the left front of the user's chest (not shown in fig. 4), and to the right front of the user's chest (fig. 4(d)).
The scene shown in fig. 4(a) is one in which the starting position of the mobile device is a certain position directly in front of the user's chest. This is one of the most common scenarios when a user uses a mobile device. For example, when a user is on a subway or a bus, or is walking or sitting, the user usually raises the mobile phone to a position directly in front of the chest to use it. In the scenario shown in fig. 4(a), the starting position of the mobile device is 30 cm directly in front of the user's chest. The distance of 30 cm is one representative value selected according to an adult's usage habits and arm length, and is not intended to exclude other values; distances of other values are within the scope of the present application. Distances of, for example, 10 cm, 15 cm, 20 cm, 23 cm, 35 cm or any other value may be set and selected according to the user's usage habits and arm length. The value may be an integer or a decimal, for example 26.5 cm. Of course, the direction of the starting position of the mobile device is not limited to directly in front of the user's chest; it may also be other directions, such as the left front or right front of the user's chest. The distance at the left front or right front of the user's chest may likewise be 30 cm, or any of the above-mentioned or other values, selected according to the user's usage habits and arm length; the value may be an integer or a decimal.
Take the scenario shown in fig. 4(a) as an example, i.e., the starting position of the mobile device is 30 cm directly in front of the user's chest. The movement locus of the mobile device is shown in fig. 4(b). First, when the mobile device is at home position 1, the user places a call through the mobile device, receives a call, or listens to a voice message; at this time, the user clicks a dial-out button, an answer button, or a play button. Then, the user holds the mobile device, starting from home position 1 and moving toward the side of the user's head, until it approaches the side of the head for talking or listening to the voice message. When the mobile device switches from the static state of home position 1 to a motion state and moves to position 2, the mobile device receives a first event and triggers an unresponsive flow. After execution of the unresponsive flow ends, the mobile device does not respond to touch operations on its touch screen and keeps this state for a preset duration; the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen. The processing duration of the unresponsive flow is so short relative to that of the screen-off flow that it is negligible. The processing duration of the unresponsive flow comprises the duration from when the mobile device receives the first event to when execution of the unresponsive flow ends; it does not include the preset duration. Therefore, the mobile device does not respond to touch operations on its touch screen within the preset duration after receiving the first event.
The mobile device continues to move from position 2; when it moves to position 3, the mobile device receives a second event and triggers execution of the screen-off flow. At position 3, the mobile device is only near the side of the user's head and does not touch it, such as the side of the user's face or the ear. The screen is turned off after the processing duration of the screen-off flow elapses. The partial area includes: the status bar on the upper portion of the touch screen, and the dial and navigation bar on the middle-lower portion of the touch screen.
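The unresponsive window described above, in which touch operations are ignored for a preset duration after the first event, can be sketched as follows. The class and parameter names are illustrative assumptions for the example; times are in seconds.

```python
# Sketch of the unresponsive window: after the first event, touch operations
# are ignored for a preset duration T (here 1.5 s, within the 1 s-2 s range
# mentioned later in the text).
class TouchFilter:
    def __init__(self, preset_T=1.5):
        self.preset_T = preset_T
        self.ignore_until = None   # no window active yet

    def on_first_event(self, now):
        """First event received: ignore touches for the next preset_T seconds."""
        self.ignore_until = now + self.preset_T

    def should_respond(self, now):
        """True if a touch operation at time `now` should be handled."""
        return self.ignore_until is None or now >= self.ignore_until
```

A touch at 0.5 s into the window is dropped; a touch after the window has elapsed is handled normally.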
The first event includes, but is not limited to: after the mobile device switches from a static state to a moving state, an event generated when the mobile device detects that the similarity between a movement curve obtained by its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold, determined as the mobile device moving toward the side of the user's head; and/or, after the mobile device switches from a static state to a moving state, an event generated when the mobile device detects that an image acquired by its front camera includes an image of a human ear, determined as the mobile device moving toward the side of the user's head.
The second event includes, but is not limited to: an event generated when it is detected that the distance between the mobile device and the side of the user's head is smaller than a preset threshold, determined as the mobile device approaching the side of the user's head; or, an event generated when it is detected that the intensity of a reflected signal, received by the mobile device, of a signal sent by itself is greater than or equal to a preset threshold, the signal being an electromagnetic wave signal or an audio signal; or, an event generated when it is detected that the intensity of the reflected signal received by the mobile device at a first position is greater than or equal to that received at a second position, where the first position is farther than the second position from the part of the mobile device that sends the signal, the signal being an electromagnetic wave signal or an audio signal; or, an event generated when it is detected that the area over which the capacitance of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold.
Whether the first event is generated is determined by the intelligent sensing hub based on sensor data transmitted by the sensors, which include but are not limited to an acceleration sensor and/or a gyroscope disposed on the mobile device. For simplicity, only the acceleration sensor is described as an example. The mobile device is provided with X, Y and Z axes along its width, length and height directions, respectively. The X, Y and Z axes may be set along other directions of the mobile device, as long as they are mutually perpendicular. The acceleration sensor detects and acquires, in real time, the acceleration and its components on the X, Y and Z axes, and from the acquired data draws a movement curve of the acceleration over time and/or an X-axis curve, a Y-axis curve and a Z-axis curve of the acceleration components over time. As the user uses the mobile device, it records data representing the movement curve of acceleration over time and/or data representing the X-axis, Y-axis and Z-axis curves of the acceleration components over time, based on the acceleration detected in real time. Before the mobile device leaves the factory, preset data representing a preset movement curve of acceleration over time and/or preset data representing a preset X-axis curve, a preset Y-axis curve and a preset Z-axis curve of the acceleration components over time are preset on the mobile device. That is, the movement curve can be decomposed into an X-axis curve, a Y-axis curve and a Z-axis curve; the preset movement curve can be decomposed into a preset X-axis curve, a preset Y-axis curve and a preset Z-axis curve.
The similarity between the movement curve and the preset movement curve is compared; when the similarity is greater than or equal to the preset similarity threshold, the two curves are considered similar, and the first event is generated.
Further, to improve comparison accuracy, the similarity comparison between the movement curve and the preset movement curve may be obtained by comparing the X-axis, Y-axis and Z-axis curves with the preset X-axis, Y-axis and Z-axis curves, respectively. For example, for the similarity between the movement curve and the preset movement curve to be considered greater than or equal to the preset similarity threshold, the similarities of all three axis-curve pairs may be required to be greater than or equal to the preset similarity threshold. Other options are also possible: for example, it may instead be required that any two of the three pairs (X-axis curve vs. preset X-axis curve, Y-axis curve vs. preset Y-axis curve, Z-axis curve vs. preset Z-axis curve) meet the preset similarity threshold. These variations can be adjusted according to the use case.
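The per-axis comparison can be sketched as follows. The application does not fix a similarity metric, so cosine similarity is used here purely as an assumed example, and the threshold value and the `require` parameter are likewise illustrative.

```python
# Sketch of the per-axis similarity check: compute a similarity per axis
# (cosine similarity, an assumed metric), then require either all three
# axes, or any two of the three, to meet the preset similarity threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def curves_similar(curve, preset, threshold=0.9, require="all"):
    """curve/preset: dicts mapping 'x', 'y', 'z' to sample lists
    (the decomposition of the movement curve into per-axis curves)."""
    hits = sum(
        cosine_similarity(curve[axis], preset[axis]) >= threshold
        for axis in ("x", "y", "z")
    )
    return hits == 3 if require == "all" else hits >= 2
```

With `require="all"` a single dissimilar axis rejects the match; with `require="any2"` the same data can still generate the first event, matching the relaxed variation described above.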
In addition, the preset movement curve can be corrected and adjusted by statistical means while the user uses the device. For example, after the similarity between a movement curve and the preset movement curve is determined to be greater than or equal to the preset similarity threshold, that qualifying movement curve is superimposed onto the preset movement curve to form a new preset movement curve. Qualifying movement curves may be accumulated to a certain number before superposition, or superposition may be performed whenever a qualifying movement curve appears. The number may be set to 2, 3, 4, etc., as needed. On the superimposed preset movement curve, values at the same time point that are abnormal or deviate greatly are discarded and replaced with the average or median of the nearby values at that time point. Through continuous correction and adjustment, the preset movement curve keeps approaching an ideal curve.
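This statistical correction step might look as follows. The deviation threshold, the choice of the mean for superposition, and the function name are all assumptions for illustration; the text allows other choices (e.g. the median).

```python
# Sketch of the correction step: superimpose qualifying movement curves onto
# the preset curve point by point, discarding values that deviate too much.
import statistics

def update_preset(preset, qualifying_curves, max_dev=2.0):
    """All curves are assumed sampled at the same time points (equal length).
    max_dev is an assumed outlier-rejection threshold."""
    updated = []
    for i, p in enumerate(preset):
        vals = [c[i] for c in qualifying_curves]
        kept = [v for v in vals if abs(v - p) <= max_dev]  # drop outliers
        if kept:
            updated.append(statistics.mean([p] + kept))    # superimpose
        else:
            updated.append(p)                              # keep old value
    return updated
```

Repeated application nudges each point of the preset curve toward the values observed in qualifying real-use curves while ignoring abnormal samples.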
In determining the similarity between the movement curve and the preset movement curve, the characteristics of the movement curve and the preset movement curve may be acquired, and a comparison may be made based on the acquired characteristics of the two to determine whether to generate the first event. For the movement curve, the features obtained include: the monotonicity change conditions of the X-axis curve, the Y-axis curve and the Z-axis curve, and the number of wave crests and wave troughs and the change conditions of the wave crests and the wave troughs contained in the curves respectively. For the preset movement curve, the obtained characteristics include: the preset X-axis curve, the preset Y-axis curve, the monotonicity change condition of the preset Z-axis curve, and the number of wave crests and wave troughs and the change condition of the wave crests and the wave troughs contained in the curves respectively. The above-described captured features are merely exemplary and do not limit the scope of the captured features. Other features that can be used to identify the curve are also within the scope of the captured features.
Taking the features described above as an example, a specific acquisition manner is further explained. Feature acquisition for the movement curve and the preset movement curve may proceed as follows: divide the time on the abscissa axes of the movement curve and the preset movement curve into a plurality of time periods; express the portion of each curve in each time period with an approximating function; differentiate the approximating function in each time period, and obtain from the derivative the monotonicity behavior, such as monotonically increasing, monotonically decreasing, first increasing then decreasing, or first decreasing then increasing; from the monotonicity behavior, obtain the transition points where the curve changes from increasing to decreasing and from decreasing to increasing; the former transition points are peaks and the latter are troughs, so the number of peaks and troughs and their variation can be obtained. This specific acquisition manner is only an exemplary illustration and does not limit the scope of feature acquisition; other ways of obtaining the features are within the scope of the embodiments of the present application.
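On sampled data, the derivative of the approximating function can be approximated by finite differences, from which the monotonicity transitions (and hence peaks and troughs) fall out directly. The sketch below makes that one simplifying assumption; the sample values in the usage note are made up.

```python
# Sketch of the feature extraction above: approximate the derivative with
# finite differences and count peaks (increase -> decrease transitions)
# and troughs (decrease -> increase transitions).
def peaks_and_troughs(samples):
    peaks, troughs = 0, 0
    diffs = [b - a for a, b in zip(samples, samples[1:])]  # ~ derivative sign
    for d_prev, d_next in zip(diffs, diffs[1:]):
        if d_prev > 0 and d_next < 0:
            peaks += 1    # monotone increase then decrease: a peak
        elif d_prev < 0 and d_next > 0:
            troughs += 1  # monotone decrease then increase: a trough
    return peaks, troughs
```

For instance, the sequence 0, 1, 2, 1, 0, 1 rises, falls, then rises again, giving one peak and one trough.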
Whether the second event occurs may be determined by the intelligent sensing hub based on data detected by the proximity sensor. Specifically: 1. The proximity sensor of the mobile device sends out signals in real time and receives the reflected signals; when the proximity sensor receives no reflected signal, the intelligent sensing hub determines that the detection result is "not proximate" and does not generate the second event; when a reflected signal is received, the intelligent sensing hub determines that the detection result is "proximate" and generates the second event. More precisely, the distance between the obstruction and the mobile device is calculated from the propagation speed and duration of the signal, and compared with a preset distance threshold; if the distance is less than the threshold, the intelligent sensing hub determines that the detection result is "proximate" and generates the second event; otherwise, it determines "not proximate" and does not generate the second event. The signal includes an audio signal, an ultrasonic signal, an infrared signal, or a visible light signal. 2. The proximity sensor of the mobile device transmits a signal and receives the reflected signal; the received reflected-signal intensity is compared with a preset signal intensity threshold, and when it is greater than or equal to the threshold, the intelligent sensing hub determines "proximate" and generates the second event; otherwise, it determines "not proximate" and does not generate the second event.
More precisely, the intelligent sensing hub determines whether to generate the second event based on the relative intensities of the reflected signal received by the proximity sensor of the mobile device at different receiving positions. Specifically, if the intensity of the reflected signal received at a first receiving position of the proximity sensor is greater than or equal to that received at a second receiving position, the detection result is "proximate" and the intelligent sensing hub generates the second event; otherwise, the detection result is "not proximate" and the second event is not generated. The first receiving position is farther than the second receiving position from the signal-transmitting site of the proximity sensor. The signal includes an audio signal, an ultrasonic signal, an infrared signal, or a visible light signal.
Whether the second event occurs may also be determined from the change in capacitance values of the capacitive touch screen of the mobile device when it is close to the user's face. Specifically, the detection result is determined to be "proximate" or "not proximate" by detecting whether the area over which the capacitance of the capacitive touch screen changes is greater than or equal to a first preset threshold and/or the capacitance value is greater than or equal to a second preset threshold; when the detection result is "proximate", the intelligent sensing hub generates the second event; when it is "not proximate", the second event is not generated.
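The capacitive-screen criterion can be sketched as a check over a grid of per-cell capacitance changes. The grid representation, both threshold values, and the use of "and" (rather than the text's "and/or") to combine the two conditions are assumptions for this example.

```python
# Sketch of the capacitive-screen criterion for the second event: the area of
# changed cells and the magnitude of the capacitance change must both reach
# preset thresholds (the text also allows either alone, via "and/or").
def second_event_from_capacitance(delta_grid, area_threshold=6, cap_threshold=0.5):
    """delta_grid: 2-D list of per-cell capacitance changes (assumed units)."""
    changed_cells = [c for row in delta_grid for c in row if c > 0]
    area_ok = len(changed_cells) >= area_threshold       # first preset threshold
    cap_ok = any(c >= cap_threshold for c in changed_cells)  # second threshold
    return area_ok and cap_ok
```

A large contact patch such as a cheek changes many cells at once, which a fingertip does not, so the area condition helps distinguish a face from a deliberate touch.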
As shown in FIG. 5(a), the mobile device is based on the Android system. After the intelligent sensing hub generates the first event, it uploads the first event to the Sensor Driver in the kernel layer of the application processor (AP), and the Sensor Driver starts an unresponsive flow such as a freeze flow. Specifically, the Sensor Driver sends an unresponsive instruction such as a freeze instruction to the TouchScreen Driver, so that the mobile device does not respond to touch operations on its touch screen within the preset duration T after receiving the first event; the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen. The partial area includes: the status bar on the upper portion of the touch screen, and the dial and navigation bar on the middle-lower portion of the touch screen. After the intelligent sensing hub generates a second event, it uploads the second event to the Sensor Driver in the AP kernel layer; the Sensor Driver uploads the second event to the Sensor HIDL service of the hardware abstraction layer HAL, which continues to upload it to the SensorManager of the framework FWK layer, which in turn hands it to the PowerManager of the FWK layer; finally, the PowerManager of the FWK layer starts the screen-off flow and sends a screen-off instruction; the screen-off instruction is transmitted to the touch screen via the Hardware Composer (HWC) of the HAL; after receiving the screen-off instruction, the touch screen executes the screen-off flow.
In summary, the unresponsive flow is simplified compared with the screen-off flow in the steps and links involved; therefore, its processing duration is obviously shorter than that of the screen-off flow. The processing duration of the unresponsive flow comprises the duration from when the mobile device receives the first event to when execution of the unresponsive flow ends; it does not include the preset duration. The freeze flow and the freeze instruction are one specific form of the unresponsive flow and the unresponsive instruction, respectively, and are not used to limit their scope. Any way of cutting off the link between a touch point on the touch screen of the mobile device and subsequent processing is an implementation of the unresponsive flow or the unresponsive instruction, and also falls within the scope of the embodiments of the present application.
On the basis of fig. 4(b), the first embodiment of the present application will be further explained with reference to fig. 5(c). When the mobile device is located at position 2, the mobile device receives the first event; at this time t0, the mobile device starts the unresponsive flow. Within the preset duration T after the unresponsive flow finishes, the mobile device does not respond to touch operations on its touch screen; the touch points of such touch operations may be located in a partial area or any area of the touch screen. T0 is the execution duration of the unresponsive flow, that is, the duration from when the unresponsive flow is started until its execution is completed. According to measurement and experiment, T0 is less than 20 ms. The preset duration T may be set to any duration between 1 s and 2 s, for example 1.5 s, which meets the requirement of the first embodiment. When the mobile device is located at position 3, the mobile device receives the second event; at this time t1, the mobile device starts the screen-off flow. T1 is the execution duration of the screen-off flow, i.e. the duration from when the screen-off flow is started until its execution is completed. Through measurement and experiment, 200 ms ≤ T1 ≤ 800 ms. At t1 + T1, the screen-off flow finishes and the screen is off. As shown in fig. 5(c), t0 < t0 + T0 < t1 < t1 + T1 ≤ t0 + T < t0 + T0 + T. Because T0 is very small relative to T and T1, T0 is negligible. A simplified diagram of the relevant times and durations after this omission is shown in fig. 5(d). The relationship between the relevant times and durations simplifies to: t0 < t1 < t1 + T1 ≤ t0 + T.
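The timing relationship above can be expressed as a short check. This is a hypothetical sketch; the function name and the sample millisecond values are illustrative assumptions, while the ranges of T0, T1, and T follow the measured values stated in the text:

```python
# Hypothetical sketch of the timing relationship in the first embodiment.
# t0: moment the first event is received (unresponsive flow starts)
# T0: execution duration of the unresponsive flow (< 20 ms per the text)
# t1: moment the second event is received (screen-off flow starts)
# T1: execution duration of the screen-off flow (200-800 ms per the text)
# T:  preset unresponsive window (1-2 s per the text)

def screen_off_within_window(t0, T0, t1, T1, T):
    """Return True if the screen-off flow finishes inside the preset
    unresponsive window, i.e. t0 < t1 < t1 + T1 <= t0 + T
    (T0 is negligible relative to T and T1, so it is ignored here)."""
    return t0 < t1 and t1 + T1 <= t0 + T

# Example values in milliseconds: first event at 0 ms, second event at
# 400 ms, screen-off takes 600 ms, preset window T = 1500 ms.
print(screen_off_within_window(0, 20, 400, 600, 1500))   # True
print(screen_off_within_window(0, 20, 1200, 600, 1500))  # False: finishes too late
```

When the check holds, every false touch between t0 and the end of the screen-off flow lands inside the unresponsive window, which is the property the first embodiment relies on.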
In this way, after the mobile device is located at position 2 and the unresponsive flow is started, the mobile device does not respond to touch operations on its touch screen within the preset duration T; the touch points of such touch operations may be located in a partial area or any area of the touch screen. The partial area includes: the status bar on the upper portion of the touch screen, and the dial and the navigation bar on the middle-lower portion of the touch screen. In addition, the screen-off flow is also completed within the preset duration T, i.e. within the period from t0 to t0 + T. That is, from t0 to t0 + T, the mobile device does not respond to touch operations on its touch screen, whether the touch points are located in a partial area or any area of the touch screen. Even if a false touch occurs within the preset duration T, no unintended operation results from it. Since the screen of the mobile device is off after t0 + T, even if a false touch occurs then, no unintended operation results either. Therefore, from t0 onward, no unintended operation caused by a false touch is generated.
In the first embodiment, the unresponsive flow is started earlier than the screen-off flow, and the processing duration of the unresponsive flow is much shorter than that of the screen-off flow; the screen-off flow is completed before the preset duration following the end of the unresponsive flow expires. This reduces or even avoids false touches on the mobile device, solves the technical problem that untimely screen-off processing of the touch screen easily causes the side of the user's head to falsely touch the touch screen, and improves the user experience.
In the first embodiment of the present application, the starting position of the mobile device may also be a certain position at the right front of the user's chest, or a certain position at the left front of the user's chest. Fig. 4(d) shows a scenario in which the mobile device is located at a certain position at the right front of the user's chest. Although the drawings of the specification do not show a starting position at the left front of the user's chest, such a starting position is readily conceivable to those skilled in the art. Different starting positions of the mobile device lead to different movement tracks as the mobile device moves from the starting position to the side of the user's head, and therefore to different time curves of the acceleration components on the X, Y, and Z axes. Although the mobile device obtains different movement curves for different starting positions, these curves embody the same basic pattern. For comparison, the positions are uniformly chosen 30 cm from the chest, namely 30 cm directly in front of the user's chest, 30 cm at the left front of the user's chest, and 30 cm at the right front of the user's chest. The distance of 30 cm is one representative value selected according to the usage habits and arm length of an adult, and is not used to exclude other values; distances of other values also fall within the scope of the embodiments of the present application. The value may be an integer or a decimal, for example 22.8 cm.
Here, directly in front of the user's chest refers to the direction perpendicular to the user's chest; the left front of the user's chest refers to the 45-degree direction between directly in front of the chest and directly to the left of the chest; and the right front of the user's chest refers to the 45-degree direction between directly in front of the chest and directly to the right of the chest.
Although in each of fig. 4(a)-4(d) the user holds the mobile device with the right hand and moves it to a position near the right side of the user's head, those skilled in the art will appreciate that the above is merely illustrative: the user may also hold the mobile device with the left hand and move it to a position near the left side of the user's head. For example, holding the mobile device with the left hand, the user may move it from a position directly in front of, at the left front of, or at the right front of the user's chest to near the user's left ear or left face. The above also falls within the scope of the embodiments of the present application.
Fig. 6(b)-6(d) are schematic diagrams of the time curves of the acceleration components on the X, Y, and Z axes detected in real time while the user holds the mobile device with the right hand and moves it toward the side of the user's head from starting positions 30 cm directly in front of, 30 cm at the left front of, and 30 cm at the right front of the user's chest, respectively. For simplicity, fig. 6(a)-6(d) do not plot the overall acceleration versus time; the acceleration-versus-time curve may be decomposed into the time curves of its components on the X, Y, and Z axes.
For illustrative purposes, the X, Y, and Z axes in fig. 6(b)-6(d) run along the width, length, and height directions of the mobile device, respectively. The X, Y, and Z axes may also be arranged along other directions of the mobile device, as long as they remain mutually perpendicular. In addition, when the right hand is replaced with the left hand and the mobile device is moved to the vicinity of the left ear or left face of the user's head, compared with the curves in fig. 6(b)-6(d), the measured X-axis curve is reversed or approximately reversed, while the Y-axis and Z-axis curves are the same or approximately the same.
The curves shown in fig. 6(b)-6(d) are, respectively, a preset X-axis curve, a preset Y-axis curve, and a preset Z-axis curve established before the user uses the mobile device. After the user uses the mobile device, the time curves of the acceleration components on the X, Y, and Z axes, detected in real time by the acceleration sensor on the mobile device, are the X-axis curve, the Y-axis curve, and the Z-axis curve, respectively. The preset movement curve synthesized from the preset X-axis, Y-axis, and Z-axis curves may be corrected and adjusted by statistical means.
In addition, the time curves of the acceleration components on the X, Y, and Z axes detected in real time differ depending on whether the mobile device moves from the same starting position toward the side of the user's head or toward another part of the user's head. To illustrate this difference, fig. 6(a) is provided. Fig. 6(a) shows the time curves of the acceleration components on the X, Y, and Z axes detected in real time while the user holds the mobile device with the right hand and moves it from 30 cm directly in front of the user's chest toward the front of the user's head. In fig. 6(a)-6(d), the unit of the abscissa is 10 ms and the unit of the ordinate is the gravitational acceleration g.
Although fig. 6(a) -6(d) are graphs plotting acceleration components of real-time detected acceleration in the X, Y, and Z axes as a function of time for a user holding the mobile device with the right hand and moving from different starting positions to the vicinity of the user's right ear or right face; it will be appreciated by those skilled in the art that the above is merely illustrative and that the user may also hold the mobile device with the left hand and move it to the vicinity of the user's left ear or face to plot acceleration components of the real-time detected acceleration along the X, Y and Z axes as a function of time. And the foregoing also falls within the scope of the embodiments of the present application.
Comparing fig. 6(a) and fig. 6(b): in fig. 6(a), the acceleration component on the X axis is first flat at about 0 g, then fluctuates in the range of -0.2 g to 0.2 g, and then levels off again at about 0 g; the acceleration component on the Y axis slowly increases from about 0.6 g to about 1 g; the acceleration component on the Z axis first stabilizes at about 0.8 g, then rises to about 1.4 g, then falls to about -0.4 g, then oscillates, and finally stabilizes at about 0.3 g. In fig. 6(b), the acceleration component on the X axis first decreases from about 0 g to about -0.8 g, then slowly increases to about 1.2 g, and then decreases to about 0.8 g; the acceleration component on the Y axis is relatively gentle and fluctuates smoothly between about 0.3 g and about 0.8 g, slowly decreasing from about 0.5 g to about 0.3 g and then slowly rising to about 0.8 g; the acceleration component on the Z axis begins with a large peak of about 2 g, followed by a brief trough with a valley value of about -3 g, and then quickly recovers to about 0 g. There is thus a significant difference between the two sets of curves: a curve that is the same as or similar to the curve shown in fig. 6(b) will not be the same as or similar to the curve shown in fig. 6(a). Therefore, when the starting positions are the same or close, whether to generate the first event may be determined by whether the similarities between the X-axis, Y-axis, and Z-axis curves and the preset X-axis, Y-axis, and Z-axis curves, respectively, are each greater than or equal to a preset similarity threshold. Admittedly, determining whether to generate the first event based on this criterion may still involve a certain degree of error.
In the embodiments of the present application, this error has little impact on the user experience, because other conditions are also defined, such as the mobile device being in an outgoing-call state or a listening-to-voice-message state.
The data on which the preset movement curve is based may be obtained from multiple tests and statistical processing before the mobile device is sold on the market. For example, the average or median of the multiple test data is calculated, and the resulting data are used to plot the preset movement curve. The movement curve is then compared with the preset movement curves in turn: if the movement curve is the same as a preset movement curve, or its similarity to one is greater than or equal to the preset similarity threshold, the first event is generated; if, after comparison with all preset movement curves, the movement curve differs from every one of them and every similarity is smaller than the preset similarity threshold, the first event is not generated.
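The statistical construction of a preset curve and the threshold comparison described above can be sketched as follows. This is a hypothetical illustration: the function names are invented, and the similarity measure (one minus a normalized mean absolute difference) is only one simple choice, not the measure used by the original:

```python
import statistics

def preset_curve(test_runs):
    """Build a preset axis curve as the point-wise median of multiple
    test recordings (each recording is a list of acceleration samples
    taken at the same time points)."""
    return [statistics.median(samples) for samples in zip(*test_runs)]

def similarity(curve, preset):
    """A toy similarity in [0, 1]: 1 minus the mean absolute difference
    between the two equally sampled curves, normalized by the preset
    curve's largest magnitude."""
    diffs = [abs(a - b) for a, b in zip(curve, preset)]
    scale = max(max(abs(v) for v in preset), 1e-9)
    return max(0.0, 1.0 - (sum(diffs) / len(diffs)) / scale)

def first_event(curve, presets, threshold=0.8):
    """Generate the first event if the movement curve matches any preset
    movement curve at or above the similarity threshold."""
    return any(similarity(curve, p) >= threshold for p in presets)

runs = [[0.0, 0.5, 1.0], [0.1, 0.6, 0.9], [0.0, 0.4, 1.1]]
p = preset_curve(runs)                      # point-wise medians: [0.0, 0.5, 1.0]
print(first_event([0.05, 0.5, 1.0], [p]))   # True: close to the preset curve
print(first_event([2.0, -2.0, 2.0], [p]))   # False: far from the preset curve
```

In practice each of the X-, Y-, and Z-axis curves would be compared against its own preset curve, and the first event generated only when all three similarities clear the threshold.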
When calculating the similarity between the movement curve and the preset movement curve, features of both curves may be acquired and compared to determine whether to generate the first event. For the movement curve, the acquired features may be: the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, and the number of peaks and troughs in each curve and how they change. For the preset movement curve, the acquired features may be: the monotonicity changes of the preset X-axis, Y-axis, and Z-axis curves, and the number of peaks and troughs in each curve and how they change. The features described above are merely exemplary and do not limit the scope of acquirable features; other features that can be used to identify the curves also fall within that scope.
Taking the features in the above exemplary description as an example, a specific acquisition manner is further explained. The features of the movement curve and the preset movement curve may be acquired as follows: divide the time along the abscissa of the movement curve and the preset movement curve into several periods; express the portion of each curve in each period with an approximating function; differentiate the approximating function in each period and, from the sign of the derivative, obtain the monotonicity changes, such as monotonically increasing, monotonically decreasing, increasing then decreasing, or decreasing then increasing; from the monotonicity changes, obtain the transition points where the curve changes from increasing to decreasing and from decreasing to increasing; the former transition points are peaks and the latter are troughs, from which the number of peaks and troughs and how they change can be obtained. This specific acquisition manner is only an exemplary illustration and does not limit the scope of feature acquisition; other ways of acquiring the features also fall within the scope of the embodiments of the present application.
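On sampled sensor data, the derivative-based peak/trough extraction above reduces to watching sign changes of finite differences. A minimal sketch under that assumption (the function name is invented, and finite differences stand in for differentiating an approximating function):

```python
def peaks_and_troughs(samples):
    """Count peaks (rise then fall) and troughs (fall then rise) in a
    sampled curve using sign changes of the finite difference, which
    plays the role of the derivative in the text."""
    # Signs of consecutive differences: +1 rising, -1 falling, 0 flat.
    signs = []
    for a, b in zip(samples, samples[1:]):
        d = b - a
        signs.append(0 if d == 0 else (1 if d > 0 else -1))
    peaks = troughs = 0
    for s_prev, s_next in zip(signs, signs[1:]):
        if s_prev > 0 and s_next < 0:
            peaks += 1        # transition: monotonically increasing -> decreasing
        elif s_prev < 0 and s_next > 0:
            troughs += 1      # transition: monotonically decreasing -> increasing
    return peaks, troughs

print(peaks_and_troughs([0, 1, 2, 1, 0, 1, 2]))  # (1, 1): one peak, one trough
```

The resulting counts, together with the monotonicity pattern itself, form the feature vector that is compared between the movement curve and the preset movement curve.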
Although fig. 6(b)-6(d) show three common starting positions for the mobile device, there may be other starting positions in actual use. In addition, while moving toward the side of the user's head, the mobile device is usually also rotated about itself by some angle. To cope with these more complicated situations, a gyroscope may be provided on the mobile device, and the similarity between a rotation curve synthesized from the rotation angles detected by the gyroscope in real time and a preset rotation curve may be used to assist in determining whether to generate the first event. The similarity comparison between the rotation curve and the preset rotation curve is similar to that between the movement curve and the preset movement curve, and is not described again here.
Example two
When the user places the mobile device in the irregular posture shown in fig. 2(b) while making a call, receiving a call, or listening to a voice message, the mobile device cannot receive the second event because of where the proximity sensor 180G is placed, so the false touch prevention method of the first embodiment cannot handle this situation effectively.
Therefore, a false touch prevention method according to the second embodiment of the present application is provided, which relates to fig. 8(a)-8(b). In the second embodiment of the present application, the starting position of the mobile device may be any position. Considering common scenarios in actual use, for convenience of explanation, the starting position of the mobile device is exemplarily illustrated as a certain position directly in front of the user's chest (fig. 4(a)). Of course, starting positions at the left front of the user's chest (not shown in the figure) and at the right front of the user's chest (fig. 4(d)) are also common scenarios in actual use, and are not described again here.
As shown in fig. 8(a)-8(b), the false touch prevention method according to the second embodiment of the present application is applied to a mobile device. The mobile device is based on the system of Figure BDA0002277682930000201. The smart sensor hub determines whether to generate a first event based on the received sensor data. When the smart sensor hub generates a first event, the first event is reported from the smart sensor hub to the Sensor Driver of the kernel layer of the application processor AP, and the Sensor Driver then starts an unresponsive flow such as a freeze flow. Specifically, the Sensor Driver sends an unresponsive instruction such as a freeze instruction to the TouchScreen Driver, so that from the time the first event is received, the mobile device does not respond to touch operations on its touch screen; the touch points of such touch operations may be located in a partial area or any area of the touch screen. The partial area includes: the status bar on the upper portion of the touch screen, and the dial and the navigation bar on the middle-lower portion of the touch screen. After a certain condition is satisfied, such as after the call ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device again responds to touch operations on its touch screen, whether the touch point is located in a partial area or any area of the touch screen. The freeze flow and the freeze instruction are merely one specific form of the unresponsive flow and the unresponsive instruction, respectively, and are not used to limit their scope.
Any approach that interrupts the link between a touch point on the touch screen of the mobile device and the subsequent processing is an implementation of the unresponsive flow or the unresponsive instruction, and also falls within the protection scope of the embodiments of the present application.
Unlike the first embodiment of the present application, the second embodiment does not rely on the second event and does not consider the preset duration. That is, no proximity sensor need be provided on the mobile device. Of course, a proximity sensor may still be provided, but no operation is performed based on its detection data. While the mobile device does not respond to touch operations on its touch screen, the touch screen remains in the bright-screen state, and returns to the normal state only after a preset condition is satisfied. In this way, when the user places the mobile device against the side of the head in an irregular posture, once the first event is generated, the mobile device does not respond to touch operations on its touch screen, whether the touch point is located in a partial area or any area of the touch screen. This unresponsiveness is no longer limited by the preset duration; to remove the restriction, a certain condition must be satisfied, such as the call ending, or a preset gesture operation and/or a preset side-key operation being detected. Once such a condition is satisfied, the mobile device returns to the normal state. The second embodiment of the present application effectively solves the technical problem that the side of the user's head falsely touches the touch screen when the user makes a call, answers a call, or listens to a voice message while placing the mobile device against the side of the head in an irregular posture.
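The freeze/release behavior of the second embodiment (freeze on the first event; unfreeze only when the call ends or a preset gesture or side-key operation is detected, with no duration limit) can be sketched as a small state holder. The class and method names are invented for illustration, not part of the original:

```python
class TouchGuard:
    """Hypothetical sketch of the second embodiment's touch handling:
    freeze on the first event, unfreeze only when a release condition
    holds. No proximity sensor and no preset duration are involved."""

    def __init__(self):
        self.frozen = False

    def on_first_event(self):
        self.frozen = True          # stop responding to touch operations

    def on_condition(self, call_ended=False, preset_gesture=False,
                     preset_side_key=False):
        if call_ended or preset_gesture or preset_side_key:
            self.frozen = False     # restore normal touch handling

    def responds_to_touch(self):
        return not self.frozen

guard = TouchGuard()
guard.on_first_event()
print(guard.responds_to_touch())      # False while frozen
guard.on_condition(call_ended=True)
print(guard.responds_to_touch())      # True after the call ends
```

Note that the screen stays bright throughout; only the touch response is suspended, which is what distinguishes this embodiment from a screen-off flow.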
Unless otherwise specified, the related contents of the second embodiment of the present application are the same as those of the first embodiment, and are not described in detail again here.
EXAMPLE III
As described above, compared with the solution of the first embodiment, some suboptimal solutions may also be provided that improve only one aspect, such as shortening the flow processing time.
Therefore, a false touch prevention method according to the third embodiment of the present application is provided. The false touch prevention method is applied to a mobile device and relates to fig. 9(a)-9(b). In the third embodiment of the present application, the starting position of the mobile device may be any position, including but not limited to a certain position directly in front of the user's chest (fig. 4(a)), a certain position at the left front of the user's chest (not shown in the figure), and a certain position at the right front of the user's chest (fig. 4(d)).
The false touch prevention method shown in fig. 9(a)-9(b) is applied to a mobile device. The mobile device is based on the system of Figure BDA0002277682930000211. When the smart sensor hub generates a second event, it uploads the second event to the kernel layer of the application processor AP. The kernel layer of the application processor AP no longer uploads the second event to the hardware abstraction layer HAL or to the framework FWK layer above the hardware abstraction layer HAL. Instead, the kernel layer of the application processor AP controls the mobile device not to respond to touch operations on its touch screen; the touch points of such touch operations may be located in a partial area or any area of the touch screen. The partial area includes: the status bar on the upper portion of the touch screen, and the dial and the navigation bar on the middle-lower portion of the touch screen.
The freeze flow and the freeze instruction mentioned above are only one specific form of the unresponsive flow and the unresponsive instruction, and are not used to limit their scope. Any approach that interrupts the link between a touch point on the touch screen of the mobile device and the subsequent processing is an implementation of the unresponsive flow or the unresponsive instruction, and also falls within the protection scope of the embodiments of the present application.
Compared with the flow shown in fig. 3, the flow shown in fig. 9(a)-9(b) of the third embodiment of the present application is clearly simplified, and the processing time is correspondingly shortened. This reduces or even avoids false touches, and solves the technical problem that when the user holds the mobile device close to the side of the head to talk or listen to a voice message, untimely screen-off processing of the touch screen easily causes the side of the user's head to falsely touch the touch screen, inconveniencing the user and degrading the user experience.
Unless otherwise specified, the related contents of the third embodiment of the present application are the same as those of the first embodiment, and are not described in detail again here.
Example four
Compared with the solution of the first embodiment, it is also possible to provide some suboptimal solutions that improve only one aspect, such as starting the flow processing earlier.
Therefore, a false touch prevention method according to the fourth embodiment of the present application is provided. The false touch prevention method is applied to a mobile device and relates to fig. 10(a)-10(b). In the fourth embodiment of the present application, the starting position of the mobile device may be any position, including but not limited to a certain position directly in front of the user's chest (fig. 4(a)), a certain position at the left front of the user's chest (not shown in the figure), and a certain position at the right front of the user's chest (fig. 4(d)).
The false touch prevention method shown in fig. 10(a)-10(b) is applied to a mobile device. The mobile device is based on the system of Figure BDA0002277682930000212. When the smart sensor hub generates a first event, it uploads the first event to the Sensor Driver of the kernel layer of the application processor AP; the Sensor Driver uploads the first event to the Sensor Hidl service of the hardware abstraction layer HAL; the Sensor Hidl service in turn uploads it to the SensorManager of the framework FWK layer; the SensorManager passes it to the PowerManager of the framework FWK layer; the PowerManager of the framework FWK layer then starts a screen-off flow and sends a screen-off instruction; the screen-off instruction is transmitted to the kernel layer of the application processor AP via the Hardware Composer HWC of the hardware abstraction layer HAL, and the screen-off flow is executed.
Compared with the flow shown in fig. 3, although the flow shown in fig. 10(a)-10(b) of the fourth embodiment of the present application uses the same screen-off flow, the start of the flow processing is advanced from the generation of the second event to the generation of the first event, so the screen-off processing begins earlier. This reduces or even avoids false touches, and solves the technical problem that when the user holds the mobile device close to the side of the head to talk or listen to a voice message, untimely screen-off processing of the touch screen easily causes the side of the user's head to falsely touch the touch screen, inconveniencing the user and degrading the user experience.
Unless otherwise specified, the related contents of the fourth embodiment of the present application are the same as those of the first embodiment, and are not described in detail again here.
In addition, in the first to fourth embodiments, fig. 7 illustrates the partial areas on the touch screen of the mobile device where false touches are likely while the user holds the mobile device. Among these partial areas, the status bar 601 is located at the upper portion of the touch screen and is mainly an area the user's ear easily touches by mistake; the dial 602 and the navigation bar 603 are located at the middle-lower portion of the touch screen and are mainly areas the side of the user's face easily touches by mistake. Of course, the status bar 601, the dial 602, and the navigation bar 603 may correspond not only to the user's ear and the side of the user's face but also to other parts of the side of the user's head. Specifically, the mobile device may be prevented from responding to touch operations on the status bar 601, the dial 602, and the navigation bar 603 by masking hits on those regions of the touch screen.
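Masking hits on the regions named above amounts to a rectangle hit test on the touch coordinates. A hypothetical sketch; the pixel coordinates and region sizes below are illustrative assumptions, not values from the original:

```python
# Hypothetical masked regions, as (left, top, right, bottom) rectangles
# on an assumed 1080 x 2340 screen. The real regions would come from the
# layout of the status bar 601, dial 602, and navigation bar 603.
STATUS_BAR = (0, 0, 1080, 80)        # upper portion of the touch screen
DIAL       = (0, 1400, 1080, 2200)   # middle-lower portion
NAV_BAR    = (0, 2200, 1080, 2340)   # bottom navigation bar
MASKED = [STATUS_BAR, DIAL, NAV_BAR]

def touch_is_masked(x, y, regions=MASKED):
    """Return True if the touch point falls in a masked region, in which
    case the mobile device should not respond to it."""
    return any(l <= x < r and t <= y < b for (l, t, r, b) in regions)

print(touch_is_masked(500, 40))     # True: inside the status bar
print(touch_is_masked(500, 1000))   # False: unmasked middle of the screen
```

Masking only these regions (rather than the whole screen) matches the "partial area" option in the text, where the ear tends to hit the status bar and the side of the face tends to hit the dial and navigation bar.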
In addition, in the first embodiment, it generally takes 200 ms to 800 ms from the reception of the second event to the completion of the screen-off of the touch screen of the mobile device. Therefore, the preset duration T may be any duration from 1 s to 2 s, and may also be adjusted as needed.
In addition, in the first to fourth embodiments, the acceleration sensor and/or the proximity sensor provided in the mobile device are not limited to one, and a plurality of acceleration sensors and/or proximity sensors may be provided. Proximity sensors include, but are not limited to, proximity light sensors, infrared transmitters and infrared receivers, and audio transmitters and audio receivers, among others. The proximity sensor may be disposed on an upper portion of a touch screen of the mobile device facing a user when the mobile device is in a call-out state or a call-in state or a voice-listening state. A gyroscope may further be provided on the mobile device.
In addition, in the first to fourth embodiments, an image may be acquired using only a front camera on the mobile device, and whether to generate the second event, indicating that the mobile device approaches the side of the user's head, may be determined by identifying whether the image contains an ear image; alternatively, the front camera may be combined with the acceleration sensor to improve the accuracy with which the mobile device determines whether to generate the second event. The ear image here is an ear image acquired from the side of the head, not from the front or another aspect of the head, which ensures that the acquired image contains the complete ear contour and the concave-convex structure inside the ear.
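The two decision modes above (camera only, or camera combined with the acceleration-curve match) can be sketched as a single decision function. The function and parameter names are hypothetical, and the ear-detection and curve-match results are assumed to come from elsewhere:

```python
def generate_second_event(ear_detected, movement_matches=None):
    """Hypothetical decision for the second event (device near the side
    of the user's head).

    Camera-only mode: pass only ear_detected; the ear image decides.
    Combined mode: pass movement_matches as well; requiring both the ear
    image and the acceleration-curve match improves accuracy, per the
    text.
    """
    if movement_matches is None:
        return ear_detected
    return ear_detected and movement_matches

print(generate_second_event(True))          # True: camera-only mode
print(generate_second_event(True, False))   # False: curves did not match
```

Requiring both signals in combined mode trades a little sensitivity for fewer spurious second events, which is the stated reason for pairing the camera with the acceleration sensor.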
In addition, in the first to fourth embodiments, the making or receiving of a call by the user is not limited to a voice call over a telecommunication carrier network, but also includes a voice call made through a voice communication application. Such voice communication applications include, but are not limited to, the application shown in Figure BDA0002277682930000221, and may also include other voice communication applications not listed herein, such as those shown in Figure BDA0002277682930000222 and Figure BDA0002277682930000223.
Fig. 11 illustrates a mobile device 1100 provided herein. By way of example, the mobile device 1100 includes at least one processor 1110, a memory 1120, and a touch screen 1130. The processor 1110 is coupled to the memory 1120 and the touch screen 1130; in this embodiment, the coupling may be a communication connection, an electrical connection, or another form.
Specifically, the memory 1120 is used to store program instructions, the touch screen 1130 is used to display a user interface, and the processor 1110 is configured to call the program instructions stored in the memory 1120 so that the mobile device 1100 performs the steps performed by the mobile device in the false touch prevention method provided by the embodiments of the present application. It should be understood that the mobile device 1100 may be used to implement the false touch prevention method provided in the embodiments of the present application; for related features, reference may be made to the description above, which is not repeated here.
On the basis of the false touch prevention method provided in any one of the possible implementations of the first to fourth embodiments, a fifth embodiment further provides a mobile device that at least includes: a memory, one or more processors, a plurality of applications, and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the one or more processors execute the one or more computer programs, the mobile device implements the false touch prevention method in any one of the possible implementations of the first to fourth embodiments.
On the basis of the false touch prevention method provided in any one of the possible implementations of the first to fourth embodiments, a computer-readable storage medium is further provided. The computer-readable storage medium is disposed on the mobile device provided in the fifth embodiment and stores a false touch prevention program, and the false touch prevention program is used to execute the false touch prevention method provided in any one of the possible implementations of the first to fourth embodiments.
It is clear to a person skilled in the art that the descriptions of the embodiments provided in this application may refer to one another. For convenience and brevity, the functions and steps of the apparatuses and devices provided in the embodiments of this application may refer to the relevant descriptions of the method embodiments, and the method embodiments and the apparatus embodiments may likewise refer to each other.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways without departing from the scope of the application. For example, the above-described embodiments are merely illustrative: the division of the modules or units is only a logical division, and an actual implementation may use another division; multiple units or components may be combined or integrated into another system; or some features may be omitted or not executed.
Additionally, the apparatus and methods described, as well as the illustrations of various embodiments, may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present application. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electronic, mechanical or other form.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto; any change or substitution that a person skilled in the art can easily conceive of within the technical scope disclosed in this application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. A false touch prevention method, applied to a mobile device, characterized by comprising:
when the mobile equipment is in a calling state, a receiving state or a voice message listening state, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment within a preset time after a first event is received, and a touch point or a touch surface of the touch operation is located in a partial area or any area of the touch screen;
within the preset time length, when the mobile equipment receives a second event, the mobile equipment starts a screen-off process and completes the screen-off process within the preset time length;
the first event comprises:
after the mobile device is changed from a static state to a moving state, when the mobile device detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile device and a preset moving curve is greater than or equal to a preset similarity threshold value, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile device is changed from a static state to a moving state, when the mobile device detects that an image acquired by a front camera of the mobile device comprises an ear image of a person, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile equipment is changed from a static state to a moving state, when the mobile equipment detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile equipment and a preset moving curve is greater than or equal to a preset similarity threshold value and an ear image of a person is included in an image obtained by a front camera of the mobile equipment, determining an event generated by the movement of the mobile equipment towards the side face of the head of the user;
the second event comprises:
detecting that the distance between the mobile equipment and the side face of the head of the user is smaller than a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment is greater than or equal to a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the signal is an electromagnetic wave signal or an audio signal;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment at a first position is greater than or equal to the intensity value at a second position, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the first position is farther away from the part of the mobile equipment, which is used for sending the signal, than the second position, and the signal is an electromagnetic wave signal or an audio signal;
alternatively,
determining an event generated when the mobile device approaches the side of the head of the user, when detecting that the area in which the capacitance value of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold value and/or that the capacitance value is greater than or equal to a second preset threshold value.
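The alternative "second event" criteria above (distance below a threshold, reflected-signal intensity above a threshold, or a capacitance change on the touch screen) can be sketched as a single check. All threshold values in this sketch are assumptions chosen for illustration; the patent does not disclose concrete numbers.

```python
# Illustrative sketch of the "second event" (device near the side of the
# user's head) criteria. Every threshold below is an assumed value.

def second_event(distance_cm=None, reflected_intensity=None,
                 cap_changed_area=None, cap_value=None):
    DIST_THRESHOLD = 3.0        # cm, assumed
    INTENSITY_THRESHOLD = 0.7   # normalized reflected intensity, assumed
    AREA_THRESHOLD = 4.0        # cm^2, assumed "first preset threshold"
    CAP_THRESHOLD = 1.5         # pF, assumed "second preset threshold"

    # Any one criterion firing is enough, mirroring the claim's "or".
    if distance_cm is not None and distance_cm < DIST_THRESHOLD:
        return True
    if reflected_intensity is not None and reflected_intensity >= INTENSITY_THRESHOLD:
        return True
    if cap_changed_area is not None and cap_changed_area >= AREA_THRESHOLD:
        return True
    if cap_value is not None and cap_value >= CAP_THRESHOLD:
        return True
    return False
```

Because each criterion maps to a different sensor (proximity sensor, signal transmitter/receiver, capacitive panel), a device can supply whichever measurements its hardware provides and leave the rest as `None`.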
2. The false touch prevention method according to claim 1, wherein the mobile device detecting that the similarity between the movement curve obtained by its own acceleration sensor and the preset movement curve is greater than or equal to a preset similarity threshold comprises: the acceleration sensor has an X axis, a Y axis, and a Z axis that are perpendicular to one another; the movement curve and the preset movement curve each comprise an X-axis curve, a Y-axis curve, and a Z-axis curve; features of the movement curve are determined according to the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, the numbers of peaks and troughs contained in these curves, and the changes of those peaks and troughs; and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve.
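The curve-feature comparison described above can be sketched as follows. The claim names the features (monotonicity, peak and trough counts per axis) but not how they are computed or compared, so the feature extraction and the exact-match comparison below are assumptions for illustration.

```python
# Sketch of per-axis movement-curve features: monotonicity and
# peak/trough counts. The comparison rule (exact feature match) is an
# assumption; the patent only requires a similarity measure.

def peaks_and_troughs(samples):
    peaks = troughs = 0
    for i in range(1, len(samples) - 1):
        if samples[i - 1] < samples[i] > samples[i + 1]:
            peaks += 1
        elif samples[i - 1] > samples[i] < samples[i + 1]:
            troughs += 1
    return peaks, troughs

def monotonic_sign(samples):
    # +1 overall non-decreasing, -1 overall non-increasing, 0 mixed.
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    if all(d >= 0 for d in diffs):
        return 1
    if all(d <= 0 for d in diffs):
        return -1
    return 0

def axis_features(samples):
    p, t = peaks_and_troughs(samples)
    return (monotonic_sign(samples), p, t)

def curves_match(curve, preset):
    # curve and preset are dicts mapping 'x', 'y', 'z' to sample lists.
    return all(axis_features(curve[a]) == axis_features(preset[a])
               for a in ('x', 'y', 'z'))
```

A production implementation would likely tolerate small differences (e.g. compare feature vectors with a distance threshold) rather than require exact equality, which is why the claim speaks of a similarity threshold.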
3. The false touch prevention method according to claim 1 or 2, wherein the partial area comprises: a status bar on an upper portion of the touch screen, and a dial and a navigation bar on a middle-lower portion of the touch screen.
4. The false touch prevention method according to any one of claims 1 to 3,
when the mobile equipment is in a calling state, a receiving state or a voice message listening state, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment within a preset time after a first event is received, and a touch point or a touch surface of the touch operation is located in a partial area or any area of the touch screen;
within the preset time length, when the mobile equipment receives a second event, the mobile equipment starts a screen-off process and completes the screen-off process within the preset time length;
the method comprises the following steps:
when the mobile device is in an outgoing-call state, an incoming-call state, or a voice message listening state, and the mobile device receives a first event, the mobile device uploads the first event to a kernel layer of an application processor of the mobile device; the kernel layer of the application processor does not upload the first event further to a hardware abstraction layer or to a framework layer above the hardware abstraction layer; within a preset time length after the mobile device receives the first event, the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, a touch point or a touch surface of the touch operations being located in a partial area or any area of the touch screen; when the mobile device receives the second event within the preset time length, the mobile device uploads the second event to the kernel layer, the hardware abstraction layer, and the framework layer of the application processor in sequence; a screen-off process is started in the framework layer and is then delivered downward from the framework layer to the hardware abstraction layer and the kernel layer of the application processor in sequence; and the screen-off process is completed within the preset time length.
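The layered event routing in claim 4 can be sketched as follows: the kernel layer consumes the first event without forwarding it upward and blocks touch handling, while the second event travels kernel → HAL → framework, where the screen-off process starts and is delivered back down. The layer names come from the claim; the class and its trace format are assumptions for illustration.

```python
# Minimal sketch (not the actual implementation) of claim 4's routing.
# "first" is consumed in the kernel layer; "second" traverses the full
# stack upward and the screen-off command traverses it downward.

class LayeredStack:
    def __init__(self):
        self.touch_blocked = False
        self.trace = []

    def deliver(self, event):
        self.trace.append(f"kernel:{event}")
        if event == "first":
            # Consumed here; not uploaded to the HAL or framework layer.
            self.touch_blocked = True
        elif event == "second":
            self.trace.append(f"hal:{event}")
            self.trace.append("framework:start_screen_off")
            # The screen-off process is delivered downward again.
            self.trace.append("hal:screen_off")
            self.trace.append("kernel:screen_off")

    def on_touch(self):
        # Touch operations are ignored while the block is active.
        return None if self.touch_blocked else "dispatched"
```

Keeping the first-event handling entirely in the kernel layer avoids the round trip through the upper layers, which is what lets the touch block take effect quickly after the phone starts moving toward the head.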
5. A false touch prevention method, applied to a mobile device, characterized by comprising:
when the mobile equipment is in a calling state, a receiving state or a voice message listening state, after a first event is received, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment, a touch point or a touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile equipment does not respond to a second event;
after the calling state, the answering state or the voice message listening state is finished, or after a preset gesture operation and/or a preset side key operation is detected, the mobile equipment responds to touch operation on a partial area or any area of a touch screen of the mobile equipment;
the first event comprises:
after the mobile device is changed from a static state to a moving state, when the mobile device detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile device and a preset moving curve is greater than or equal to a preset similarity threshold value, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile device is changed from a static state to a moving state, when the mobile device detects that an image acquired by a front camera of the mobile device comprises an ear image of a person, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile equipment is changed from a static state to a moving state, when the mobile equipment detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile equipment and a preset moving curve is greater than or equal to a preset similarity threshold value and an ear image of a person is included in an image obtained by a front camera of the mobile equipment, determining an event generated by the movement of the mobile equipment towards the side face of the head of the user;
the second event comprises:
detecting that the distance between the mobile equipment and the side face of the head of the user is smaller than a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment is greater than or equal to a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the signal is an electromagnetic wave signal or an audio signal;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment at a first position is greater than or equal to the intensity value at a second position, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the first position is farther away from the part of the mobile equipment, which is used for sending the signal, than the second position, and the signal is an electromagnetic wave signal or an audio signal;
alternatively,
determining an event generated when the mobile device approaches the side of the head of the user, when detecting that the area in which the capacitance value of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold value and/or that the capacitance value is greater than or equal to a second preset threshold value.
6. The false touch prevention method according to claim 5, wherein the mobile device detecting that the similarity between the movement curve obtained by its own acceleration sensor and the preset movement curve is greater than or equal to a preset similarity threshold comprises: the acceleration sensor has an X axis, a Y axis, and a Z axis that are perpendicular to one another; the movement curve and the preset movement curve each comprise an X-axis curve, a Y-axis curve, and a Z-axis curve; features of the movement curve are determined according to the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, the numbers of peaks and troughs contained in these curves, and the changes of those peaks and troughs; and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve.
7. The false touch prevention method according to claim 5 or 6, wherein the partial area comprises: a status bar on an upper portion of the touch screen, and a dial and a navigation bar on a middle-lower portion of the touch screen.
8. The false touch prevention method according to any one of claims 5 to 7,
when the mobile equipment is in a calling state, a receiving state or a voice message listening state, after a first event is received, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment, a touch point or a touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile equipment does not respond to a second event;
the method comprises the following steps:
when the mobile device is in an outgoing-call state, an incoming-call state, or a voice message listening state, and the mobile device receives a first event, the mobile device uploads the first event to a kernel layer of an application processor of the mobile device; the kernel layer of the application processor does not upload the first event further to a hardware abstraction layer or to a framework layer above the hardware abstraction layer; and after receiving the first event, the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, a touch point or a touch surface of the touch operations being located in a partial area or any area of the touch screen, and the mobile device does not respond to a second event.
9. A false touch prevention method, applied to a mobile device, characterized by comprising:
when the mobile device is in an outgoing state or an incoming state or a voice message listening state, the mobile device does not respond to a first event;
after the mobile equipment receives the second event, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment, and touch points or a touch surface of the touch operation are located in a partial area or any area of the touch screen;
after the calling state, the answering state or the voice message listening state is finished, or after a preset gesture operation and/or a preset side key operation is detected, the mobile equipment responds to touch operation on a partial area or any area of a touch screen of the mobile equipment;
the first event comprises:
after the mobile device is changed from a static state to a moving state, when the mobile device detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile device and a preset moving curve is greater than or equal to a preset similarity threshold value, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile device is changed from a static state to a moving state, when the mobile device detects that an image acquired by a front camera of the mobile device comprises an ear image of a person, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile equipment is changed from a static state to a moving state, when the mobile equipment detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile equipment and a preset moving curve is greater than or equal to a preset similarity threshold value and an ear image of a person is included in an image obtained by a front camera of the mobile equipment, determining an event generated by the movement of the mobile equipment towards the side face of the head of the user;
the second event comprises:
detecting that the distance between the mobile equipment and the side face of the head of the user is smaller than a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment is greater than or equal to a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the signal is an electromagnetic wave signal or an audio signal;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment at a first position is greater than or equal to the intensity value at a second position, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the first position is farther away from the part of the mobile equipment, which is used for sending the signal, than the second position, and the signal is an electromagnetic wave signal or an audio signal;
alternatively,
determining an event generated when the mobile device approaches the side of the head of the user, when detecting that the area in which the capacitance value of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold value and/or that the capacitance value is greater than or equal to a second preset threshold value.
10. The false touch prevention method of claim 9, wherein the partial area comprises: a status bar on an upper portion of the touch screen, and a dial and a navigation bar on a middle-lower portion of the touch screen.
11. The false touch prevention method according to claim 9 or 10,
after the mobile equipment receives a second event, the mobile equipment does not respond to touch operation on a touch screen of the mobile equipment, and touch points or a touch surface of the touch operation are located in a partial area or any area of the touch screen;
the method comprises the following steps:
when the mobile device receives a second event, the mobile device uploads the second event to a kernel layer of an application processor of the mobile device; the kernel layer of the application processor does not upload the second event further to a hardware abstraction layer or to a framework layer above the hardware abstraction layer; and the kernel layer of the application processor controls the mobile device not to respond to touch operations on the touch screen of the mobile device, a touch point or a touch surface of the touch operations being located in a partial area or any area of the touch screen.
12. A false touch prevention method, applied to a mobile device, characterized by comprising:
when the mobile equipment is in a calling state, a receiving state or a voice message listening state, after receiving a first event, the mobile equipment starts and completes a screen-off process of a touch screen of the mobile equipment, and the mobile equipment does not respond to a second event;
the first event comprises:
after the mobile device is changed from a static state to a moving state, when the mobile device detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile device and a preset moving curve is greater than or equal to a preset similarity threshold value, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile device is changed from a static state to a moving state, when the mobile device detects that an image acquired by a front camera of the mobile device comprises an ear image of a person, determining an event generated by the movement of the mobile device towards the side of the head of a user;
alternatively,
after the mobile equipment is changed from a static state to a moving state, when the mobile equipment detects that the similarity between a moving curve obtained by an acceleration sensor of the mobile equipment and a preset moving curve is greater than or equal to a preset similarity threshold value and an ear image of a person is included in an image obtained by a front camera of the mobile equipment, determining an event generated by the movement of the mobile equipment towards the side face of the head of the user;
the second event comprises:
detecting that the distance between the mobile equipment and the side face of the head of the user is smaller than a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment is greater than or equal to a preset threshold value, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the signal is an electromagnetic wave signal or an audio signal;
alternatively,
detecting that the intensity value of a reflected signal of a self-sent signal received by the mobile equipment at a first position is greater than or equal to the intensity value at a second position, and determining an event generated when the mobile equipment approaches the side face of the head of the user, wherein the first position is farther away from the part of the mobile equipment, which is used for sending the signal, than the second position, and the signal is an electromagnetic wave signal or an audio signal;
alternatively,
determining an event generated when the mobile device approaches the side of the head of the user, when detecting that the area in which the capacitance value of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold value and/or that the capacitance value is greater than or equal to a second preset threshold value.
13. The false touch prevention method according to claim 12, wherein the mobile device detecting that the similarity between the movement curve obtained by its own acceleration sensor and the preset movement curve is greater than or equal to a preset similarity threshold comprises: the acceleration sensor has an X axis, a Y axis, and a Z axis that are perpendicular to one another; the movement curve and the preset movement curve each comprise an X-axis curve, a Y-axis curve, and a Z-axis curve; features of the movement curve are determined according to the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, the numbers of peaks and troughs contained in these curves, and the changes of those peaks and troughs; and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve.
14. The false touch prevention method according to claim 12 or 13, wherein the partial area comprises: a status bar on an upper portion of the touch screen, and a dial and a navigation bar on a middle-lower portion of the touch screen.
15. The method according to any one of claims 12 to 14, wherein after receiving the first event, the mobile device starts and completes a screen turn-off process of the touch screen of the mobile device, including:
when the first event is received, the mobile device uploads the first event to the kernel layer, the hardware abstraction layer, and the framework layer of the application processor of the mobile device in sequence; the screen-off process is started in the framework layer and is then delivered downward from the framework layer to the hardware abstraction layer and the kernel layer of the application processor in sequence; and the screen-off process is completed.
16. A mobile device comprising at least: a memory, one or more processors, one or more applications, and one or more computer programs; wherein the one or more computer programs are stored in the memory; and when the one or more processors execute the one or more computer programs, the mobile device is caused to implement the false touch prevention method of any one of claims 1-15.
17. A computer-readable storage medium comprising instructions that, when run on the mobile device of claim 16, cause the mobile device to perform the false touch prevention method of any one of claims 1-15.
CN201911128761.4A 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium Pending CN112817512A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911128761.4A CN112817512A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium
PCT/CN2020/129080 WO2021098644A1 (en) 2019-11-18 2020-11-16 Inadvertent touch prevention method, mobile device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911128761.4A CN112817512A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112817512A (en) 2021-05-18

Family

ID=75852732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911128761.4A Pending CN112817512A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112817512A (en)
WO (1) WO2021098644A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022242211A1 (en) * 2021-05-21 2022-11-24 荣耀终端有限公司 Display screen control method and electronic device

Citations (5)

Publication number Priority date Publication date Assignee Title
US20120293551A1 (en) * 2011-05-19 2012-11-22 Qualcomm Incorporated User interface elements augmented with force detection
CN103941994A (en) * 2013-01-23 2014-07-23 中兴通讯股份有限公司 Sensing screen locking method and device of touch screen
CN105988580A (en) * 2015-04-28 2016-10-05 乐视移动智能信息技术(北京)有限公司 Screen control method and device of mobile terminal
US20170048379A1 (en) * 2014-04-24 2017-02-16 Kyocera Corporation Mobile electronic device, control method, and non-transitory storage medium
CN109582197A (en) * 2018-11-30 2019-04-05 北京小米移动软件有限公司 Screen control method, device and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107395897A (en) * 2017-08-24 2017-11-24 惠州Tcl移动通信有限公司 Mobile terminal goes out control method, storage device and the mobile terminal of screen
CN109257505B (en) * 2018-11-07 2021-06-29 维沃移动通信有限公司 Screen control method and mobile terminal
CN109756623A (en) * 2018-12-28 2019-05-14 Oppo广东移动通信有限公司 control method, control device, electronic device and storage medium

Also Published As

Publication number Publication date
WO2021098644A1 (en) 2021-05-27

Similar Documents

Publication Publication Date Title
CN110989852B (en) Touch screen, electronic equipment and display control method
CN113645351A (en) Application interface interaction method, electronic device and computer-readable storage medium
CN112740152B (en) Handwriting pen detection method, handwriting pen detection system and related device
CN110742580A (en) Sleep state identification method and device
CN111124201A (en) One-hand operation method and electronic equipment
WO2020056778A1 (en) Method for shielding touch event, and electronic device
CN112637758B (en) Equipment positioning method and related equipment thereof
CN113805487B (en) Control instruction generation method and device, terminal equipment and readable storage medium
CN113395382B (en) Method for data interaction between devices and related devices
CN110572866B (en) Management method of wake-up lock and electronic equipment
CN113691271B (en) Data transmission method and wearable device
CN112334860A (en) Touch method of wearable device, wearable device and system
CN115589051B (en) Charging method and terminal equipment
CN114090102B (en) Method, device, electronic equipment and medium for starting application program
CN112087649B (en) Equipment searching method and electronic equipment
CN111580671A (en) Video image processing method and related device
CN113448482A (en) Sliding response control method and device of touch screen and electronic equipment
CN112684969A (en) Always displaying method and mobile device
CN112817512A (en) False touch prevention method, mobile device and computer readable storage medium
CN117093068A (en) Vibration feedback method and system based on wearable device, wearable device and electronic device
CN115206308A (en) Man-machine interaction method and electronic equipment
CN114116610A (en) Method, device, electronic equipment and medium for acquiring storage information
CN114077519A (en) System service recovery method and device and electronic equipment
CN112311376A (en) Charge detection circuit, pressure detection method and terminal equipment
CN113672454B (en) Screen freezing monitoring method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination