CN112817512B - False touch prevention method, mobile device and computer readable storage medium - Google Patents


Info

Publication number
CN112817512B
Authority
CN
China
Prior art keywords
mobile device
preset
event
curve
user
Prior art date
Legal status
Active
Application number
CN201911128761.4A
Other languages
Chinese (zh)
Other versions
CN112817512A (en)
Inventor
翟海鹏
肖啸
李�杰
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN201911128761.4A priority Critical patent/CN112817512B/en
Priority to CN202410468591.9A priority patent/CN118426615A/en
Priority to PCT/CN2020/129080 priority patent/WO2021098644A1/en
Publication of CN112817512A publication Critical patent/CN112817512A/en
Application granted granted Critical
Publication of CN112817512B publication Critical patent/CN112817512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of intelligent control, and in particular to false-touch prevention control technology for mobile devices. In a false-touch prevention method applied to a mobile device, when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, the mobile device does not respond, within a preset duration after receiving a first event, to touch operations on its touch screen whose touch point or touch surface lies in a partial area or any area of the touch screen; then, within the preset duration, when the mobile device receives a second event, the mobile device starts a screen-off process and completes that process before the preset duration expires. The technical solution provided by the application reduces or even avoids cases where, when a user holds the mobile device and brings it close to the side of the head to make a call or listen to a voice message, the screen turns off too late or the user's gesture is nonstandard and the side of the head falsely touches the touch screen, thereby improving user experience.

Description

False touch prevention method, mobile device and computer readable storage medium
Technical Field
The application relates to the field of intelligent control, and in particular to false-touch prevention control technology for mobile devices within that field.
Background
After a user picks up a mobile device to make a call, answer a call, or listen to a voice message, the mobile device automatically detects whether it is close to the side of the user's head. When proximity is detected, the screen is turned off to avoid unintended operations caused by false touches from the side of the user's head. When the mobile device moves away, the screen is turned back on and normal use resumes. In practice, however, the screen is sometimes not turned off in time when the mobile device is already near the side of the user's head, so the side of the user's head falsely touches the touch screen. A reliable false-touch prevention experience is therefore needed.
Disclosure of Invention
The application provides a false-touch prevention method, a mobile device, and a computer-readable storage medium, to solve the technical problem that, when a user uses a mobile device in the above scenario, the screen of the mobile device is not turned off in time.
In a first aspect, a false-touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, within a preset duration after receiving a first event, the mobile device does not respond to a touch operation on its touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second event, the mobile device starts a screen-off process and completes the screen-off process within the preset duration. The first event includes: after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear.
The second event includes: an event determined to indicate that the mobile device is approaching the side of the user's head, when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of a reflected signal, received by the mobile device, of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to the intensity at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
In this way, by setting in advance that the mobile device does not respond to touch operations on any area or a partial area of its touch screen, the non-response handling of the touch screen is performed in time; and because the screen-off process is completed within the preset duration counted from receipt of the first event, the screen is off even after the non-response setting expires, which reduces or even avoids false touches on the mobile device before the screen turns off.
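The four alternative conditions for the second event can be sketched as a single predicate. This is an illustrative sketch only: the patent calls every threshold "preset" without fixing a value, so the constants and parameter names below are assumptions, not values from the application.

```python
# Illustrative thresholds; the patent only describes them as "preset".
DISTANCE_THRESHOLD_CM = 3.0
REFLECTION_THRESHOLD = 0.6
CAP_AREA_THRESHOLD = 500.0
CAP_VALUE_THRESHOLD = 0.8

def is_second_event(distance_cm=None, reflection_intensity=None,
                    reflection_near=None, reflection_far=None,
                    cap_changed_area=None, cap_value=None):
    """Return True if any of the four alternative conditions for the
    second event (device close to the side of the user's head) holds.
    Any sensor reading that is unavailable is passed as None."""
    # 1) Measured distance below a preset threshold.
    if distance_cm is not None and distance_cm < DISTANCE_THRESHOLD_CM:
        return True
    # 2) Reflected intensity of a self-transmitted signal above a threshold.
    if (reflection_intensity is not None
            and reflection_intensity >= REFLECTION_THRESHOLD):
        return True
    # 3) Reflection at the position farther from the transmitter is at
    #    least as strong as at the nearer position.
    if (reflection_far is not None and reflection_near is not None
            and reflection_far >= reflection_near):
        return True
    # 4) Capacitance change area and/or capacitance value above thresholds.
    if ((cap_changed_area is not None
            and cap_changed_area >= CAP_AREA_THRESHOLD)
            or (cap_value is not None
                and cap_value >= CAP_VALUE_THRESHOLD)):
        return True
    return False
```

Because the conditions are alternatives ("or ... or ..."), the first one that holds suffices; in a real device only the sensors actually present would feed this check.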
According to the first aspect, detecting, by the mobile device, that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold includes: the acceleration sensor has mutually perpendicular X, Y, and Z axes; both the movement curve and the preset movement curve include an X-axis curve, a Y-axis curve, and a Z-axis curve; features of the movement curve are determined from the monotonicity of the X-axis, Y-axis, and Z-axis curves and from the number and variation of the peaks and troughs they contain, and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve. In this way, a specific way of obtaining the similarity used to determine whether the first event occurs is provided: once the similarity obtained in this way is greater than or equal to the preset similarity threshold, it can be determined that the mobile device is moving toward the side of the user's head, generating the first event.
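The feature-based comparison described above can be sketched as follows. The patent does not specify an exact feature vector or distance metric, so the choice of features (peak count, trough count, overall trend per axis), the inverse-distance similarity, and the 0.8 threshold are all assumptions for illustration.

```python
import numpy as np

def axis_features(samples):
    """Features of one accelerometer axis: number of peaks, number of
    troughs, and overall monotonic trend (illustrative feature set;
    the patent does not fix an exact definition)."""
    s = np.asarray(samples, dtype=float)
    d = np.diff(s)
    peaks = int(np.sum((d[:-1] > 0) & (d[1:] < 0)))    # rise then fall
    troughs = int(np.sum((d[:-1] < 0) & (d[1:] > 0)))  # fall then rise
    trend = 1 if s[-1] > s[0] else -1 if s[-1] < s[0] else 0
    return np.array([peaks, troughs, trend], dtype=float)

def curve_similarity(curve, preset):
    """Similarity between a measured X/Y/Z movement curve and a preset
    one, as the inverse of the feature distance, normalized to (0, 1]."""
    f1 = np.concatenate([axis_features(curve[a]) for a in "xyz"])
    f2 = np.concatenate([axis_features(preset[a]) for a in "xyz"])
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))

SIMILARITY_THRESHOLD = 0.8  # assumed value; the patent leaves it preset

def is_first_event(curve, preset):
    """First event: measured curve is sufficiently close to the preset
    "raise to ear" curve."""
    return curve_similarity(curve, preset) >= SIMILARITY_THRESHOLD
```

A curve identical to the preset yields similarity 1.0 and triggers the first event, while a flat curve (device held still) falls well below the threshold.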
According to the first aspect, or any implementation of the first aspect, the partial area includes: the status bar at the top of the touch screen, and the dial pad and the navigation bar at the middle-lower part of the touch screen. In this way, the areas of the touch screen most prone to false touches, such as the status bar at the top and the dial pad and navigation bar toward the bottom, are protected.
According to the first aspect, or any implementation of the first aspect, the steps of the method specifically include: when the mobile device, in an outgoing-call state, an answering state, or a voice-message listening state, receives a first event, the mobile device uploads the first event to the kernel layer of its application processor; the kernel layer does not upload the first event through the hardware abstraction layer to the framework layer above it; within the preset duration after the mobile device receives the first event, the kernel layer of the application processor controls the mobile device not to respond to a touch operation on the touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second event, the second event is uploaded in turn to the kernel layer, the hardware abstraction layer, and the framework layer of the application processor; the screen-off process is started in the framework layer, passed down in turn through the hardware abstraction layer to the kernel layer of the application processor, and completed within the preset duration.
In this way, the non-response flow after the first event is received and the screen-off flow after the second event is received are further embodied; viewed more concretely, the non-response flow is both detected earlier and shorter to process, the screen-off process is completed within the preset duration, and false touches on the mobile device before the screen turns off are reduced or even avoided.
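The asymmetric routing of the two events through the software stack can be modeled in miniature. This is a toy sketch, not a real Android or kernel API: the class and method names are invented, the hardware abstraction layer is elided, and the 1.5 s duration is one point in the [1, 2] s range mentioned later.

```python
import time
from enum import Enum, auto

class Event(Enum):
    MOVING_TOWARD_HEAD = auto()  # "first event" in the patent
    NEAR_HEAD = auto()           # "second event" in the patent

class FrameworkLayer:
    """Stand-in for the framework layer that owns the screen-off flow."""
    def __init__(self):
        self.screen_on = True

    def start_screen_off(self):
        # In the patent the flow then travels back down through the HAL
        # and kernel layer to actually switch the panel off.
        self.screen_on = False

class KernelLayer:
    """Toy kernel layer: the first event is consumed here (touch input
    is masked without notifying upper layers), while the second event
    is passed up to the framework to start the screen-off process."""
    PRESET_DURATION_S = 1.5  # assumed; the patent suggests [1, 2] s

    def __init__(self, framework):
        self.framework = framework
        self._mask_until = 0.0

    def on_event(self, event):
        now = time.monotonic()
        if event is Event.MOVING_TOWARD_HEAD:
            # Not uploaded to the framework: just mask touch input
            # for the preset duration.
            self._mask_until = now + self.PRESET_DURATION_S
        elif event is Event.NEAR_HEAD:
            # Uploaded (via the HAL, elided here) to the framework.
            self.framework.start_screen_off()

    def accepts_touch(self):
        return time.monotonic() >= self._mask_until
```

The point of the split is latency: masking touches in the kernel layer is cheap and immediate, so protection is in place even if the framework's screen-off flow takes longer to finish.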
According to the first aspect, or any implementation of the first aspect, the preset duration takes a value in the interval [1, 2], in units of seconds. Thus, a preferred range for the preset duration is given.
According to the first aspect, or any implementation of the first aspect, the outgoing-call state or the answering state includes, respectively, a voice-call state or voice-answering state conducted through a voice communication application, and a voice-call state or voice-answering state conducted through a telecom operator's network. Thus, the scope of the outgoing-call state and the answering state is further clarified.
According to the first aspect, or any implementation of the first aspect, the voice communication application includes, but is not limited to, WeChat and similar applications. In this way, the voice communication application is made concrete.
In a second aspect, a false-touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, after receiving a first event, the mobile device does not respond to a touch operation on its touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile device does not respond to a second event; after the outgoing-call state, the answering state, or the voice-message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds again to touch operations on the partial area or any area of its touch screen. The first event includes: after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear.
The second event includes: an event determined to indicate that the mobile device is approaching the side of the user's head, when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of a reflected signal, received by the mobile device, of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to the intensity at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
In this way, the mobile device is set in advance not to respond to touch operations on any area or a partial area of its touch screen until the corresponding state ends or a specific operation is triggered, and the screen-off process is no longer executed; the non-response handling of the touch screen is performed in time, and false touches on the mobile device are reduced or even avoided.
According to the second aspect, or any implementation of the second aspect, the step in which the mobile device, in an outgoing-call state, an answering state, or a voice-message listening state and after receiving the first event, does not respond to a touch operation on its touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and does not respond to the second event, includes: when the mobile device receives the first event, the mobile device uploads the first event to the kernel layer of its application processor; the kernel layer does not upload the first event through the hardware abstraction layer to the framework layer above it; after the first event is received, the kernel layer of the application processor controls the mobile device not to respond to the touch operation on the touch screen, where the touch point or touch surface is located in a partial area or any area of the touch screen, and the mobile device does not respond to the second event. In this way, the non-response flow after the first event is received is further embodied; viewed more concretely, the non-response flow is both detected earlier and shorter to process, so false touches on the mobile device are reduced or even avoided.
According to the second aspect, or any implementation of the second aspect, while the mobile device is not responding to touch operations on all or part of its screen, the screen remains in the lit state. In this way, the original state of the screen is preserved.
For further implementations of the second aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, which are not repeated here.
In a third aspect, a false-touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, the mobile device does not respond to a first event; after the mobile device receives a second event, the mobile device does not respond to a touch operation on its touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; after the outgoing-call state, the answering state, or the voice-message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds again to touch operations on the partial area or any area of its touch screen. The first event includes: after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear.
The second event includes: an event determined to indicate that the mobile device is approaching the side of the user's head, when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of a reflected signal, received by the mobile device, of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to the intensity at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
In this way, although the mobile device is not set in advance to ignore touch operations on any area or a partial area of its touch screen, the processing time of the non-response flow triggered by the second event is markedly shorter than that of the existing screen-off flow triggered by the second event, so false touches on the mobile device are reduced or even avoided.
According to the third aspect, the step in which, after the mobile device receives the second event, the mobile device does not respond to a touch operation on its touch screen, where a touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, includes: when the mobile device receives the second event, the mobile device uploads the second event to the kernel layer of its application processor; the kernel layer does not upload the second event through the hardware abstraction layer to the framework layer above it; the kernel layer of the application processor controls the mobile device not to respond to the touch operation on the touch screen, where the touch point or touch surface is located in a partial area or any area of the touch screen. In this way, the non-response flow after the second event is received is further embodied; viewed more concretely, its processing time is shorter, so false touches on the mobile device are reduced or even avoided.
For further implementations of the third aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first and second aspects, which are not repeated here.
In a fourth aspect, a false-touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, after receiving a first event, the mobile device starts and completes a screen-off process of its touch screen, and the mobile device does not respond to a second event. The first event includes: after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a stationary state to a moving state, an event determined to indicate that the mobile device is moving toward the side of the user's head, when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image captured by its front camera includes an image of a human ear.
The second event includes: an event determined to indicate that the mobile device is approaching the side of the user's head, when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of a reflected signal, received by the mobile device, of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the intensity of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to the intensity at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic-wave signal or an audio signal; or an event determined to indicate that the mobile device is approaching the side of the user's head, when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold. In this way, although the non-response flow is no longer executed, the mobile device executes the screen-off flow in advance, so the screen-off handling of the touch screen is timely and false touches on the mobile device are reduced or even avoided.
According to the fourth aspect, the step in which, after receiving the first event, the mobile device starts and completes the screen-off process of its touch screen includes: when the first event is received, the mobile device uploads the first event in turn to the kernel layer, the hardware abstraction layer, and the framework layer of its application processor; the screen-off process is started in the framework layer, passed down in turn through the hardware abstraction layer to the kernel layer of the application processor, and completed. In this way, the screen-off flow after the first event is received is further embodied; viewed more concretely, the execution time of the screen-off flow is moved earlier, so the handling occurs sooner and false touches on the mobile device are reduced or even avoided.
For further implementations of the fourth aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, the second aspect, and the third aspect, which are not repeated here.
In a fifth aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing state, an answering state, or a voice-message listening state, the mobile device does not respond, within a preset duration after receiving a first preset event, to a touch operation on its touch screen, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; within the preset duration, when the mobile device receives a second preset event, the mobile device starts a screen-off flow and completes it within the preset duration. The first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
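The timing of the fifth aspect, a no-response window opened by the first preset event, with the screen-off flow started and completed inside that window once the second preset event arrives, can be sketched as a small state machine. This is an assumed illustration; the window length and all names are placeholders, not values from the patent.

```python
# Hypothetical state machine for the fifth-aspect timing. The preset
# duration (here 2.0 s) is an illustrative placeholder.

class AntiFalseTouchController:
    PRESET_WINDOW = 2.0  # seconds; assumed value

    def __init__(self):
        self.window_start = None
        self.screen_on = True

    def on_first_event(self, now: float):
        # First preset event opens the no-response window
        self.window_start = now

    def in_window(self, now: float) -> bool:
        return (self.window_start is not None
                and now - self.window_start <= self.PRESET_WINDOW)

    def on_touch(self, now: float) -> bool:
        # Returns True if the touch is handled, False if suppressed
        return not self.in_window(now)

    def on_second_event(self, now: float):
        # Second preset event inside the window triggers screen-off
        if self.in_window(now):
            self.screen_on = False
```

Touches arriving inside the window are suppressed regardless of whether the second event has arrived yet, which matches the order of conditions in the text.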
According to the fifth aspect, the first preset event is determined as follows: after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and an image captured by its front camera includes an image of a human ear.
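The curve-matching condition above can be sketched with any similarity measure; here cosine similarity stands in for the unspecified metric. The metric, threshold, and function names are assumptions for illustration, not the patent's method.

```python
# Hypothetical sketch of the first-preset-event check: compare the
# measured acceleration curve against a preset "raise to ear" curve,
# optionally AND-ed with an ear-detected flag from the front camera.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length sample sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def first_preset_event(measured_curve, preset_curve,
                       ear_detected=None, threshold=0.9):
    """Motion-only variant if ear_detected is None; otherwise the
    combined motion-plus-camera variant."""
    similar = cosine_similarity(measured_curve, preset_curve) >= threshold
    if ear_detected is None:
        return similar
    return similar and ear_detected
```

The camera-only variant of the text would simply return the `ear_detected` flag on its own.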
According to the fifth aspect, the second preset event is determined as follows: the second preset event is determined when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or, the second preset event is determined when the intensity value of a reflected signal of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the intensity value of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to that received at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
For further implementations of the fifth aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, the second aspect, the third aspect, and the fourth aspect, which are not repeated here.
In a sixth aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing state, an answering state, or a voice-message listening state, after receiving a first preset event, the mobile device does not respond to a touch operation on its touch screen, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen, and the mobile device does not respond to a second preset event; after the outgoing state, the answering state, or the voice-message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds to touch operations on the partial area or any area of its touch screen. The first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
According to the sixth aspect, the first preset event is determined as follows: after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and an image captured by its front camera includes an image of a human ear.
According to the sixth aspect, the second preset event is determined as follows: the second preset event is determined when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or, the second preset event is determined when the intensity value of a reflected signal of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the intensity value of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to that received at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
For further implementations of the sixth aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, the second aspect, the third aspect, the fourth aspect, and the fifth aspect, which are not repeated here.
In a seventh aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing state, an answering state, or a voice-message listening state, the mobile device does not respond to a first preset event; after the mobile device receives a second preset event, the mobile device does not respond to a touch operation on its touch screen, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen; after the outgoing state, the answering state, or the voice-message listening state ends, or after a preset gesture operation and/or a preset side-key operation is detected, the mobile device responds to touch operations on the partial area or any area of its touch screen. The first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
According to the seventh aspect, the first preset event is determined as follows: after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and an image captured by its front camera includes an image of a human ear.
According to the seventh aspect, the second preset event is determined as follows: the second preset event is determined when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or, the second preset event is determined when the intensity value of a reflected signal of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the intensity value of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to that received at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
For further implementations of the seventh aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, the second aspect, the third aspect, the fourth aspect, the fifth aspect, and the sixth aspect, which are not repeated here.
In an eighth aspect, a false touch prevention method is provided. The method is applied to a mobile device and includes: when the mobile device is in an outgoing state, an answering state, or a voice-message listening state, after receiving a first preset event, the mobile device starts and completes a screen-off flow of its touch screen, and the mobile device does not respond to a second preset event. The first preset event is an event of the mobile device moving toward the side of the user's head; the second preset event is an event of the mobile device approaching the side of the user's head.
According to the eighth aspect, the first preset event is determined as follows: after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when it detects that an image captured by its front camera includes an image of a human ear; or, after the mobile device changes from a static state to a moving state, the mobile device determines the first preset event when the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and an image captured by its front camera includes an image of a human ear.
According to the eighth aspect, the second preset event is determined as follows: the second preset event is determined when the distance between the mobile device and the side of the user's head is detected to be smaller than a preset threshold; or, the second preset event is determined when the intensity value of a reflected signal of a signal transmitted by the mobile device itself is detected to be greater than or equal to a preset threshold, where the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the intensity value of the reflected signal received by the mobile device at a first position is detected to be greater than or equal to that received at a second position, where the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal; or, the second preset event is determined when the area over which the capacitance of the capacitive touch screen of the mobile device changes is detected to be greater than or equal to a first preset threshold and/or the capacitance value is detected to be greater than or equal to a second preset threshold.
For further implementations of the eighth aspect and their corresponding technical effects, reference may be made to the corresponding implementations and technical effects of the first aspect, the second aspect, the third aspect, the fourth aspect, the fifth aspect, the sixth aspect, and the seventh aspect, which are not repeated here.
In a ninth aspect, a mobile device is provided. The mobile device comprises at least: a memory, one or more processors, one or more application programs, and one or more computer programs; the one or more computer programs are stored in the memory; and when the one or more processors execute the one or more computer programs, the mobile device is caused to implement the false touch prevention method of any one of the first through eighth aspects and any of their possible implementations.
In addition, for any implementation of the ninth aspect and its corresponding technical effects, reference may be made to the different implementations and corresponding technical effects of the first through eighth aspects, which are not repeated here.
In a tenth aspect, a computer-readable storage medium is provided. The computer-readable storage medium comprises instructions which, when run on a mobile device according to the ninth aspect, cause the mobile device to perform the false touch prevention method of any one of the first through eighth aspects and any of their possible implementations.
In addition, for any implementation of the tenth aspect and its corresponding technical effects, reference may be made to the different implementations and corresponding technical effects of the first through eighth aspects, which are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description relate to only some embodiments of the application, and those of ordinary skill in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a mobile device according to an embodiment of the present application.
Fig. 2 (a) -2 (b) are two schematic views of a scenario in which a user makes a false touch while using a mobile device.
Fig. 3 is a schematic illustration of the screen-off flow when the mobile device receives a second event of approaching a side of the user's head.
Fig. 4 (a) -4 (c) are schematic diagrams of a scenario of a false touch prevention method in which the starting position of the mobile device is 30 cm directly in front of the user's chest, in the first embodiment of the present application.
Fig. 4 (d) is a schematic view of a scenario in which the starting position of the mobile device is 30 cm in front of the user's chest, in the first embodiment of the present application.
Fig. 5 (a) -5 (b) are a schematic diagram and a flowchart of a false touch prevention method according to the first embodiment of the present application.
Fig. 5 (c) is a schematic diagram of the time relationship between the first event and the second event and the flows they trigger, in a false touch prevention method according to an embodiment of the present application; Fig. 5 (d) is a simplified schematic diagram of the time relationship of Fig. 5 (c).
Fig. 6 (a) is a schematic diagram of measured curves of the acceleration components on the X-, Y-, and Z-axes over time as the mobile device moves from a starting position 30 cm directly in front of the user's chest toward the front of the user's head.
Fig. 6 (b) -6 (d) are schematic diagrams of measured curves of the acceleration components on the X-, Y-, and Z-axes over time as the mobile device moves from its starting position to the side of the user's head, for different starting positions (such as 30 cm in front of the user's chest or 30 cm in front of the user's head).
Fig. 7 is a schematic diagram of the areas on the touch screen of a mobile device in the false touch prevention methods provided in the first, second, and third embodiments of the present application and in the mobile device provided in the fifth embodiment of the present application.
Fig. 8 (a) -8 (b) are a schematic diagram and a flowchart of a false touch prevention method according to the second embodiment of the present application.
Fig. 9 (a) -9 (b) are a schematic diagram and a flowchart of a false touch prevention method according to the third embodiment of the present application.
Fig. 10 (a) -10 (b) are a schematic diagram and a flowchart of a false touch prevention method according to the fourth embodiment of the present application.
Fig. 11 is a block diagram of a mobile device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
The method provided by the embodiment of the application can be applied to the mobile device 100 shown in fig. 1. Fig. 1 shows a schematic structure of a mobile device 100.
The mobile device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of an embodiment of the present application does not constitute a particular limitation of the mobile device 100. In other embodiments of the application, the mobile device 100 may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The controller can generate operation control signals according to the instruction operation codes and the timing signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the mobile device 100.
The I2S interface may be used for audio communication. PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. The UART interface is a universal serial data bus for asynchronous communications.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display 194 and the camera 193. The MIPI interfaces include a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the photographing function of the mobile device 100. The processor 110 and the display 194 communicate via the DSI interface to implement the display function of the mobile device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative and not limiting to the structure of the mobile device 100. In other embodiments of the present application, the mobile device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The wireless communication function of the mobile device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the mobile device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication, including 2G/3G/4G/5G, as applied on the mobile device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the mobile device 100, including wireless local area network (WLAN) (e.g., wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 of the mobile device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, such that the mobile device 100 may communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The mobile device 100 implements display functionality through a GPU, a display screen 194, and an application processor, among others. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the mobile device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The mobile device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element.
The digital signal processor is used to process digital signals, and can process other digital signals besides digital image signals. For example, when the mobile device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
Video codecs are used to compress or decompress digital video. The mobile device 100 may support one or more video codecs. In this way, the mobile device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the mobile device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code, including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like. The storage data area may store data created during use of the mobile device 100 (e.g., audio data, a phonebook), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like. The processor 110 performs the various functional applications and data processing of the mobile device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The gyro sensor 180B may be used to determine a motion gesture of the mobile device 100. The air pressure sensor 180C is used to measure air pressure. In some embodiments, the mobile device 100 calculates altitude from barometric pressure values measured by the air pressure sensor 180C, aiding positioning and navigation. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the mobile device 100 in various directions (typically three axes). The distance sensor 180F is used to measure distance. The mobile device 100 may measure distance by infrared or laser. In some embodiments, when photographing a scene, the mobile device 100 can range using the distance sensor 180F to achieve quick focusing. The proximity sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The mobile device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. The temperature sensor 180J is used to detect temperature.
The proximity detection method of the proximity sensor 180G mainly includes:
1. The proximity sensor 180G sends out a signal in real time and receives the reflected signal. When no reflected signal is received, the detection result is not close; when a reflected signal is received, the detection result is close. More precisely, the distance between the obstruction and the mobile device 100 is calculated from the propagation speed and duration of the signal and compared with a preset distance threshold; if the distance is smaller than the threshold, the detection result is close; otherwise, the detection result is not close. The signals include audio signals, ultrasonic signals, infrared signals, and visible light signals.
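The distance comparison in item 1 can be sketched as follows. This is a minimal illustration under stated assumptions: the helper name `detect_proximity`, the round-trip time input, and the 5 cm threshold are hypothetical, since the patent does not specify an implementation.

```python
def detect_proximity(speed_m_s, round_trip_s, threshold_m):
    """Return (is_close, distance) for a reflected signal.

    The signal travels to the obstruction and back, so the one-way
    distance is speed * round_trip / 2.  A missing reflection
    (round_trip_s is None) means the detection result is "not close".
    """
    if round_trip_s is None:
        return False, None
    distance = speed_m_s * round_trip_s / 2.0
    return distance < threshold_m, distance

# Ultrasonic example: sound travels ~343 m/s; a 0.2 ms round trip
# puts the obstruction about 3.4 cm away, under a 5 cm threshold.
is_close, dist = detect_proximity(343.0, 0.0002, threshold_m=0.05)
```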
2. The proximity sensor 180G transmits a signal and receives the reflected signal, and compares the received reflected-signal strength with a preset signal strength threshold; when the strength is greater than or equal to the preset signal strength threshold, the detection result is close; otherwise, the detection result is not close. More precisely, the proximity sensor 180G determines whether proximity is detected based on the relative strengths of the reflected signal received at different receiving positions. Specifically, if the strength of the reflected signal received at the first receiving position of the proximity sensor 180G is greater than or equal to the strength of the reflected signal received at the second receiving position, the detection result is close; otherwise, the detection result is not close. The first receiving position is farther from the signal-transmitting portion of the proximity sensor 180G than the second receiving position. The signals include audio signals, ultrasonic signals, infrared signals, and visible light signals.
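Both decision rules in item 2 can be sketched together; the function names and the sample strength values are illustrative assumptions, not part of the patent.

```python
def close_by_strength(reflected_strength, strength_threshold):
    """Rule 2a: proximity when the reflected-signal strength reaches
    the preset signal strength threshold."""
    return reflected_strength >= strength_threshold

def close_by_position(strength_at_far_rx, strength_at_near_rx):
    """Rule 2b: proximity when the strength at the first receiving
    position (farther from the emitter) is at least the strength at
    the second (nearer) receiving position."""
    return strength_at_far_rx >= strength_at_near_rx

# A nearby face reflects strongly; with a distant obstruction the
# nearer receiving position tends to pick up the stronger reflection.
assert close_by_strength(0.8, 0.5)
assert not close_by_position(0.3, 0.6)
```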
The acceleration sensor 180E detects the acceleration of the mobile device 100 in real time. The mobile device 100 is provided with X, Y and Z axes along its width, length and height directions, respectively. The X, Y and Z axes may also be arranged along other directions of the mobile device 100, as long as they are mutually perpendicular. The acceleration sensor 180E detects the acceleration and its components on the X, Y and Z axes in real time, and from these real-time measurements draws a movement curve of the acceleration over time as well as an X-axis curve, a Y-axis curve and a Z-axis curve of the acceleration components along the X, Y and Z axes over time.
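The per-axis recording described above can be sketched as follows; the class name and sampling interface are assumptions, since the patent does not define one.

```python
import math
from dataclasses import dataclass, field

@dataclass
class MotionRecorder:
    """Accumulates accelerometer samples into per-axis curves plus a
    magnitude ("movement") curve over time."""
    t: list = field(default_factory=list)
    x: list = field(default_factory=list)
    y: list = field(default_factory=list)
    z: list = field(default_factory=list)
    magnitude: list = field(default_factory=list)

    def add_sample(self, timestamp, ax, ay, az):
        self.t.append(timestamp)
        self.x.append(ax)
        self.y.append(ay)
        self.z.append(az)
        # overall acceleration magnitude from its three axis components
        self.magnitude.append(math.sqrt(ax * ax + ay * ay + az * az))

rec = MotionRecorder()
rec.add_sample(0.00, 0.0, 3.0, 4.0)   # |a| = 5.0
rec.add_sample(0.02, 1.0, 0.0, 0.0)   # |a| = 1.0
```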
Fig. 2 (a)-2 (b) are two schematic views of scenarios in which a user makes a false touch while using a mobile device. As shown in fig. 2 (a), when a user answers a call, listens to a voice message or makes a call with the mobile device held against the side of the user's head, and the touch screen of the mobile device has not been deactivated, twisting of the user's head can cause a false touch, which affects normal use. Sometimes the pose in which the user places the mobile device against the side of the head is not standard, such as the pose shown in fig. 2 (b). The proximity sensor 180G is typically disposed near a front camera (not shown) in the upper portion of the mobile device. Thus, when the mobile device is in the posture shown in fig. 2 (b), the proximity sensor 180G cannot acquire sensing data reflecting the actual situation and therefore cannot trigger the screen-off flow, so a false touch is likely to occur. Likewise, when the mobile device determines whether it approaches the side of the user's head through changes of capacitance values on a capacitive screen, false touches caused by untimely screen-off of the touch screen are also likely to occur. It should be noted that fig. 2 (a)-2 (b) show the user holding the mobile device with the right hand; however, they are only for illustration, and the technical problem described above also arises when a user holds the mobile device with the left hand.
The inventors analyzed and found the reason behind the false touch. As shown in fig. 3, the screen-off flow of the mobile device involves a framework layer (FWK), a hardware abstraction layer (HAL), a kernel layer (kernel), and an intelligent sensor hub. The framework layer is the API framework used by the core applications; it provides various interface APIs for the application layer, including various components and services supporting Android development by developers. The hardware abstraction layer is an abstract interface to the device kernel drivers, providing the higher-level Java API framework with application programming interfaces for accessing the underlying devices. The HAL contains a plurality of library modules, each of which implements an interface for a particular type of hardware component. When a framework API requires access to device hardware, the Android system loads the library module for that hardware component. The kernel layer is the basis of the Android system; the final function implementation of the Android system is completed through the kernel. The intelligent sensor hub is a solution, based on a combination of software and hardware, running on a low-power microcontroller unit (MCU) with a lightweight real-time operating system (RTOS); its main function is to connect and process data from various sensor devices. The HAL interface definition language (HIDL) is an interface description language that specifies the interface between the HAL and the FWK.
Fig. 3 is a schematic illustration of the screen-off flow when the mobile device receives a second event on approaching the side of the user's head. In the screen-off flow shown in fig. 3, the mobile device generates a second event when it approaches the side of the user's head; the second event is transmitted to the framework layer in sequence through the intelligent sensor hub, the kernel layer of the application processor (AP), and the hardware abstraction layer; the framework layer then issues the execution instruction of the screen-off flow, which is transmitted through the hardware abstraction layer down to the kernel layer of the application processor, where the screen-off is actually executed. Measurement and experiment show that the time from the moment the mobile device approaches the side of the user's head to the moment the touch screen is completely turned off is generally 200 ms-800 ms. This overall latency is long; as a result, the mobile device does not turn off the touch screen in time.
Embodiment 1
An embodiment of the application provides a false touch prevention method, described with reference to figs. 4 (a)-4 (d) and figs. 5 (c)-5 (d). In the first embodiment of the application, the starting position of the mobile device may be any position. For convenience of explanation, considering the most common scenarios in actual use, the method is elucidated, in connection with figs. 4 (a)-4 (d), for starting positions of the mobile device directly in front of the user's chest (fig. 4 (a)), at the left front of the user's chest (not shown in fig. 4), and at the right front of the user's chest (fig. 4 (d)), respectively.
The scenario shown in fig. 4 (a) is one in which the starting position of the mobile device is directly in front of the user's chest. This is among the most common scenarios when a user uses a mobile device: for example, the user picks up the phone from a position directly in front of the chest while riding the subway or a bus, walking, or sitting. In the scenario shown in fig. 4 (a), the starting position of the mobile device is 30 cm directly in front of the user's chest. The distance of 30 cm is a representative value selected according to adult usage habits and arm length, and is not intended to exclude other values; other distances are within the scope of the application. Distances such as 10 cm, 15 cm, 20 cm, 23 cm, 35 cm or any other value may be chosen according to the user's habits and arm length, and the values may be integers or decimals, such as 26.5 cm. Of course, the direction of the starting position of the mobile device is not limited to directly in front of the user's chest; it may also be, for example, the left front or right front of the chest. The distance at the left front or right front of the chest may likewise be 30 cm, one of the values above, or any other value, again chosen according to the user's habits, arm length and other factors, and may be an integer or a decimal.
Taking the scenario shown in fig. 4 (a) as an example, the starting position of the mobile device is 30 cm directly in front of the user's chest, and the movement trace of the mobile device is shown in fig. 4 (b). First, with the mobile device at starting position 1, the user makes a call, receives a call, or listens to a voice message, clicking the dial-out, answer, or play button. The user then holds the mobile device and moves it from starting position 1 towards the side of the user's head until it approaches the side of the head for talking or listening to the voice message. When the mobile device switches from the stationary state at starting position 1 to the moving state and reaches position 2, the mobile device receives a first event and triggers the non-response flow; after the non-response flow is executed, the mobile device does not respond to touch operations on its touch screen for a preset duration, where the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen. Since the processing duration of the non-response flow is very short relative to that of the screen-off flow, it is negligible. The processing duration of the non-response flow is the duration from the mobile device receiving the first event until the non-response flow has been executed; it does not include the preset duration. Thus, the mobile device does not respond to touch operations on its touch screen within the preset duration after receiving the first event. The mobile device continues to move from position 2; when it reaches position 3, the mobile device receives a second event and triggers execution of the screen-off flow.
At position 3, the mobile device is merely near the side of the user's head; it does not contact the side of the user's face or of the user's head, such as the ears. The mobile device completes the screen-off after the processing duration of the screen-off flow. The partial area includes: the status bar at the upper part of the touch screen, the dial pad at the middle and lower part of the touch screen, and the navigation bar.
The first event includes, but is not limited to: after the mobile device is switched from a static state to a moving state, determining an event generated by the mobile device moving towards the side surface of the head of a user when the mobile device detects that the similarity of a moving curve obtained through an acceleration sensor of the mobile device and a preset moving curve is greater than or equal to a preset similarity threshold value; and/or after the mobile device is switched from the static state to the moving state, determining an event generated by moving the mobile device towards the side face of the head of the user when the mobile device detects that the image acquired by the front camera of the mobile device comprises an ear image of a person.
The second event includes, but is not limited to: detecting that the distance between the mobile device and the side of the user's head is smaller than a preset threshold, and determining that an event is generated by the mobile device approaching the side of the user's head; or detecting that the intensity value of a reflected signal, received by the mobile device, of a signal transmitted by itself is greater than or equal to a preset threshold, and determining that an event is generated by the mobile device approaching the side of the user's head, wherein the signal is an electromagnetic wave signal or an audio signal; or detecting that the intensity value of the reflected signal received by the mobile device at a first position is greater than or equal to the intensity value received at a second position, and determining that an event is generated by the mobile device approaching the side of the user's head, wherein the first position is farther than the second position from the portion of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal; or detecting that the area over which the capacitance values of the capacitive touch screen of the mobile device change is greater than or equal to a first preset threshold and/or that a capacitance value is greater than or equal to a second preset threshold, and determining that an event is generated by the mobile device approaching the side of the user's head.
Whether the first event occurs is determined by the intelligent sensor hub based on sensing data transmitted by sensors, including but not limited to an acceleration sensor and/or a gyroscope disposed on the mobile device. For simplicity, only the acceleration sensor is described as an example. The mobile device is provided with X, Y and Z axes along its width, length and height directions, respectively; the axes may also be arranged along other directions of the mobile device, as long as they are mutually perpendicular. The acceleration sensor detects the acceleration and its components on the X, Y and Z axes in real time, and from these real-time measurements draws a movement curve of the acceleration over time and/or an X-axis curve, a Y-axis curve and a Z-axis curve of the acceleration components over time. As the user uses the mobile device, the device records data representing the movement curve of acceleration over time and/or data representing the X-axis, Y-axis and Z-axis curves of the acceleration components over time, based on the acceleration detected in real time. Before the mobile device leaves the factory, it is preset with data representing a preset movement curve of acceleration over time and/or data representing a preset X-axis curve, a preset Y-axis curve and a preset Z-axis curve of the acceleration components over time. That is, the movement curve can be decomposed into an X-axis curve, a Y-axis curve and a Z-axis curve, and the preset movement curve can be decomposed into a preset X-axis curve, a preset Y-axis curve and a preset Z-axis curve.
The movement curve is compared with the preset movement curve for similarity; when the similarity between them is greater than or equal to a preset similarity threshold, the curves are considered similar, and the first event is generated.
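The patent does not fix a similarity metric or threshold value. As one plausible sketch, cosine similarity between two equally sampled curves can stand in for the comparison; the function names, the sample curves, and the 0.9 threshold are all assumptions.

```python
import math

def curve_similarity(curve, preset):
    """Cosine similarity between two equal-length sampled curves,
    in [-1, 1]; 1 means identical shape up to positive scaling."""
    dot = sum(a * b for a, b in zip(curve, preset))
    na = math.sqrt(sum(a * a for a in curve))
    nb = math.sqrt(sum(b * b for b in preset))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def first_event_occurs(curve, preset, threshold=0.9):
    """Generate the first event when the curves are similar enough."""
    return curve_similarity(curve, preset) >= threshold

preset = [0.0, 1.0, 2.5, 2.0, 0.5]     # hypothetical preset curve
measured = [0.0, 1.1, 2.4, 2.1, 0.4]   # a close real measurement
```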
Further, to improve the accuracy of the comparison, the similarity result between the movement curve and the preset movement curve can be obtained from the individual similarity results of the X-axis, Y-axis and Z-axis curves against the preset X-axis, preset Y-axis and preset Z-axis curves, respectively. For example, the similarity between the movement curve and the preset movement curve may be deemed greater than or equal to the preset similarity threshold only if all three pairs (X-axis curve vs. preset X-axis curve, Y-axis curve vs. preset Y-axis curve, Z-axis curve vs. preset Z-axis curve) have similarity greater than or equal to the preset similarity threshold. Of course, other options are possible: for example, it may suffice that any two of the three pairs have similarity greater than or equal to the preset similarity threshold. These variations can be adjusted according to the specific use case.
In addition, the preset movement curve can be corrected and adjusted by statistical means during the user's use. For example, after it is determined that the similarity between a movement curve and the preset movement curve is greater than or equal to the preset similarity threshold, that qualifying movement curve is superimposed onto the preset movement curve to form a new preset movement curve. Qualifying movement curves may be superimposed as soon as one occurs, or accumulated to a certain number of occurrences and then superimposed; that number may be 2, 3, 4, etc., and can be set as needed. On the superimposed preset movement curve, values at the same time point that are abnormal or deviate significantly are discarded, while values at the same time point that are close to each other are replaced by their average or median. Through continuous correction and adjustment, the preset movement curve continuously approaches an ideal curve.
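A point-wise sketch of this correction follows; the deviation limit and the keep-or-average policy are assumptions, since the patent only requires discarding outliers and averaging close values.

```python
def refine_preset(preset, qualifying, deviation_limit):
    """Superimpose one qualifying movement curve onto the preset curve.

    At each time point: if the new value deviates from the preset value
    by more than deviation_limit, it is treated as abnormal and
    discarded (the preset value is kept); otherwise the two close
    values are replaced by their average.
    """
    return [p if abs(q - p) > deviation_limit else (p + q) / 2.0
            for p, q in zip(preset, qualifying)]

preset = [1.0, 1.0, 1.0]
# the spike at the second point is discarded; the others are averaged
new_preset = refine_preset(preset, [1.2, 5.0, 0.8], deviation_limit=1.0)
```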
When determining the similarity between the movement curve and the preset movement curve, features of both curves may be acquired and compared, and the first event determined based on the comparison. For the movement curve, the acquired features include: the monotonicity changes of the X-axis, Y-axis and Z-axis curves, and the number of peaks and troughs and their variation in each of these curves. For the preset movement curve, the acquired features include: the monotonicity changes of the preset X-axis, preset Y-axis and preset Z-axis curves, and the number of peaks and troughs and their variation in each. The features listed above are merely exemplary and do not limit the scope of the acquired features; other features that can be used to identify a curve are also within that scope.
The specific manner of acquisition is further illustrated using the exemplary features above. The features of the movement curve and the preset movement curve may be acquired as follows: first, the time on the abscissa of the movement curve and of the preset movement curve is divided into a plurality of time segments; the portion of each curve in each time segment is expressed by an approximating function; the approximating function in each time segment is differentiated, and the monotonicity in that segment (monotonically increasing, monotonically decreasing, increasing then decreasing, decreasing then increasing, etc.) is obtained from the derivative. From these monotonicity changes, the transition points at which the curve first monotonically increases and then monotonically decreases, and those at which it first monotonically decreases and then monotonically increases, can be obtained; the former are peaks and the latter are troughs, so the number of peaks and troughs and their variation can be obtained. This specific manner is merely an exemplary illustration of feature acquisition and does not limit the scope of feature acquisition; other ways of obtaining the features are within the scope of the embodiments of the application.
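Using the sign of the discrete derivative in place of the per-segment approximating-function derivative, the peak/trough step can be sketched as below. This is a simplification under stated assumptions: flat segments are ignored, and the function name is hypothetical.

```python
def peaks_and_troughs(samples):
    """Indices of transition points in a sampled curve: a peak is a
    switch from monotonically increasing to decreasing, a trough the
    reverse, judged from the sign of consecutive differences."""
    peaks, troughs = [], []
    for i in range(1, len(samples) - 1):
        left = samples[i] - samples[i - 1]     # slope entering point i
        right = samples[i + 1] - samples[i]    # slope leaving point i
        if left > 0 and right < 0:
            peaks.append(i)
        elif left < 0 and right > 0:
            troughs.append(i)
    return peaks, troughs

# one peak at index 2 and one trough at index 4
p, t = peaks_and_troughs([0.0, 1.0, 2.0, 1.0, 0.0, 1.0])
```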
Whether the second event occurs may be determined by the intelligent sensor hub based on data detected by the proximity sensor, in the following specific ways. 1. The proximity sensor of the mobile device sends out signals in real time and receives the reflected signals; when the proximity sensor receives no reflected signal, the intelligent sensor hub determines the detection result as not close and does not generate the second event; when a reflected signal is received, the intelligent sensor hub determines the detection result as close and generates the second event. More precisely, the distance between the obstruction and the mobile device is calculated from the propagation speed and duration of the signal and compared with a preset distance threshold; if the distance is smaller than the threshold, the intelligent sensor hub determines the detection result as close and generates the second event; otherwise, it determines the result as not close and does not generate the second event. The signals include audio signals, ultrasonic signals, infrared signals, and visible light signals. 2. The proximity sensor of the mobile device transmits a signal and receives the reflected signal, and compares the received reflected-signal strength with a preset signal strength threshold; when the strength is greater than or equal to the threshold, the intelligent sensor hub determines the detection result as close and generates the second event; otherwise, it determines the result as not close and does not generate the second event.
More precisely, the proximity sensor of the mobile device determines proximity based on the relative strengths of the reflected signal received at different receiving positions, and the intelligent sensor hub determines whether to generate the second event accordingly. Specifically, if the strength of the reflected signal received at the first receiving position of the proximity sensor is greater than or equal to the strength received at the second receiving position, the detection result is close and the intelligent sensor hub generates the second event; otherwise, the detection result is not close and the intelligent sensor hub does not generate the second event. The first receiving position is farther from the signal-transmitting portion of the proximity sensor than the second receiving position. The signals include audio signals, ultrasonic signals, infrared signals, and visible light signals.
Whether the second event occurs may also be determined from the change of capacitance values of the capacitive touch screen of the mobile device as it approaches the user's face. Specifically, the detection result is determined as close or not close by detecting whether the area over which the capacitance values of the capacitive touch screen change is greater than or equal to a first preset threshold and/or whether a capacitance value is greater than or equal to a second preset threshold. If the detection result is close, the intelligent sensor hub generates the second event; if the detection result is not close, the intelligent sensor hub does not generate the second event.
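A sketch of the capacitance rule, treating the screen as a flat list of cell readings; the cell grid, the baseline values, and all three thresholds are illustrative assumptions.

```python
def second_event_from_capacitance(cells, baseline, delta,
                                  area_threshold, value_threshold):
    """Proximity when the number of cells whose capacitance changed by
    more than `delta` (the "area" of change) reaches area_threshold,
    and/or any capacitance value reaches value_threshold."""
    changed_area = sum(1 for c, b in zip(cells, baseline)
                       if abs(c - b) > delta)
    return changed_area >= area_threshold or max(cells) >= value_threshold

baseline = [10, 10, 10, 10]
face_near = [25, 24, 26, 10]    # broad change across three cells
event = second_event_from_capacitance(face_near, baseline, delta=5,
                                      area_threshold=3, value_threshold=100)
```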
As shown in fig. 5 (a), the mobile device is based on the Android system. After the intelligent sensor hub generates the first event, it uploads the first event to the Sensor Driver in the kernel layer of the application processor (AP), and the Sensor Driver then starts a non-response flow such as a freezing flow. Specifically, the Sensor Driver sends a non-response instruction, such as a freeze instruction, to the TouchScreen Driver, so that the mobile device does not respond to touch operations on its touch screen within the preset duration T after receiving the first event; the touch point or touch surface of the touch operation is located in a partial area or any area of the touch screen. The partial area includes: the status bar at the upper part of the touch screen, the dial pad at the middle and lower part of the touch screen, and the navigation bar. After the intelligent sensor hub generates a second event, it uploads the second event to the Sensor Driver of the AP kernel layer; from there the second event is uploaded to the Sensor Hidl service of the hardware abstraction layer (HAL), further uploaded to the SensorManager of the framework (FWK) layer, and then transferred by the SensorManager to the PowerManager of the FWK layer. Finally, the PowerManager of the FWK layer starts the screen-off flow and sends a screen-off instruction; the instruction is transmitted to the touch screen through the hardware composer (HWC) of the HAL, and after receiving the screen-off instruction, the touch screen executes the screen-off.
In summary, the steps and links involved in the non-response flow are simplified relative to those of the screen-off flow; the processing duration of the non-response flow is therefore significantly shorter than that of the screen-off flow. The processing duration of the non-response flow is the duration from the mobile device receiving the first event until the non-response flow has been executed; it does not include the preset duration. The freezing flow and freeze instruction above are merely one specific form of the non-response flow and non-response instruction, and are not intended to limit their scope: any method that interrupts the link between a touch point on the touch screen of the mobile device and its subsequent processing is an implementation of the non-response flow or non-response instruction, and is intended to fall within the scope of the embodiments of the application.
On the basis of fig. 4 (b), the first embodiment of the present application is further explained with reference to fig. 5 (c). When the mobile device is at position 2, the mobile device receives the first event; this moment is denoted t0, and the mobile device starts the non-response flow. Within the preset duration T after the non-response flow has been executed, the mobile device does not respond to touch operations on its touch screen; the touch points of such touch operations are located in a partial area or any area of the touch screen. T0 is the execution duration of the non-response flow, namely the duration from the start of the non-response flow to the completion of its execution. Measurement and experiment show that T0 is less than 20 ms. The preset duration T may be set to any duration between 1 s and 2 s, such as 1.5 s, while satisfying the requirement of the first embodiment. The mobile device receives the second event when the mobile device is at position 3; this moment is denoted t1, and the mobile device starts the screen-off flow. T1 is the execution duration of the screen-off flow, namely the duration from the start of the screen-off flow to the completion of its execution. Measurement and experiment show that 200 ms ≤ T1 ≤ 800 ms. At time t1 + T1, the screen-off flow has been executed and the screen-off is complete. As shown in fig. 5 (c), t0 < t0 + T0 < t1 < t1 + T1 ≤ t0 + T < t0 + T0 + T. Since T0 is negligibly small relative to T and T1, it can be omitted. A simplified diagram of the relevant moments and durations after this omission is shown in fig. 5 (d). The relation between the relevant moments and durations simplifies to: t0 < t1 < t1 + T1 ≤ t0 + T.
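The timing relations above can be illustrated with a small sketch. The concrete values below are assumptions chosen within the ranges stated in the embodiment (T0 < 20 ms, 200 ms ≤ T1 ≤ 800 ms, T between 1 s and 2 s); the moment t1 is likewise assumed for illustration.

```python
# Illustrative check of the timing relations in the first embodiment.
# All times are in milliseconds; values are assumptions within the
# ranges given in the text, not values mandated by the embodiment.

T0 = 15      # execution duration of the non-response flow (< 20 ms)
T = 1500     # preset non-response duration (between 1 s and 2 s)
T1 = 500     # execution duration of the screen-off flow (200..800 ms)

t0 = 0       # first event received, non-response flow starts
t1 = 900     # second event received, screen-off flow starts (assumed)

def covered_by_freeze(t, start=t0, window=T):
    """True if time t falls inside the non-response window [t0, t0 + T]."""
    return start <= t <= start + window

# The screen-off flow must complete inside the non-response window:
# t0 < t1 < t1 + T1 <= t0 + T (simplified relation, T0 neglected).
assert t0 < t1 < t1 + T1 <= t0 + T
```

With these assumed values, the screen-off completes at t1 + T1 = 1400 ms, safely inside the 1500 ms freeze window, so no touch is ever acted upon.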
In this way, after the mobile device is at position 2 and has started the non-response flow, the mobile device does not respond to touch operations on its touch screen within the preset duration T; the touch points of such touch operations are located in a partial area or any area of the touch screen. The partial area includes: the status bar at the upper part of the touch screen, and the dial and the navigation bar at the middle-lower part of the touch screen. In addition, the screen-off flow is also executed within the preset duration T, that is, within the period from t0 to t0 + T. Thus, from t0 to t0 + T, the mobile device does not respond to touch operations on its touch screen, and even if a false touch occurs within the preset duration T, no unexpected operation results from it. After t0 + T, since the mobile device has already completed the screen-off, a false touch likewise produces no unexpected operation. Therefore, from t0 onward, no unexpected operation is produced by a false touch.
In the embodiment of the present application, a non-response flow is started earlier than the screen-off flow, the processing duration of the non-response flow is greatly shortened relative to that of the screen-off flow, and the screen-off flow is executed before the preset duration following the non-response flow has elapsed. This reduces or even avoids false touches on the mobile device, solves the technical problem that the touch screen is easily false-touched by the side of the user's head because the mobile device does not turn off the touch screen in time, and improves the user experience.
In the first embodiment of the present application, the starting position of the mobile device may also be a position to the right front of the user's chest or a position to the left front of the user's chest. Fig. 4 (d) shows a scenario in which the mobile device is located at a position to the right front of the user's chest. Although a starting position to the left front of the user's chest is not shown in the drawings of the specification, it is readily conceivable to a person skilled in the art. Different starting positions of the mobile device lead to different movement tracks as the mobile device moves from the starting position to the side of the user's head, and the time curves of the acceleration components of the mobile device on the X-axis, Y-axis, and Z-axis vary accordingly. Although the movement curves acquired by the mobile device from different starting positions differ, they exhibit the same basic patterns. For ease of comparison, the position above is uniformly chosen to be 30 cm from the chest, such as 30 cm directly in front of the user's chest, 30 cm to the left front of the user's chest, and 30 cm to the right front of the user's chest. The distance of 30 cm is a representative value chosen according to the habits and arm length of adults and is not intended to exclude other values; other distance values also fall within the scope of embodiments of the present application. The values may be integers or decimals, such as 22.8 cm.
Directly in front of the user's chest is the direction perpendicular to the user's chest; the left front and the right front of the user's chest are, respectively, the directions at a 45-degree angle between the directly-forward direction and the user's left, and between the directly-forward direction and the user's right.
Although in fig. 4 (a) -4 (d) the user holds the mobile device by the right hand and moves the mobile device to a position near the right side of the user's head; it will be appreciated by those skilled in the art that the foregoing is merely illustrative and that a user may hold the mobile device by the left hand and move to a position proximate the left side of the user's head. For example, the user moves with the mobile device in the left hand from a position directly in front of, left in front of, and right in front of the user's chest to near the left ear or left face of the user. The foregoing also falls within the scope of embodiments of the present application.
Fig. 6 (b)-6 (d) are schematic diagrams of the time curves of the acceleration components detected in real time on the X-axis, Y-axis, and Z-axis during the movement of the mobile device from the starting position toward the side of the user's head, with the starting position of the mobile device being 30 cm directly in front of, 30 cm to the left front of, and 30 cm to the right front of the user's chest, respectively. For simplicity, fig. 6 (a)-6 (d) do not plot the acceleration itself over time; the time curve of the acceleration can be decomposed into the time curves of its components on the X-axis, Y-axis, and Z-axis.
For exemplary illustration, the directions of the X-axis, Y-axis, and Z-axis in fig. 6 (b)-6 (d) are along the width, length, and height directions of the mobile device, respectively. The X-axis, Y-axis, and Z-axis may also be arranged along other directions of the mobile device, as long as they are mutually perpendicular. In addition, if the right hand described above is replaced with the left hand and the mobile device is moved to the vicinity of the left ear or the left side of the face, the measured time curve of the acceleration component on the X-axis is reversed or nearly reversed, while the curves on the Y-axis and Z-axis are the same or nearly the same as those of fig. 6 (b)-6 (d).
The curves shown in fig. 6 (b)-6 (d) are, respectively, a preset X-axis curve, a preset Y-axis curve, and a preset Z-axis curve determined before the user uses the mobile device. When the user uses the mobile device, the time curves of the acceleration components on the X-axis, Y-axis, and Z-axis of the acceleration detected in real time by the acceleration sensor on the mobile device are the X-axis curve, the Y-axis curve, and the Z-axis curve, respectively. The preset movement curve synthesized from the preset X-axis curve, the preset Y-axis curve, and the preset Z-axis curve can be corrected and adjusted by statistical means.
In addition, the curves of the acceleration components on the X-axis, Y-axis, and Z-axis detected in real time differ between the mobile device moving from the same starting position toward the side of the user's head and moving toward the front of the user's head. To illustrate this difference, fig. 6 (a) is provided. Fig. 6 (a) shows the time curves of the acceleration components on the X-axis, Y-axis, and Z-axis detected in real time with the mobile device held in the right hand and moved from 30 cm directly in front of the user's chest toward the front of the user's head. In fig. 6 (a)-6 (d), the unit on the abscissa is 10 ms, and the unit on the ordinate is the gravitational acceleration g.
While fig. 6 (a)-6 (d) are graphs of the acceleration components on the X-axis, Y-axis, and Z-axis detected in real time as the user holds the mobile device in the right hand and moves it from different starting positions to the vicinity of the user's right ear or right face, it will be appreciated by those skilled in the art that the foregoing is merely exemplary; the user may also hold the mobile device in the left hand and move it to the vicinity of the user's left ear or left face, and the corresponding curves may be plotted likewise. The foregoing also falls within the scope of embodiments of the present application.
Comparing fig. 6 (a) with fig. 6 (b): in fig. 6 (a), the acceleration component on the X-axis is first flat at about 0 g, then fluctuates in the range from -0.2 g to 0.2 g, and subsequently flattens again at about 0 g; the acceleration component on the Y-axis rises slowly from about 0.6 g to about 1 g; the acceleration component on the Z-axis is first flat at about 0.8 g, then rises to about 1.4 g, then falls to about -0.4 g, then oscillates, and finally stabilizes at about 0.3 g. In fig. 6 (b), the acceleration component on the X-axis first decreases from about 0 g to about -0.8 g, then slowly increases to about 1.2 g, and then decreases to about 0.8 g; the acceleration component on the Y-axis is relatively flat, fluctuating slowly in the interval from about 0.3 g to about 0.8 g, slowly decreasing from about 0.5 g to about 0.3 g and slowly increasing back to about 0.8 g; the acceleration component on the Z-axis starts with a large peak, which can reach about 2 g, followed by a short trough, which can reach about -3 g, after which it quickly returns to about 0 g. There is thus a significant difference between the two sets of curves: a curve that is the same as or similar to the curve shown in fig. 6 (b) will not be the same as or similar to the curve shown in fig. 6 (a). Therefore, when the starting positions are the same or similar, whether the similarity between the X-axis, Y-axis, and Z-axis curves and the preset X-axis, Y-axis, and Z-axis curves is greater than or equal to a preset similarity threshold can serve as the basis for determining whether to generate the first event. Admittedly, determining whether to generate the first event on this basis may involve some degree of error.
In embodiments of the application, this error has little impact on the user experience, since other conditions also restrict the determination, such as the mobile device being in a call state or listening to a voice message.
The data on which the preset movement curve is based can be data obtained from multiple tests and statistical processing before the mobile device goes to market, for example by computing the average or median of the multiple test data and then plotting the preset movement curve from the computed data. The movement curve is then compared with the preset movement curves in turn; if the movement curve and a preset movement curve are the same, or their similarity is greater than or equal to a preset similarity threshold, the first event is generated. If, after the movement curve has been compared with all preset movement curves, they all differ or all similarities are smaller than the preset similarity threshold, no first event is generated.
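The averaging and comparison steps above can be sketched as follows. This is a minimal illustration, assuming point-by-point averaging of equally sampled test curves and an inverse-mean-absolute-difference similarity; the embodiment does not prescribe a particular similarity metric, so the function names and the metric are assumptions.

```python
# Sketch: build a preset movement curve from multiple test runs,
# then decide whether to generate the first event by comparing a
# measured curve against the preset curves. The similarity metric
# below is an illustrative assumption, not the patented method.

def build_preset_curve(test_runs):
    """Average several equally sampled test curves point-by-point."""
    n = len(test_runs)
    return [sum(samples) / n for samples in zip(*test_runs)]

def similarity(curve, preset):
    """Return a similarity in [0, 1]; 1.0 means identical curves."""
    diffs = [abs(a - b) for a, b in zip(curve, preset)]
    span = max(max(abs(v) for v in preset), 1e-9)
    return max(0.0, 1.0 - (sum(diffs) / len(diffs)) / span)

def first_event(curve, presets, threshold=0.9):
    """Generate the first event if any preset curve matches closely."""
    return any(similarity(curve, p) >= threshold for p in presets)
```

In practice one such comparison would be run per axis (X, Y, Z), with the per-axis results combined before the first event is generated.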
When calculating the similarity between the movement curve and the preset movement curve, features of the movement curve and the preset movement curve can be acquired, and whether to generate the first event can be determined based on the acquired features. For the movement curve, the acquired features may be: the monotonicity changes of the X-axis, Y-axis, and Z-axis curves, and the number of peaks and troughs and their changes contained in each of those curves. For the preset movement curve, the acquired features may be: the monotonicity changes of the preset X-axis, Y-axis, and Z-axis curves, and the number of peaks and troughs and their changes contained in each. The features above are merely exemplary and do not limit the scope of the acquired features; other features that can be used to identify a curve also fall within that scope.
The specific manner of acquisition is further illustrated using the features from the exemplary description above. The features of the movement curve and the preset movement curve may be acquired as follows: first, divide the time on the abscissa of the movement curve and the preset movement curve into several periods; express the movement curve and the preset movement curve in each period with an approximate function; differentiate the approximate function in each period, and obtain from the result the monotonicity changes, such as monotonically increasing, monotonically decreasing, monotonically increasing then monotonically decreasing, monotonically decreasing then monotonically increasing, and so on; from the monotonicity changes, obtain the transition points that monotonically increase then monotonically decrease and the transition points that monotonically decrease then monotonically increase. The former transition points are peaks and the latter are troughs, so the number of peaks and troughs and their changes can be obtained. The specific acquisition method above is merely an exemplary illustration of feature acquisition and does not limit the scope of feature-acquisition methods; other ways of acquiring the features also fall within the scope of embodiments of the application.
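The peak-and-trough step can be sketched directly on sampled data. In this sketch, discrete differences between consecutive samples stand in for the derivative of the per-period approximate function; the function name is an assumption for illustration.

```python
# Sketch of the feature-acquisition step: detect monotonicity
# changes in a sampled curve and count peaks (increase followed by
# decrease) and troughs (decrease followed by increase). Discrete
# differences approximate the derivative described in the text.

def peaks_and_troughs(samples):
    """Return (number of peaks, number of troughs) of a sampled curve."""
    # Sign of the discrete derivative; flat segments are skipped.
    signs = []
    for a, b in zip(samples, samples[1:]):
        if b > a:
            signs.append(1)
        elif b < a:
            signs.append(-1)
    peaks = troughs = 0
    for s_prev, s_next in zip(signs, signs[1:]):
        if s_prev == 1 and s_next == -1:
            peaks += 1        # transition: increase then decrease
        elif s_prev == -1 and s_next == 1:
            troughs += 1      # transition: decrease then increase
    return peaks, troughs
```

The resulting counts for the measured curve would then be compared against the counts extracted from the preset curve.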
Although fig. 6 (b) -6 (d) show three common starting positions of the mobile device, other starting positions may exist in actual use. In addition, the mobile device typically also rotates about itself through an angle during movement of the mobile device toward the side of the user's head. In order to cope with more complex situations, a gyroscope may be further arranged on the mobile device, and the similarity between a rotation curve obtained by combining the rotation angles detected by the gyroscope in real time and a preset rotation curve is used for assisting in determining whether the first event is generated. The similarity comparison between the rotation curve obtained by using the rotation angle detected by the gyroscope and the preset rotation curve is similar to that between the movement curve and the preset movement curve, and will not be described here.
Embodiment Two
For the case shown in fig. 2 (b), in which the user places the mobile device in an irregular posture when making a call, receiving a call, or listening to a voice message, the mobile device cannot receive the second event because of where the proximity sensor 180G is positioned, so the false touch prevention method of the first embodiment cannot effectively handle this case.
Therefore, a false touch prevention method according to the second embodiment of the present application is provided; it relates to fig. 8 (a)-8 (b). In the second embodiment of the present application, the starting position of the mobile device may be any position. Considering the common scenarios of a mobile device in actual use, for ease of illustration, the starting position of the mobile device is illustrated as a position directly in front of the user's chest (fig. 4 (a)). Of course, a starting position to the left front of the user's chest (not shown in the figures) or to the right front of the user's chest (fig. 4 (d)) is also a common scenario in actual use, and is not explained again here.
As shown in fig. 8 (a)-8 (b), the false touch prevention method of the second embodiment of the present application is applied to a mobile device based on the system. The intelligent sensor hub determines whether to generate the first event based on the received sensor data. When the intelligent sensor hub generates the first event, it reports the first event to the Sensor Driver of the application processor AP kernel layer, and the Sensor Driver then starts a non-response flow, such as a freeze flow. Specifically, the Sensor Driver sends a non-response instruction, such as a freeze instruction, to the TouchScreen Driver, so that from the moment the Sensor Driver receives the first event, the mobile device does not respond to touch operations on its touch screen; the touch points of such touch operations are located in a partial area or any area of the touch screen. The partial area includes: the status bar at the upper part of the touch screen, and the dial and the navigation bar at the middle-lower part of the touch screen. After a certain condition is met, such as the call ending, or a preset gesture operation and/or a preset side-key operation being detected, the mobile device again responds to touch operations on its touch screen, the touch points of which are located in a partial area or any area of the touch screen. The freeze flow and the freeze instruction above are merely one specific form of the non-response flow and the non-response instruction, respectively, and are not intended to limit the scope of the non-response flow or the non-response instruction.
Any method that interrupts the link between a touch point on the touch screen of the mobile device and its subsequent processing is an implementation of the non-response flow or the non-response instruction, and also falls within the scope of embodiments of the present application.
Unlike the first embodiment of the present application, the second embodiment of the present application does not rely on the second event and does not consider the preset duration. That is, no proximity sensor need be provided on the mobile device; of course, a proximity sensor may be provided, but no operation is performed based on its detection data. While the mobile device is not responding to touch operations on its touch screen, the touch screen remains lit; only once a preset condition is met does the mobile device return to the normal state. In this way, when the posture in which the user places the mobile device at the side of the head is not standard and the first event is generated, the mobile device does not respond to touch operations on its touch screen, the touch points of which are located in a partial area or any area of the touch screen. Such non-response is no longer limited by the preset duration; to remove it, a certain condition must be satisfied, such as the call ending, or a preset gesture operation and/or a preset side-key operation being detected. When the condition is met, the mobile device returns to the normal state. The second embodiment of the application effectively solves the technical problem of the side of the user's head falsely touching the touch screen when the posture in which the user places the mobile device at the side of the head is not standard while making a call, answering a call, or listening to a voice message.
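The second embodiment's freeze-until-condition behavior can be sketched as a small state machine. The class and the release-condition names below are illustrative assumptions; the embodiment specifies only that the freeze ends when the call ends or a preset gesture or side-key operation is detected.

```python
# Minimal state sketch of the second embodiment: the non-response
# state is not bounded by a preset duration; it is lifted only when
# a release condition holds (call ended, preset gesture, or preset
# side-key operation). Names here are illustrative assumptions.

class TouchFreeze:
    def __init__(self):
        self.frozen = False

    def on_first_event(self):
        # First event generated: start the non-response flow.
        self.frozen = True

    def on_condition(self, call_ended=False, preset_gesture=False,
                     preset_side_key=False):
        # Restore normal response once any release condition is met.
        if call_ended or preset_gesture or preset_side_key:
            self.frozen = False

    def responds_to_touch(self):
        return not self.frozen
```

Note that, unlike the first embodiment, no timer appears anywhere in this sketch: only the release conditions end the freeze, which matches the "no preset duration" behavior described above.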
Unless otherwise specified, the relevant contents of the second embodiment of the present application are the same as those of the first embodiment of the present application. And will not be described in detail herein.
Embodiment Three
As described above, compared with the solution of the first embodiment, some suboptimal solutions may also be provided that improve on only one aspect, such as shortening the processing duration of the flow.
Therefore, a false touch prevention method according to the third embodiment of the application is provided. The false touch prevention method is applied to a mobile device and relates to fig. 9 (a)-9 (b). In the third embodiment of the present application, the starting position of the mobile device may be any position, including but not limited to a position directly in front of the user's chest (fig. 4 (a)), a position to the left front of the user's chest (not shown in the drawings), and a position to the right front of the user's chest (fig. 4 (d)).
The false touch prevention method shown in fig. 9 (a)-9 (b) is applied to a mobile device based on the system. When the intelligent sensor hub generates the second event, it uploads the second event to the Sensor Driver of the kernel layer of the application processor AP. The Sensor Driver of the kernel layer no longer uploads the second event to the hardware abstraction layer HAL or to the framework FWK layer above the HAL. Instead, the kernel layer of the application processor AP controls the mobile device not to respond to touch operations on its touch screen; the touch points of such touch operations are located in a partial area or any area of the touch screen. The partial area includes: the status bar at the upper part of the touch screen, and the dial and the navigation bar at the middle-lower part of the touch screen.
The freeze flow and the freeze instruction above are merely one specific form of the non-response flow and the non-response instruction, and are not intended to limit the scope of the non-response flow or the non-response instruction. Any method that interrupts the link between a touch point on the touch screen of the mobile device and its subsequent processing is an implementation of the non-response flow or the non-response instruction, and also falls within the scope of embodiments of the present application.
Compared with the flow shown in fig. 3, the flow shown in fig. 9 (a)-9 (b) of the third embodiment of the application is clearly simplified, and the processing time is correspondingly shortened, which reduces or even avoids false touches. This solves the technical problem that, when the user holds the mobile device and brings it near the side of the head to talk or listen to a voice message, the mobile device does not turn off the touch screen in time, the side of the user's head easily false-touches the touch screen, inconvenience is caused to the user, and the user experience is affected.
Unless otherwise specified, the relevant contents of the third embodiment of the present application are the same as those of the first embodiment of the present application. And will not be described in detail herein.
Embodiment Four
Compared with the solution of the first embodiment, some suboptimal solutions may be provided that improve on only one aspect, such as advancing the start time of the flow processing.
Therefore, a false touch prevention method according to the fourth embodiment of the present application is provided. The false touch prevention method is applied to a mobile device and relates to fig. 10 (a)-10 (b). In the fourth embodiment of the present application, the starting position of the mobile device may be any position, including but not limited to a position directly in front of the user's chest (fig. 4 (a)), a position to the left front of the user's chest (not shown), and a position to the right front of the user's chest (fig. 4 (d)).
The false touch prevention method shown in fig. 10 (a)-10 (b) is applied to a mobile device based on the system. When the intelligent sensor hub generates the first event, it uploads the first event to the Sensor Driver of the kernel layer of the application processor AP; the Sensor Driver uploads the first event to the Sensor Hidl service of the hardware abstraction layer HAL; the Sensor Hidl service of the HAL in turn uploads it to the SensorManager of the framework FWK layer, which then transfers it to the PowerManager of the framework FWK layer; the PowerManager of the framework FWK layer then starts the screen-off flow and sends a screen-off instruction; the screen-off instruction is transmitted through the hardware composer HWC of the hardware abstraction layer HAL to the kernel layer of the application processor AP, and the screen-off flow is executed.
Compared with the flow shown in fig. 3, the flows shown in fig. 10 (a)-10 (b) of the fourth embodiment of the present application use the same screen-off flow, but the start time is advanced from the generation of the second event to the generation of the first event. This shortens the processing time of the screen-off flow and reduces or even avoids false touches, solving the technical problem that, when the user holds the mobile device and brings it near the side of the head to talk or listen to a voice message, the screen-off processing of the mobile device is not timely, the side of the user's head easily false-touches the touch screen, inconvenience is caused to the user, and the user experience is affected.
Unless otherwise specified, the relevant contents of the fourth embodiment of the present application are the same as those of the first embodiment of the present application. And will not be described in detail herein.
In addition, in the first to fourth embodiments, fig. 7 shows the partial areas of the touch screen of the mobile device where false touches are likely to occur while the user holds the mobile device in use. Among these partial areas, the status bar 601 is located at the upper part of the touch screen and is mainly an area easily false-touched by the user's ear; the dial 602 and the navigation bar 603 are located at the middle-lower part of the touch screen and are mainly areas easily false-touched by the side of the user's face. Of course, the status bar 601, the dial 602, and the navigation bar 603 may correspond not only to the user's ear and the side of the user's face, respectively, but also to other parts of the side of the user's head. Specifically, the mobile device may decline to respond to touch operations in the status bar 601, the dial 602, and the navigation bar 603 of the touch screen by masking the touch report points of those areas.
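The report-point masking described above can be sketched as a simple coordinate filter. The pixel coordinates below are illustrative assumptions for a 1080 × 2340 screen; the embodiment defines the regions only as the status bar at the top and the dial and navigation bar at the middle-lower part.

```python
# Sketch of masking touch report points in the regions of fig. 7:
# status bar (upper part), dial and navigation bar (middle-lower
# part). All pixel coordinates are illustrative assumptions, not
# values given in the embodiment.

SCREEN_W, SCREEN_H = 1080, 2340

MASKED_REGIONS = {                       # (left, top, right, bottom)
    "status_bar":     (0, 0, SCREEN_W, 100),
    "dial":           (0, 1200, SCREEN_W, 2200),
    "navigation_bar": (0, 2200, SCREEN_W, SCREEN_H),
}

def report_point(x, y, freeze_active):
    """Return the touch point, or None if its report is masked."""
    if freeze_active:
        for left, top, right, bottom in MASKED_REGIONS.values():
            if left <= x < right and top <= y < bottom:
                return None          # point suppressed, no response
    return (x, y)
```

With the freeze active, a touch in the status bar or navigation bar is simply never reported upward, while touches elsewhere (if the "partial area" variant is used) still pass through.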
In addition, in the first embodiment, the time required from receipt of the second event to completion of the screen-off of the touch screen of the mobile device is generally 200 ms-800 ms. Therefore, the preset duration T only needs to be any duration from 1 s to 2 s; it may also be adjusted as needed.
In addition, in the first to fourth embodiments, the acceleration sensor and/or the proximity sensor provided on the mobile device are not limited to one each; a plurality of sensors may be provided. The proximity sensor includes, but is not limited to, a proximity light sensor, an infrared emitter and receiver, or an audio emitter and receiver. The proximity sensor may be placed at the top of the touch screen of the mobile device, facing the user when the mobile device is in a call or listening to a voice message. A gyroscope may further be provided on the mobile device.
In addition, in the first to fourth embodiments, the front camera on the mobile device may be used alone to collect images, and whether the second event, in which the mobile device approaches the side of the user's head, is generated may be determined by identifying whether an image contains an ear image; alternatively, the front camera and the acceleration sensor of the mobile device may be combined to improve the accuracy with which the mobile device determines whether the second event is generated. The ear image here is an ear image collected from the side of the head, not from the front or other sides of the head, to ensure that the collected image contains the complete ear contour and the concave-convex image of the inside of the ear.
In addition, in the first to fourth embodiments, the making and receiving of calls by the user is not limited to voice calls through a telecom operator's network, and also includes voice calls through voice communication applications; other voice communication applications not listed here are likewise included.
Fig. 11 shows a mobile device 1100 according to the present application. By way of example, the mobile device 1100 includes at least one processor 1110, a memory 1120, and a touch screen 1130. The processor 1110 is coupled to the memory 1120 and the touch screen 1130; the coupling in the embodiments of the present application may be a communication connection, an electrical connection, or another form.
Specifically, the memory 1120 is used to store program instructions, and the touch screen 1130 is used to display a user interface. The processor 1110 is configured to invoke the program instructions stored in the memory 1120 to cause the mobile device 1100 to perform the steps performed by the mobile device in the false touch prevention method provided by the embodiments of the present application. It should be appreciated that the mobile device 1100 may be used to implement the false touch prevention method provided in the embodiments of the present application; the relevant features may be referred to above and are not described again here.
On the basis of the false touch prevention method provided by any one of the possible implementations of the first to fourth embodiments described above, a fifth embodiment further provides a mobile device, comprising at least: a memory, one or more processors, a plurality of application programs, and one or more computer programs; wherein the one or more computer programs are stored in the memory, and the one or more processors, when executing the one or more computer programs, cause the mobile device to implement the false touch prevention method in any one of those possible implementations.
On the basis of the false touch prevention method provided by any one of the possible implementations of the first to fourth embodiments described above, a computer-readable storage medium is further provided. The computer-readable storage medium is provided on the mobile device of the fifth embodiment and stores a false touch prevention program, which is used to execute the false touch prevention method mentioned in any one of those possible implementations.
It will be apparent to those skilled in the art that the descriptions of the embodiments of the present application may be read with reference to one another: the functions and steps performed by the apparatuses and devices provided in the embodiments of the present application may be understood with reference to the related descriptions of the method embodiments, and the method embodiments and the apparatus embodiments may likewise be cross-referenced.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways without exceeding the scope of the application. For example, the embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in an actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
In addition, the described apparatus and methods, as well as the illustrations of the various embodiments, may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the application. The couplings, direct couplings, or communication connections shown or discussed between components may alternatively be indirect couplings or communication connections via interfaces, devices, or units, and may be electronic, mechanical, or in other forms.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (5)

1. A false touch prevention method, applied to a mobile device, characterized by comprising:
when the mobile device is in an outgoing-call state, an answering state, or a voice-message listening state, after the smart sensor hub of the mobile device determines that a first event is generated, starting a no-response flow, comprising: sending a freeze instruction to a touch screen driver of the application processor (AP) kernel layer through a sensor driver of the AP kernel layer, wherein the freeze instruction causes the mobile device not to respond, within a preset time period after the first event is received, to a touch operation on the touch screen of the mobile device, a touch point or touch surface of the touch operation being located in a partial area or any area of the touch screen;
within the preset time period, the smart sensor hub determines that a second event is generated;
the smart sensor hub sends the second event to the PowerManager of the framework (FWK) layer, passing in sequence through the sensor driver of the AP kernel layer, the Sensor HIDL service of the hardware abstraction layer (HAL), and the SensorManager of the FWK layer;
in response to the second event, the PowerManager of the FWK layer sends a screen-off instruction to the touch screen through the hardware composer (HWC) of the HAL;
after receiving the screen-off instruction, the touch screen executes a screen-off process and completes the screen-off process within the preset time period;
wherein the first event comprises:
after the mobile device changes from a stationary state to a moving state, when the similarity between a movement curve obtained through the acceleration sensor of the mobile device and a preset movement curve is greater than or equal to a preset similarity threshold, determining that an event of the mobile device moving toward the side of the user's head is generated;
or
after the mobile device changes from a stationary state to a moving state, when the mobile device detects that an image collected by its front-facing camera includes an image of a human ear, determining that an event of the mobile device moving toward the side of the user's head is generated;
or
after the mobile device changes from a stationary state to a moving state, when the mobile device detects both that the similarity between a movement curve obtained through its acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold and that an image obtained through its front-facing camera includes an image of a human ear, determining that an event of the mobile device moving toward the side of the user's head is generated;
wherein the second event comprises:
detecting that the distance between the mobile device and the side of the user's head is smaller than a preset threshold, and determining that an event of the mobile device approaching the side of the user's head is generated;
or
detecting that the intensity value of a reflected signal, received by the mobile device, of a signal transmitted by the mobile device itself is greater than or equal to a preset threshold, and determining that an event of the mobile device approaching the side of the user's head is generated, wherein the signal is an electromagnetic wave signal or an audio signal;
or
detecting that the intensity value of a reflected signal, of a signal transmitted by the mobile device itself, received at a first position of the mobile device is greater than or equal to the intensity value received at a second position, and determining that an event of the mobile device approaching the side of the user's head is generated, wherein the first position is farther than the second position from the part of the mobile device that transmits the signal, and the signal is an electromagnetic wave signal or an audio signal;
or
detecting that the area over which the capacitance value of the capacitive touch screen of the mobile device changes is greater than or equal to a first preset threshold and/or that the capacitance value is greater than or equal to a second preset threshold, and determining that an event of the mobile device approaching the side of the user's head is generated.
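The overall flow recited in claim 1 (freeze touch input on the first event, then turn the screen off if the second event arrives within the preset period) can be sketched as a small state machine. This is an illustrative sketch under assumed names and timings, not the claimed Android-layer implementation (smart sensor hub, sensor driver, HAL, FWK).

```python
# Illustrative state machine (assumption, not the claimed Android-layer
# implementation) for the flow of claim 1: on the first event, freeze touch
# input for a preset period; if the second event arrives inside that period,
# turn the screen off.

class FalseTouchGuard:
    def __init__(self, freeze_seconds: float = 2.0):
        self.freeze_seconds = freeze_seconds   # the "preset time period"
        self.freeze_until = 0.0                # end of the current freeze window
        self.screen_on = True

    def on_first_event(self, now: float) -> None:
        # Freeze: touch operations on the screen are not responded to
        # until the window expires.
        self.freeze_until = now + self.freeze_seconds

    def on_second_event(self, now: float) -> None:
        # Turn the screen off only if still inside the freeze window.
        if now < self.freeze_until:
            self.screen_on = False

    def touch_accepted(self, now: float) -> bool:
        # A touch is handled only when the screen is on and no freeze is active.
        return self.screen_on and now >= self.freeze_until
```

For example, a first event at t = 0 with a 2-second window rejects a touch at t = 1; a second event at t = 1.5 then turns the screen off, so later touches are also rejected, whereas a second event arriving after the window leaves the screen on.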
2. The false touch prevention method according to claim 1, wherein the mobile device detecting that the similarity between a movement curve obtained through its own acceleration sensor and a preset movement curve is greater than or equal to a preset similarity threshold comprises: the acceleration sensor has an X axis, a Y axis, and a Z axis that are perpendicular to one another; the movement curve and the preset movement curve each comprise an X-axis curve, a Y-axis curve, and a Z-axis curve; features of the movement curve are determined from the monotonicity of the X-axis, Y-axis, and Z-axis curves, the numbers of peaks and troughs contained in those curves, and the variation of the peaks and troughs; and the similarity between the movement curve and the preset movement curve is determined based on the features of the movement curve and the features of the preset movement curve.
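The feature comparison described in claim 2 can be sketched as follows: extract, per axis, the peak count, trough count, and sequence of monotonic direction changes, then compare against the preset curve's features. This is a simplified stand-in; the exact feature set and similarity measure used by the method are not specified here, and all names are illustrative.

```python
# Simplified sketch (assumption) of the per-axis features named in claim 2:
# monotonicity changes plus peak/trough counts, compared axis by axis
# against a preset curve. The similarity measure is a stand-in.

def axis_features(samples):
    """Summarize one axis: (#peaks, #troughs, sequence of monotonic directions)."""
    peaks = troughs = 0
    for prev, cur, nxt in zip(samples, samples[1:], samples[2:]):
        if cur > prev and cur > nxt:
            peaks += 1
        elif cur < prev and cur < nxt:
            troughs += 1
    directions = []
    for a, b in zip(samples, samples[1:]):
        d = (b > a) - (b < a)   # +1 rising, -1 falling, 0 flat
        if d and (not directions or directions[-1] != d):
            directions.append(d)
    return peaks, troughs, tuple(directions)

def curve_similarity(curve, preset):
    """curve/preset: dicts mapping 'x'/'y'/'z' to accelerometer sample lists.
    Returns the fraction of axes whose extracted features match exactly."""
    axes = ("x", "y", "z")
    matches = sum(axis_features(curve[ax]) == axis_features(preset[ax])
                  for ax in axes)
    return matches / len(axes)
```

A curve identical to the preset scores 1.0; a curve differing on one axis scores 2/3, which would then be compared against the preset similarity threshold.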
3. The false touch prevention method according to claim 1 or 2, wherein the partial area comprises: the status bar at the upper part of the touch screen, the dial pad at the middle and lower parts of the touch screen, and the navigation bar.
4. A mobile device, comprising at least: a memory, one or more processors, one or more application programs, and one or more computer programs; wherein the one or more computer programs are stored in the memory, and the one or more processors, when executing the one or more computer programs, cause the mobile device to implement the false touch prevention method according to any one of claims 1 to 3.
5. A computer-readable storage medium comprising instructions which, when run on the mobile device according to claim 4, cause the mobile device to perform the false touch prevention method according to any one of claims 1 to 3.
CN201911128761.4A 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium Active CN112817512B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911128761.4A CN112817512B (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium
CN202410468591.9A CN118426615A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium
PCT/CN2020/129080 WO2021098644A1 (en) 2019-11-18 2020-11-16 Inadvertent touch prevention method, mobile device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911128761.4A CN112817512B (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410468591.9A Division CN118426615A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112817512A (en) 2021-05-18
CN112817512B (en) 2024-05-28

Family

Family ID: 75852732

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410468591.9A Pending CN118426615A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium
CN201911128761.4A Active CN112817512B (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410468591.9A Pending CN118426615A (en) 2019-11-18 2019-11-18 False touch prevention method, mobile device and computer readable storage medium

Country Status (2)

Country Link
CN (2) CN118426615A (en)
WO (1) WO2021098644A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115390687A (en) * 2021-05-21 2022-11-25 荣耀终端有限公司 Display screen control method and electronic equipment
CN116527809A (en) * 2022-01-24 2023-08-01 北京小米移动软件有限公司 Terminal control method, device, terminal and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103941994A (en) * 2013-01-23 2014-07-23 中兴通讯股份有限公司 Sensing screen locking method and device of touch screen
CN105988580A (en) * 2015-04-28 2016-10-05 乐视移动智能信息技术(北京)有限公司 Screen control method and device of mobile terminal
CN109582197A (en) * 2018-11-30 2019-04-05 北京小米移动软件有限公司 Screen control method, device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8952987B2 (en) * 2011-05-19 2015-02-10 Qualcomm Incorporated User interface elements augmented with force detection
JP6215128B2 (en) * 2014-04-24 2017-10-18 京セラ株式会社 Portable electronic device, control method and control program
CN107395897A (en) * 2017-08-24 2017-11-24 惠州Tcl移动通信有限公司 Mobile terminal goes out control method, storage device and the mobile terminal of screen
CN109257505B (en) * 2018-11-07 2021-06-29 维沃移动通信有限公司 Screen control method and mobile terminal
CN109756623A (en) * 2018-12-28 2019-05-14 Oppo广东移动通信有限公司 control method, control device, electronic device and storage medium


Also Published As

Publication number Publication date
CN112817512A (en) 2021-05-18
WO2021098644A1 (en) 2021-05-27
CN118426615A (en) 2024-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant