CN112527180B - Touch event identification method, device, equipment and storage medium


Info

Publication number
CN112527180B
CN112527180B (application CN201910877252.5A)
Authority
CN
China
Prior art keywords
touch
frame
signal
area
event
Prior art date
Legal status (the status is an assumption, not a legal conclusion)
Active
Application number
CN201910877252.5A
Other languages
Chinese (zh)
Other versions
CN112527180A
Inventor
吴静 (Wu Jing)
唐矩 (Tang Ju)
Current Assignee (listing may be inaccurate)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (assumed, not a legal conclusion)
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910877252.5A
Publication of CN112527180A
Application granted
Publication of CN112527180B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416 Control or interface arrangements specially adapted for digitisers

Abstract

The present disclosure provides a touch event recognition method, apparatus, device, and storage medium. The method includes: if a touch signal is currently detected, using the touch-signal areas of several consecutive frames, including the current frame, to judge whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, and if so, determining that the current touch event is a touch-up event. Because the variation trend of the touch-signal area is used as the criterion for the lift-off action, misjudgments of the lift-off action caused by jitter and the like can be avoided, and the accuracy of lift-off detection is improved.

Description

Touch event identification method, device, equipment and storage medium
Technical Field
The present application relates to the field of touch technologies, and in particular, to a method, an apparatus, a device, and a storage medium for identifying a touch event.
Background
With the continuous development of information technology, electronic devices such as smartphones and tablets play an increasingly important role in daily life, and with the rapid development of screen technology, displays supporting touch operation have become standard equipment on electronic devices.
A Touch Panel (TP) is a device that accepts input signals. Touch screens include resistive touch screens, capacitive touch screens, and so on. Taking a capacitive touch screen as an example: when a user touches the screen, the human-body electric field forms a coupling capacitance between the user and the screen surface. Because a high-frequency signal is applied to the screen, the finger draws a small current, which flows out through the electrodes at the four corners of the screen. In theory, the current through each of the four electrodes is related to the distance from the finger to the corresponding corner, so the position of the touch point can be computed from the proportions of the four currents, and a touch event containing touch data such as the touch-point position is reported to the application layer. Touch events include a touch-down event (Down), a touch-move event (Move), and a touch-up event (Up).
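As a rough illustration of the corner-current principle described above (not the patent's own algorithm), the touch position can be estimated from the share of total current drawn through each corner electrode. The corner naming, the linear model, and the function signature are assumptions for this sketch.

```python
def estimate_touch_position(i_tl, i_tr, i_bl, i_br, width, height):
    """Estimate (x, y) from the currents through the four corner
    electrodes (top-left, top-right, bottom-left, bottom-right),
    assuming each corner's share of the total current varies
    linearly with the touch position."""
    total = i_tl + i_tr + i_bl + i_br
    x = width * (i_tr + i_br) / total   # more current on the right side => larger x
    y = height * (i_bl + i_br) / total  # more current at the bottom => larger y
    return x, y

# Equal currents at all four corners put the touch at the screen center.
print(estimate_touch_position(1.0, 1.0, 1.0, 1.0, 100.0, 100.0))  # (50.0, 50.0)
```

A real controller additionally calibrates for panel nonlinearity and noise; this sketch only shows the ratio idea.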
Currently, the same threshold is often used to determine both touch-down and touch-up events. For example, whether a touch signal corresponds to a touch-down event or a touch-up event is decided by comparing the touch signal value with a fixed threshold. However, the inventors found that, even for the same touch screen, judging the type of touch event with a single threshold may be inaccurate.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a touch event recognition method, apparatus, device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a touch event recognition method, the method including:
if a touch signal is currently detected, judging, by using the touch-signal areas of several consecutive frames including the current frame, whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, where the touch-signal area is determined by the number of signal acquisition modules whose acquired values are greater than a specified signal threshold;
and if the specified area-attenuation condition is satisfied, determining that the current touch event is a touch-up event.
In one embodiment, the specified area attenuation condition includes one or more of the following conditions:
within M consecutive frames, the touch-signal areas decrease monotonically in time order;
for two specified frames among the M consecutive frames, the ratio of the touch-signal area of the later frame to the touch-signal area of the earlier frame is less than or equal to a set ratio.
In one embodiment, the two specified frames among the M consecutive frames include any one of the following:
the earlier frame is the first of the M consecutive frames, and the later frame is the current frame;
any two adjacent frames among the M consecutive frames;
any two frames among the M consecutive frames separated by an interval of P frames, where 1 ≤ P ≤ M-1;
the earlier frame is the first of the M consecutive frames, and the later frame is any non-first frame;
the earlier frame is any non-last frame among the M consecutive frames, and the later frame is the last frame.
In one embodiment, in determining the touch-signal area of the current frame, the specified signal threshold used is the signal threshold of the previous frame;
the signal threshold of any frame is obtained by multiplying a drop-signal threshold, which is the threshold used to determine a touch-down event, by a specified adjustment coefficient that is positively correlated with the attenuation of the touch-signal area.
In one embodiment, the method further comprises:
when the current touch event is not a touch-up event but a touch-move event, the signal threshold of the current frame is obtained by multiplying the drop-signal threshold by the specified adjustment coefficient.
In one embodiment, the method further comprises:
for the same touch object, if no touch-down event has been detected, then when a touch signal is detected, determining the touch signal value from the values currently acquired by the signal acquisition modules;
and comparing the determined touch signal value with the drop-signal threshold, and identifying whether the current touch event is a touch-down event according to the comparison result.
In one embodiment, the method comprises:
if the current event is a touch-move event, predicting, from the position-change information of the touch points in at least two frames preceding the (N+1)-th frame of the touch signal, whether the action between the N-th frame and the (N+1)-th frame is jitter of a stationary touch object or a slide;
if it is jitter of a stationary touch object, using the touch-point position of the N-th frame as the reported position of the touch point of the (N+1)-th frame;
if it is a slide, using the touch-point position of the (N+1)-th frame, or a position close to it, as the reported position of the touch point of the (N+1)-th frame.
In one embodiment, predicting, from the position-change information of the touch points in at least two frames preceding the (N+1)-th frame of the touch signal, whether the action between the N-th frame and the (N+1)-th frame is jitter of a stationary touch object or a slide includes:
determining a point-stability range of the N-th frame, using a value converted from the distance between the touch point of the N-th frame and the touch point of the (N-1)-th frame as the range reference and the touch point of the N-th frame as the reference point;
if the touch-point position of the (N+1)-th frame falls within the determined point-stability range, the action between the N-th frame and the (N+1)-th frame is jitter of a stationary touch object; otherwise, it is a slide.
In one embodiment, the touch-point position of the (N+1)-th frame is predicted from the actual touch-point positions of at least two frames preceding the (N+1)-th frame of the touch signal.
In one embodiment, the touch-point position of the (N+1)-th frame is the actual touch-point position of the (N+1)-th frame, and the (N+1)-th frame is the current frame.
In one embodiment, the range reference is obtained from the product of the distance and a specified scale value, where the specified scale value is less than or equal to 1.
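A minimal sketch of the jitter-versus-slide decision described in these embodiments, assuming Euclidean distance, a scale value of 0.5, and that the (N+1)-th frame position is already available. The function and parameter names are illustrative, not from the patent.

```python
import math

def report_point(p_prev, p_curr, p_next, scale=0.5):
    """Decide the reported position for frame N+1.

    p_prev, p_curr, p_next: (x, y) touch points of frames N-1, N, N+1.
    The point-stability range is a circle around the frame-N point whose
    radius is the frame-(N-1)-to-frame-N distance times scale (<= 1).
    """
    radius = math.dist(p_curr, p_prev) * scale
    if math.dist(p_next, p_curr) <= radius:
        return p_curr  # jitter while the touch object stays: freeze the point
    return p_next      # slide: report the frame-(N+1) position

# A small wobble stays pinned to the frame-N point; a real slide passes through.
print(report_point((0, 0), (10, 0), (11, 0)))  # (10, 0)
print(report_point((0, 0), (10, 0), (20, 0)))  # (20, 0)
```

Scaling the radius by the previous frame-to-frame distance makes the stability range adaptive: fast movement widens it, near-stationary contact shrinks it.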
According to a second aspect of embodiments of the present disclosure, there is provided a touch event recognition apparatus, the apparatus comprising:
a condition judgment module configured to, if a touch signal is currently detected, judge, by using the touch-signal areas of several consecutive frames including the current frame, whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, where the touch-signal area is determined by the number of signal acquisition modules whose acquired values are greater than a specified signal threshold;
and an event identification module configured to determine that the current touch event is a touch-up event if the specified area-attenuation condition is satisfied.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
if a touch signal is currently detected, judging, by using the touch-signal areas of several consecutive frames including the current frame, whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, where the touch-signal area is determined by the number of signal acquisition modules whose acquired values are greater than a specified signal threshold;
and if the specified area-attenuation condition is satisfied, determining that the current touch event is a touch-up event.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of any of the methods described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the embodiment of the disclosure provides a scheme for identifying a touch event, in which if a touch signal is currently detected, the area of the touch signal of several consecutive frames including the current frame is used to determine whether the attenuation of the area of the touch signal meets an attenuation condition of a designated area, and if the attenuation condition of the designated area is met, the current touch event is determined to be a touch-up event. In the embodiment, the variation trend of the touch signal area is used as the judgment condition of the lifting action, so that the misjudgment of the lifting action caused by shaking and the like can be avoided, and the judgment accuracy of the lifting action is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a touch event recognition method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a signal area change according to an exemplary embodiment of the present disclosure.
FIG. 3 is a schematic diagram illustrating one type of finger drop and lift determination using different thresholds according to an exemplary embodiment of the present disclosure.
Fig. 4A is a schematic diagram of a point-stability range illustrated by the present disclosure.
FIG. 4B is a flowchart illustrating a touch point hit location determination method according to an exemplary embodiment of the present disclosure.
FIG. 4C is a schematic diagram illustrating a touch point location within multiple frames according to an exemplary embodiment of the present disclosure.
FIG. 5 is a flow chart illustrating another touch event recognition method according to an exemplary embodiment of the present disclosure.
FIG. 6 is a flow chart illustrating another touch point hit location determination method according to an example embodiment of the present disclosure.
FIG. 7 is a block diagram illustrating a touch event recognition device according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram illustrating a touch event recognition apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
A touch object is an operation object that can be recognized by the screen, such as a finger, a palm, or a stylus. Touch events include a touch-down event (Down event), a touch-move event (Move event), and a touch-up event (Up event). A touch-down event, which may also be called a touch-start event, is triggered when the touch object touches the screen and generates the first touch point. A Down event may include, but is not limited to, touch data such as the touch-position (x, y) coordinates, the touch pressure, and a unique pointer identifier (touch ID), which are sent to the processing module. A touch-move event, which may also be called a touch-slide event or touch-intermediate event, is triggered by a touch point while the touch object moves. A touch-up event, which may also be called a touch-end event, is triggered when the touch object leaves the touch screen.
Currently, the same threshold is often used to determine both touch-down and touch-up events. For example, whether a touch signal corresponds to a touch-down event or a touch-up event is decided by comparing the touch signal value with a fixed threshold. However, the inventors found that, even for the same touch screen, judging the type of touch event with a single threshold may be inaccurate.
In view of this, the embodiments of the present disclosure provide a touch event identification scheme in which, if a touch signal is currently detected, the touch-signal areas of several consecutive frames including the current frame are used to judge whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, and if so, the current touch event is determined to be a touch-up event. Because the variation trend of the touch-signal area is used as the criterion for the lift-off action, misjudgments of the lift-off action caused by jitter and the like can be avoided, and the accuracy of lift-off detection is improved.
The embodiments of the present disclosure may be applied to an electronic device including a touch screen, or to the touch-screen IC of such a device. The touch screens mentioned in the present disclosure include In-Cell, On-Cell, OGS (One Glass Solution), G/G (Cover Glass/Sensor Glass, which may also be called a touch functional layer), GFF (cover glass with conductive film layers), and other types. In-Cell embeds the touch function into the liquid-crystal pixels of the display, that is, the touch sensor is embedded inside the display panel; On-Cell embeds the touch layer between the color-filter substrate and the polarizer of the display, that is, the touch function is placed on the liquid-crystal panel. OGS integrates the touch screen with the protective glass, plating an ITO conductive layer on the inner side of the protective glass. G/G and GFF touch screens mount the touch-sensing device of the touch-screen system on top of the display. A signal acquisition module is a unit that acquires the touch signal and may be, for example, a touch sensor electrode.
In the embodiments of the present disclosure, the term "electronic device" covers various touch-screen devices, including handheld devices, vehicle-mounted devices, wearable devices, and computing devices, such as smartphones, tablet computers, smart watches, PDAs (Personal Digital Assistants), e-book readers, and multimedia players.
As shown in fig. 1, fig. 1 is a flowchart illustrating a touch event recognition method according to an exemplary embodiment of the present disclosure, which may include the steps of:
in step 102, if a touch signal is currently detected, judging, by using the touch-signal areas of several consecutive frames including the current frame, whether the attenuation of the touch-signal area satisfies a specified area-attenuation condition, where the touch-signal area is determined by the number of signal acquisition modules whose acquired values are greater than a specified signal threshold;
in step 104, if the specified area-attenuation condition is satisfied, determining that the current touch event is a touch-up event.
When a touch object touches the screen, the signal acquisition modules covered by the touch object detect a touch signal. Once a touch signal is detected, the variation trend of the touch-signal area, for example its attenuation, can be obtained from the touch-signal areas of multiple frames, and that attenuation can be used to judge whether the current touch event is a touch-up event.
The consecutive frames may be two or more frames and include at least the current frame. The specified area-attenuation condition is a condition for judging the degree of attenuation of the touch-signal area; a touch-up event is determined only when the attenuation reaches a certain degree, so that more of the factors affecting the coordinates are taken into account while the touch object is leaving the screen, and the reported point is more accurate.
Regarding the specified area-attenuation condition, in one embodiment it may include: within M consecutive frames, the touch-signal areas decrease monotonically in time order. Taking M = 5 as an example: the touch-signal area of frame 5 is smaller than that of frame 4, the area of frame 4 is smaller than that of frame 3, the area of frame 3 is smaller than that of frame 2, and the area of frame 2 is smaller than that of frame 1. In this embodiment, if the touch-signal areas of consecutive frames show a continuously decreasing trend, the current touch operation may be judged to be a lift-off, and accordingly the current touch event is a touch-up event.
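The monotonic-decrease form of the condition can be sketched as follows; the function name and the list-of-areas representation are illustrative assumptions.

```python
def areas_monotonically_decreasing(areas):
    """True if the touch-signal areas of M consecutive frames, given in
    time order, strictly decrease from each frame to the next: the first
    form of the specified area-attenuation condition."""
    return all(later < earlier for earlier, later in zip(areas, areas[1:]))

print(areas_monotonically_decreasing([40, 31, 22, 14, 6]))  # True
print(areas_monotonically_decreasing([40, 31, 31, 14, 6]))  # False
```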
In another embodiment, the specified area-attenuation condition may include: for two specified frames among the M consecutive frames, the ratio of the touch-signal area of the later frame to that of the earlier frame is less than or equal to a set ratio.
In this embodiment, the degree of attenuation of the touch-signal area is judged from the ratio of the later frame's touch-signal area to the earlier frame's touch-signal area.
Here, one or more pairs of frames may be specified among the M frames.
For example, the two specified frames may be: the earlier frame is the first of the M consecutive frames, and the later frame is the current frame. The first frame is the earliest of the M consecutive frames in time order. Every window of M consecutive frames has a first frame, and the first frames of different windows generally differ.
As shown in fig. 2, a schematic diagram of a signal-area change according to an exemplary embodiment of the present disclosure: taking M = 3 and a finger as the touch object, the three consecutive frames may be frame i+1 (the first frame), frame i+2 (the second frame), and frame i+3 (the third frame, i.e., the current frame). Within these three frames, the ratio of the current frame's touch-signal area (area 3 for short) to the first frame's touch-signal area (area 1 for short) is compared with the set ratio to judge whether the specified area-attenuation condition is satisfied.
In this embodiment, comparing the current frame with the first frame and regarding the operation as a lift-off only when the area has shrunk to a certain percentage improves judgment accuracy.
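The first-frame-versus-current-frame variant can be sketched as follows; the set ratio of 0.6 is an arbitrary illustrative value, not one given by the patent.

```python
def decayed_vs_first_frame(areas, set_ratio=0.6):
    """True if the current (last) frame's touch-signal area is at most
    set_ratio times the first frame's area within the M-frame window,
    i.e., the area has shrunk to the set percentage or below."""
    return areas[-1] <= set_ratio * areas[0]

print(decayed_vs_first_frame([50, 40, 28]))  # True  (28 <= 30)
print(decayed_vs_first_frame([50, 45, 40]))  # False (40 >  30)
```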
Further examples of the two specified frames among the M consecutive frames are given below.
In another example, the two specified frames are any two adjacent frames among the M consecutive frames. Correspondingly, for every two adjacent frames, the ratio of the later frame's touch-signal area to the earlier frame's touch-signal area must be less than or equal to the set ratio.
Still taking M = 3 as an example, let the touch-signal areas of frames 1, 2, and 3 be area 1, area 2, and area 3. The specified area-attenuation condition is satisfied when the ratio of area 3 to area 2 and the ratio of area 2 to area 1 are both less than or equal to the set ratio.
In this embodiment, using the comparison results of the specified frame pairs to evaluate the condition confirms that the signal area has a continuous decay trend, improving the accuracy of lift-off judgment.
In another example, the two specified frames are any two frames among the M consecutive frames separated by an interval of P frames, with 1 ≤ P ≤ M-1.
Taking M = 5 and P = 1 as an example, let the touch-signal areas of frames 1 through 5 be area 1 through area 5. The specified area-attenuation condition is satisfied when the ratio of area 5 to area 3, the ratio of area 4 to area 2, and the ratio of area 3 to area 1 are each less than or equal to the set ratio.
In this embodiment, using the comparison results of the specified frame pairs to evaluate the condition confirms that the signal area has a continuous decay trend, improving the accuracy of lift-off judgment.
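The interval-of-P-frames variant (the M = 5, P = 1 example checks area 3/area 1, area 4/area 2, and area 5/area 3) can be sketched as follows; the function name and the set ratio are illustrative.

```python
def decayed_at_interval(areas, p, set_ratio):
    """True if, for every pair of frames separated by an interval of p
    frames (index gap p + 1), the later frame's touch-signal area is at
    most set_ratio times the earlier frame's area."""
    gap = p + 1
    return all(areas[i + gap] <= set_ratio * areas[i]
               for i in range(len(areas) - gap))

# M = 5, P = 1: compares area3/area1, area4/area2, and area5/area3.
print(decayed_at_interval([40, 30, 20, 15, 10], p=1, set_ratio=0.6))  # True
```

Setting the gap to 1 (p = 0) would reduce this to the adjacent-frames variant described above.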
In another example, the two specified frames are: the earlier frame is the first of the M consecutive frames, and the later frame is any non-first frame.
Still taking M = 3 as an example, with touch-signal areas area 1, area 2, and area 3 for frames 1, 2, and 3, the specified area-attenuation condition is satisfied when the ratio of area 3 to area 1 and the ratio of area 2 to area 1 are both less than or equal to the set ratio.
In this embodiment, requiring the ratio of every non-first frame to the first frame to be compared with the set ratio confirms a continuous decay trend of the signal area and improves the accuracy of lift-off judgment.
In another example, the two specified frames are: the earlier frame is any non-last frame among the M consecutive frames, and the later frame is the last frame. The last frame is the latest of the M consecutive frames in time order. Every window of M consecutive frames has a last frame, and the last frames of different windows generally differ.
Still taking M = 3 as an example, with touch-signal areas area 1, area 2, and area 3 for frames 1, 2, and 3, the specified area-attenuation condition is satisfied when the ratio of area 3 to area 1 and the ratio of area 3 to area 2 are both less than or equal to the set ratio.
In this embodiment, requiring the ratio of the last frame to every earlier frame to be compared with the set ratio confirms a continuous decay trend of the signal area and improves the accuracy of lift-off judgment.
It should be understood that the two specified frames among the M consecutive frames may also be chosen in other ways, which are not repeated here. The specified area-attenuation conditions above are only examples and should not be construed as limiting the present disclosure; other existing or future conditions for judging signal-area attenuation may be applied and shall fall within the scope of the present disclosure. The conditions may also be combined. For example, the specified area-attenuation condition may be: within M consecutive frames, the touch-signal areas decrease monotonically in time order, and, for two specified frames among the M consecutive frames, the ratio of the later frame's touch-signal area to the earlier frame's touch-signal area is less than or equal to the set ratio. Judging multiple conditions further avoids misjudging a lift-off because of jitter and similar disturbances.
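The combined condition mentioned at the end of this passage (monotonic decrease plus a last-to-first ratio check) might be sketched as follows; the choice of frame pair and the ratio value are illustrative assumptions.

```python
def specified_area_attenuation(areas, set_ratio=0.7):
    """Combined area-attenuation condition: the areas strictly decrease
    across the M-frame window AND the last frame's area is at most
    set_ratio times the first frame's area."""
    monotonic = all(b < a for a, b in zip(areas, areas[1:]))
    decayed = areas[-1] <= set_ratio * areas[0]
    return monotonic and decayed

print(specified_area_attenuation([30, 24, 18]))  # True  (18 <= 21)
print(specified_area_attenuation([30, 29, 28]))  # False (28 >  21)
```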
The specified signal threshold may be a threshold used for judging a touch lift event. In one example, the specified signal threshold used for different frames may be the same.
In practical application, during movement of the touch object, the signal is affected by noise and sensor gaps. In particular, the density of signal acquisition modules may differ between the middle area and the edge area of the same touch screen: when the touch object is directly above a signal acquisition module, the signal peak is larger but the pixel area where a signal exists may be smaller; when the touch object is between two signal acquisition modules, the signal peak is smaller but the pixel area where a signal exists may be larger. As another example, the signal area during movement is often smaller than the signal area when the touch object has just fallen onto the screen, since movement may result in fewer scanned signals. For this reason, in another example, the specified signal threshold may also be dynamically adjusted according to the attenuation of the touch signal area. For example, the signal threshold of any frame is obtained by multiplying a preset signal threshold by a specified adjustment coefficient. The preset signal threshold may be a predetermined threshold, for example a drop signal threshold, which is the threshold used to determine a touch drop event. The preset signal threshold may also be the signal threshold of the previous frame, and so on.
The embodiment can realize that different application scenes adopt different signal thresholds by dynamically adjusting the signal thresholds.
The specified adjustment coefficient may be a coefficient defining the adjustment magnitude of the signal threshold. For example, the specified adjustment coefficient is positively correlated with the attenuation of the touch signal area: if the touch signal area becomes smaller, the signal threshold can be adjusted downward; if the touch signal area becomes larger, the signal threshold can be adjusted upward. This embodiment gives more weight to the case of signal reduction, so that the threshold used for judging a touch lift event is smaller than the threshold used for judging a touch drop event, which enhances report-point stability and makes coordinate jitter during the lift action less likely.
In one example, the specified adjustment coefficient may be determined using the following formula:

specified adjustment coefficient = (touch signal area of the current frame) / (touch signal area of the previous frame)

In this embodiment, the specified adjustment coefficient is determined by the ratio of the touch signal area of the current frame to the touch signal area of the previous frame, which improves the speed of determining the specified adjustment coefficient.
It can be understood that the specified adjustment coefficient may also be determined by other means; for example, a fixed increase coefficient and a fixed decrease coefficient are preset, and when the touch signal area decreases, the fixed decrease coefficient is used to adjust the signal threshold, while when the touch signal area increases, the fixed increase coefficient is used. These threshold adjustments are only examples and should not be understood as limiting the present disclosure; other existing or future ways of dynamically adjusting the signal threshold according to the area variation trend may also be applied to the present disclosure and shall fall within its protection scope.
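Under the ratio-based reading of the formula above, the dynamic adjustment might be sketched as follows; the function name, and the use of the drop signal threshold as the preset threshold, are illustrative assumptions:

```python
def adjusted_threshold(drop_threshold, area_curr, area_prev):
    """Dynamically adjusted lift-detection threshold for the current frame."""
    # Coefficient = current-frame area / previous-frame area: it shrinks the
    # threshold when the area decays and enlarges it when the area grows.
    coeff = area_curr / area_prev
    return drop_threshold * coeff
```

For instance, if the area decays from 10 to 8 modules, a drop threshold of 100 is reduced to 80 for the lift judgment of the current frame.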
The touch signal area is determined by the number of signal acquisition modules with acquisition values larger than a specified signal threshold. In one example, the touch signal area may be directly the number of signal acquisition modules whose acquisition value is greater than a specified signal threshold. In another embodiment, the area of the touch signal may also take into account the area of the signal acquisition module.
Considering that the touch signal area may be determined by the number of signal acquisition modules with acquisition values greater than the specified signal threshold, and the signal threshold of the current frame may be obtained by multiplying the falling signal threshold by the specified adjustment coefficient, for this reason, in one example, in the process of determining the touch signal area of the current frame, the specified signal threshold used is the signal threshold of the previous frame. Correspondingly, the determining process of the touch signal area of the current frame comprises the following steps: and determining the number of the signal acquisition modules with the acquisition values larger than the signal threshold of the previous frame from the signal acquisition modules which detect the signals currently, and further obtaining the touch signal area of the current frame according to the determined number.
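The area computation described above, using the previous frame's signal threshold, can be sketched as follows; the names are assumptions and each acquisition value is taken to be one module's reading:

```python
def touch_signal_area(acquisition_values, prev_frame_threshold):
    """Touch signal area of the current frame: the number of signal
    acquisition modules whose acquisition value exceeds the previous
    frame's signal threshold."""
    return sum(1 for v in acquisition_values if v > prev_frame_threshold)
```

With readings [5, 12, 30, 7] and a previous-frame threshold of 10, two modules exceed the threshold, giving an area of 2.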
As shown in fig. 2, the signal threshold 1 may be a signal threshold of the ith frame, the signal threshold 2 may be a signal threshold of the (i + 1) th frame, and the signal threshold 3 may be a signal threshold of the (i + 2) th frame.
In this way, the touch signal area of the current frame is determined using the signal threshold of the previous frame, so that the judgment of the touch lift event for the current frame takes into account both the signal threshold of the previous frame and the signal area change trend, improving judgment accuracy.
In one embodiment, when the current touch event is not a touch up event but a touch move event, the signal threshold of the current frame is obtained by multiplying the drop signal threshold by a specified adjustment coefficient.
In this embodiment, the signal threshold for the current frame is dynamically adjusted if it is determined that the current touch event is a touch move event. After the signal threshold of the current frame is obtained, the position of the touch point can be calculated by using the signal threshold, so that whether the point is reported or not is determined, and the stability of the point reporting is enhanced.
As to how to judge a touch move event: if the current touch event is determined not to be a touch lift event, the coordinates of the current frame can be predicted from the signals of the previous frames, and the predicted coordinates of the current frame compared with the actual coordinates of the current frame; whether the touch point belongs to the same touch object or to a new touch object is then judged according to whether the coordinate difference falls within a preset range. If the touch point is touched by the same touch object, the current touch event is judged to be a touch move event. It should be understood that other means in the related art may also be used to determine a touch move event, which are not described here again.
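The predicted-versus-actual comparison can be sketched as below; the linear-extrapolation predictor and the distance threshold are assumptions, since the disclosure leaves the prediction method open:

```python
def is_touch_move(prev2, prev1, actual, max_diff):
    """prev2, prev1: touch coordinates of the two previous frames;
    actual: measured coordinate of the current frame."""
    # Assumed predictor: linearly continue the motion of the last two frames.
    predicted = (2 * prev1[0] - prev2[0], 2 * prev1[1] - prev2[1])
    diff = ((predicted[0] - actual[0]) ** 2
            + (predicted[1] - actual[1]) ** 2) ** 0.5
    # Within the preset range -> same touch object -> touch move event.
    return diff <= max_diff
```

A point continuing along the previous trajectory is classified as the same object moving; a far-away point is treated as a new touch object.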
With respect to touch down events, in one embodiment, the method further comprises:
aiming at the same touch object, if a touch falling event is not detected, when a touch signal is detected, determining a touch signal value by an acquisition value currently acquired by a signal acquisition module;
and comparing the determined touch signal value with a falling signal threshold value, and identifying whether the current touch event is a touch falling event according to the comparison result.
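A minimal sketch of this drop-event check follows; taking the peak acquisition value as the touch signal value is an assumption, since the disclosure only says the value is determined from the current acquisition values:

```python
def is_touch_drop(acquisition_values, drop_threshold):
    """True if the touch signal value (assumed: peak acquisition value)
    exceeds the fixed drop signal threshold."""
    return max(acquisition_values) > drop_threshold
```

Note the drop threshold here is fixed, whereas the lift judgment uses the dynamically reduced threshold, matching the "larger threshold for drop, smaller for lift" design.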
Fig. 3 is a diagram illustrating that a finger drop and a finger lift are judged using different thresholds. In this embodiment, the signal threshold used for the touch lift event is obtained by adjusting the drop signal threshold according to the signal area, so the drop action and the lift action are determined using different thresholds: the drop determination uses a larger threshold and the lift determination uses a smaller one. This enhances report-point stability and makes coordinate jitter during the lift action less likely.
In order to ensure point stability, a range, called the point stability range of the Nth frame, may be set based on the report point coordinates of the Nth frame; for points of the (N+1)th frame that do not exceed this range, the coordinates of the Nth frame are kept unchanged, so as to avoid coordinate changes caused by small movements.
In order to ensure hand-following, it is desirable that the coordinates of the touch point in any frame are as close as possible to the actual position of the touch object, which calls for a smaller point stability range, or removing the point stability range altogether.
To ensure that the touch point follows the hand movement, a new touch point position is calculated on every minute movement; however, frequent updates of the touch point position may cause the touch point to jitter. The inventors have found that determining the touch point coordinates with a fixed-size point stability range each time is detrimental to hand-following. Fig. 4A is a schematic diagram of the point stability range. Based on the report point of the Nth frame, there is a fixed-size point stability range (within the dotted circle); if the report point of the (N+1)th frame is still within this range, its coordinates are kept the same as those of the Nth frame. The coordinates of the (N+1)th frame and the Nth frame are therefore identical, no movement is produced, and point stability is ensured. However, if the (N+1)th-frame report point is just at the edge of the point stability range, a large distance arises between the reported coordinate and the coordinate where the hand actually touches the screen, such as the distance between the Nth-frame report point and the (N+1)th-frame report point in fig. 4A, which is unfavorable for hand-following.
In view of this, the embodiments of the present disclosure further provide a touch point position determination scheme. According to the position change information of touch points in at least two frames before the (N+1)th frame of the touch signal, the action between the Nth frame and the (N+1)th frame is predicted to be either jitter during the touch object's stay or a slide. If the action is jitter during the touch object's stay, the position of the Nth-frame touch point is taken as the report point position of the (N+1)th-frame touch point; if the action is a slide, the position of the (N+1)th-frame touch point, or a position close to it, is taken as the report point position of the (N+1)th-frame touch point. By pre-judging jitter and sliding, the scheme preserves point stability and avoids tiny coordinate jitter, while still updating the point coordinates to preserve hand-following, so that the reported coordinate is not left far from the actual touch point coordinate merely to guarantee point stability.
As shown in fig. 4B, fig. 4B is a flowchart illustrating a touch point touch location determining method according to an exemplary embodiment of the present disclosure, which provides a touch point touch location determining method based on the foregoing embodiments, and the method may include the following steps:
in step 402, if the current event is a touch movement event, predicting the motion between the nth frame and the (N + 1) th frame as the jitter or the sliding of the touch object in the staying process according to the position change information of the touch point in at least two frames before the (N + 1) th frame of the touch signal;
in step 404, if the touch object is jittered during the staying process, the position of the touch point of the nth frame is taken as the touch point reporting position of the touch point of the (N + 1) th frame;
in step 406, if the action is a slide, the position of the touch point of the (N + 1) th frame, or a position close to it, is taken as the report point position of the touch point of the (N + 1) th frame.
Wherein the at least two frames include the Nth frame and one or more frames preceding the Nth frame. It should be understood that this embodiment uses the data of the first few frames to predict the action between the last two frames, and then determines the report point position of the touch point in the last frame. For ease of distinction, the last two frames are named the Nth frame and the (N + 1) th frame.
In one embodiment, the at least two frames include an N-1 th frame and an nth frame, so as to realize that the motion of the two subsequent frames is predicted by using the data of the two previous frames, that is, the motion between the nth frame and the (N + 1) th frame is predicted to be jitter or sliding in the process of touch object stopping by using the position change information of the touch points of the two frames.
Jitter during the touch object's stay may refer to tiny coordinate changes of the touch object during sliding; these are actions expected to be ignored. Sliding may refer to slightly larger coordinate changes; these are the actions expected to be embodied by the touch point. The movement of the touch object is often inertial, so the position change information of previous frames can be used to predict whether the action between the Nth frame and the (N + 1) th frame is jitter during the touch object's stay or a slide.
In one example, whether the action between the Nth frame and the (N + 1) th frame is jitter during the touch object's stay or a slide may be determined from the position of the touch point of the (N + 1) th frame and the distance between the touch point of the Nth frame and the touch point of the (N-1)th frame. It should be understood that the determination need not be limited to the distance between the Nth-frame and (N-1)th-frame touch points; distances between touch points of other frames among the at least two frames, or other factors, may also be used to judge whether the action between the Nth frame and the (N + 1) th frame is jitter during the touch object's stay or a slide.
In an example, the predicting, according to the position change information of the touch point in at least two frames before the N +1 th frame of the touch signal, that the action between the N th frame and the N +1 th frame is jitter or sliding in the process of stopping the touch object includes:
determining a point stable range of the nth frame by using a value obtained by converting a distance between the touch point of the nth frame and the touch point of the (N-1) th frame as a range reference amount and the touch point of the nth frame as a reference point;
if the position of the touch point of the (N + 1) th frame is within the determined point stability range, the action between the Nth frame and the (N + 1) th frame is jitter during the touch object's stay; otherwise, the action between the Nth frame and the (N + 1) th frame is a slide.
The position change information of the touch points in the at least two frames may include the distance D between the touch point of the Nth frame and the touch point of the (N-1)th frame. The range reference may be a quantity used for determining the point stability range, and may be a radius value, a minor-axis value, a major-axis value, or the like, determined according to the shape of the point stability range. For example, the shape of the point stability range may include, but is not limited to, a circle, an ellipse, and the like. The range reference can be converted from the distance between the touch point of the Nth frame and the touch point of the (N-1)th frame; for example, it may be a function of this distance. For instance, the range reference is obtained as the product of the distance and a designated proportion value, the designated proportion value being less than or equal to 1.
Regarding whether the position of the touch point of the (N + 1) th frame is within the determined point stability range: for example, the distance between the touch point of the (N + 1) th frame and the touch point of the Nth frame may be directly compared with the range reference.
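The jitter-versus-slide test can be sketched as follows; the proportion value `alpha` and the function name are illustrative assumptions:

```python
import math

def is_jitter(p_prev, p_n, p_next, alpha=0.5):
    """p_prev, p_n, p_next: touch points of frames N-1, N and N+1.
    alpha: designated proportion value, assumed 0.5 (must be <= 1)."""
    d = math.dist(p_prev, p_n)      # spacing D between frames N-1 and N
    radius = alpha * d              # point stability range of frame N
    # Jitter if the N+1 point stays within frame N's stability radius.
    return math.dist(p_n, p_next) <= radius
```

A small step after a large one counts as jitter; a step comparable to or larger than the previous one counts as a slide.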
Fig. 4C is a schematic diagram illustrating touch point positions across multiple frames according to an exemplary embodiment of the present disclosure. The radius R of the point stability range of the Nth frame is determined using the following formula:

R = α × D

where α denotes the designated proportion value and D denotes the distance between the touch points of the (N-1)th frame and the Nth frame. If the distance between the Nth frame and the (N + 1) th frame is less than or equal to the radius R, the action tends to be a single tiny jitter; point stability is weighted more heavily, and the reported coordinate is kept closer to the position of the Nth frame. If the distance between the Nth frame and the (N + 1) th frame is greater than the radius R, the action is regarded as a single sliding behavior; hand-following is emphasized, and the reported coordinate is kept closer to the position of the (N + 1) th frame. The designated proportion value may be determined according to the platform, the electronic device, or other requirements. For example, if fast hand-following is pursued, a smaller designated proportion value is better, within limits; if stability is pursued, or the hardware of the electronic device imposes constraints, the designated proportion value may be larger, even up to 1.
It should be understood that the range reference may also be determined from distances between touch points of other frames among the at least two frames, or from other data of the at least two frames, which are not described here again.
Regarding the position of the touch point of the (N + 1) th frame: in one embodiment, it is the actual position of the touch point of the (N + 1) th frame, and the (N + 1) th frame may be the current frame. In this embodiment, after the actual touch point positions of the Nth frame and earlier frames, together with the actual position of the (N + 1) th-frame touch point, are obtained, the action between the Nth frame and the (N + 1) th frame can be predicted using the actual touch point positions of the previous frames. Specifically, the point stability range of the Nth frame may be adjusted using the actual touch point positions of the previous frames, and the report point position of the (N + 1) th-frame touch point determined by comparing the actual position of the (N + 1) th frame with the point stability range of the Nth frame. For example, when the (N + 1) th frame is the current frame and its touch signal has been detected, if the touch signal of the (N + 1) th frame is to be reported, the report point position of the (N + 1) th-frame touch point is determined by comparing the actual position of the (N + 1) th frame with the point stability range of the Nth frame.
In another embodiment, the position of the touch point of the (N + 1) th frame may be obtained by prediction from the actual touch point positions of at least two frames before the (N + 1) th frame, referred to simply as the predicted position of the (N + 1) th frame. In this embodiment, once the actual touch point positions of the Nth frame and earlier frames are obtained, the position of the (N + 1) th frame may be predicted from them; the point stability range of the Nth frame may be adjusted using the actual touch point positions of the previous frames, and the report point position of the (N + 1) th-frame touch point determined by comparing the predicted position of the (N + 1) th frame with the point stability range of the Nth frame. For example, the Nth frame is the current frame, and when the Nth-frame touch signal is detected, the position of the touch point of the (N + 1) th frame is predicted.
In this way, the report point position of the (N + 1) th frame can be reported without obtaining the actual position of the (N + 1) th-frame touch point.
Regarding how to predict the position of the touch point of the (N + 1) th frame: in one example, it can be obtained by a weighted summation of the position of the (N-1)th-frame touch point and the position of the Nth-frame touch point. For example, the position of the (N-1)th-frame touch point is multiplied by a first weight value, the position of the Nth-frame touch point is multiplied by a second weight value, and the two products are added to obtain the position of the (N + 1) th-frame touch point. The first and second weight values may be configured according to actual requirements: if fast hand-following is pursued, the second weight value may be greater than the first; if accuracy needs to be ensured, the second weight value may equal the first; and so on. Weighted summation of the two positions allows the position of the (N + 1) th-frame touch point to be obtained quickly.
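The weighted summation can be sketched as follows; the concrete weight values are illustrative, since the disclosure only requires them to be configurable (for example, a larger second weight for faster hand-following):

```python
def predict_next(p_prev, p_curr, w1=0.25, w2=0.75):
    """Predict the (N+1)th-frame touch position as a weighted sum of the
    (N-1)th-frame position (weight w1) and the Nth-frame position (w2).
    The default weights are assumptions."""
    return (w1 * p_prev[0] + w2 * p_curr[0],
            w1 * p_prev[1] + w2 * p_curr[1])
```

With w2 > w1, the predicted point sits nearer the most recent frame, favoring responsiveness over smoothing.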
It should be understood that other existing or future methods of determining the location of the touch point of the (N + 1) th frame may be applied to the present disclosure, and all of them are included in the scope of the present disclosure. For example, the position of the touch point of the (N + 1) th frame and the like are predicted from the touch point velocity, acceleration and the like determined from the multi-frame data.
The report point position of a touch point may be the position reported to the processing module for that touch point, for example the report coordinates. For distinction, the position reported to the processing module is referred to as the report point position.
If the action is jitter during the touch object's stay, the position of the Nth-frame touch point is taken as the report point position of the (N + 1) th-frame touch point, thereby avoiding touch point position updates caused by jitter. If the action is a slide, the position of the (N + 1) th-frame touch point, or a position close to it, is taken as the report point position of the (N + 1) th-frame touch point. A close position may be a position within a specified range of the (N + 1) th-frame touch point position; for example, a position between the Nth-frame and (N + 1) th-frame touch points, close to the (N + 1) th-frame touch point, may be used as the report point position of the (N + 1) th-frame touch point.
After the report point position of the (N + 1) th-frame touch point is determined, it can be reported to the processing module. If the report point position of the (N + 1) th-frame touch point is the same as that of the Nth-frame touch point, the original touch point is kept still; if it is different, the touch point moves.
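Putting the pieces together, the report-point selection for frame N+1 might be sketched as below; the radius test mirrors the R = α × D rule above, and the proportion value and names are assumptions:

```python
import math

def report_position(p_prev, p_n, p_next, alpha=0.5):
    """p_prev, p_n, p_next: touch points of frames N-1, N and N+1.
    Returns the report point position for frame N+1."""
    radius = alpha * math.dist(p_prev, p_n)   # stability range of frame N
    if math.dist(p_n, p_next) <= radius:
        return p_n       # jitter during the stay: keep the old coordinate
    return p_next        # slide: follow the new coordinate
```

Returning `p_next` directly corresponds to the "position of the (N+1)th-frame touch point" branch; an implementation could equally return a point between `p_n` and `p_next` for the "close position" variant.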
The various technical features in the above embodiments can be combined arbitrarily as long as there is no conflict or contradiction between them; limited by space, the combinations are not described one by one. Any combination of the technical features in the above embodiments therefore also belongs to the scope disclosed in this specification.
As shown in fig. 5, fig. 5 is a flowchart of another touch event recognition method shown in the present disclosure according to an exemplary embodiment, which may be used in a terminal, and the method is exemplified by one of the combinations on the basis of the foregoing embodiments, and includes the following steps:
in step 502, for the same touch object, if a touch drop event is not detected, when a touch signal is detected, determining a touch signal value from an acquisition value currently acquired by a signal acquisition module; and comparing the determined touch signal value with a falling signal threshold value, and identifying whether the current touch event is a touch falling event according to the comparison result.
In step 504, for the same touch object, if a touch drop event has been detected, when a touch signal is detected, determining whether the attenuation of the touch signal area satisfies a specified area attenuation condition by using the area of the touch signal of several consecutive frames including the current frame, where the touch signal area is determined by the number of signal acquisition modules whose acquisition values are greater than a specified signal threshold;
in the process of determining the touch signal area of the current frame, the used designated signal threshold is the signal threshold of the previous frame;
the signal threshold value for any frame is obtained by multiplying a drop signal threshold value, which is a threshold value used for determining a touch drop event, by a specified adjustment coefficient that has a positive correlation with the attenuation of the touch signal area.
In step 506, if the designated area attenuation condition is satisfied, it is determined that the current touch event is a touch-up event.
In step 508, if the designated area attenuation condition is not satisfied, it may be determined whether the current touch event is a touch move event according to the predicted current touch point position and the actual current touch point position.
The parts of fig. 5 that are the same as the related description of fig. 1 are not repeated herein.
The embodiments of the present disclosure add logic for judging the area change trend when the touch object is lifted; during movement of the touch object, different signal threshold calculation methods are adopted in different scenes; and different thresholds are used for judging the finger's drop and lift. This improves the judgment accuracy of the lift event and ensures the stability of touch coordinate reporting.
As shown in fig. 6, fig. 6 is a flowchart illustrating another method for determining a touch point touch location according to an exemplary embodiment of the present disclosure, which describes another method how to determine a touch point touch location based on the foregoing embodiment, and includes the following steps:
in step 402, determining a point stability range of the nth frame by using a value converted from a distance between a touch point of the nth frame and a touch point of the (N-1) th frame as a range reference amount and a touch point of the nth frame as a reference point;
in step 404, if the position of the touch point of the N +1 th frame is within the determined point stability range, the position of the touch point of the N +1 th frame is taken as the touch point position of the touch point of the N +1 th frame;
in step 406, if the position of the touch point of the (N + 1) th frame is not within the determined point stability range, the actual position of the touch point of the (N + 1) th frame is taken as the touch point position of the touch point of the (N + 1) th frame.
This embodiment improves report-point stability while improving the hand-following of finger-movement report points. It can avoid the situation where the finger slides and then lifts vertically but the lifted coordinate jitters and is not stable enough; it can also avoid situations where the finger slides and leaves naturally in a non-vertical way, the acceleration and velocity are judged wrongly, and a fast slide turns into a slow one or a slow slide into a fast one.
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently.
Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
Corresponding to the embodiment of the touch event recognition method, the disclosure also provides embodiments of a touch event recognition device, equipment applied by the device and a storage medium.
As shown in fig. 7, fig. 7 is a block diagram of a touch event recognition apparatus according to an exemplary embodiment of the present disclosure, the apparatus including:
a condition determining module 72 configured to determine whether the attenuation amount of the touch signal area satisfies a specified area attenuation condition by using the touch signal area of several consecutive frames including the current frame if the touch signal is currently detected, where the touch signal area is determined by the number of signal collecting modules whose collection values are greater than a specified signal threshold;
an event identification module 74 configured to determine that the current touch event is a touch up event if the specified area attenuation condition is satisfied.
In one embodiment, the specified area attenuation condition includes one or more of the following conditions:
in the consecutive M frames, the touch signal areas decrease successively in time order;

for two specified frames among the consecutive M frames, the ratio of the touch signal area of the later frame to the touch signal area of the earlier frame is less than or equal to a set ratio.
In one embodiment, the two specified frames among the consecutive M frames include any one of the following:

the earlier frame is the first frame among the consecutive M frames, and the later frame is the current frame;

any two adjacent frames among the consecutive M frames;

any two frames among the consecutive M frames separated by an interval of P frames, where 1 ≤ P ≤ M-1;

the earlier frame is the first frame among the consecutive M frames, and the later frame is any non-first frame;

the earlier frame is any non-last frame among the consecutive M frames, and the later frame is the last frame.
In one embodiment, in the process of determining the touch signal area of the current frame, the used designated signal threshold is the signal threshold of the previous frame;
the signal threshold value for any frame is obtained by multiplying a drop signal threshold value, which is a threshold value used for determining a touch drop event, by a specified adjustment coefficient that has a positive correlation with the attenuation of the touch signal area.
In one embodiment, the event identification module 74 is configured to:
when the current touch event is not a touch up event but a touch move event, the signal threshold of the current frame is obtained by multiplying the falling signal threshold by a specified adjustment coefficient.
In one embodiment, the event identification module 74 is configured to:
for a given touch object, if no touch-down event has been detected, determine, when a touch signal is detected, a touch signal value from the acquisition values currently collected by the signal collecting module;
and compare the determined touch signal value with the touch-down signal threshold, and identify from the comparison result whether the current touch event is a touch-down event.
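A sketch of this touch-down check (taking the maximum node acquisition value as the touch signal value is one illustrative choice; the disclosure leaves the exact derivation of the touch signal value open):

```python
def is_touch_down(acquisition_values, down_threshold):
    """Before a touch-down event has been registered for a touch object,
    derive a touch signal value from the currently collected node values
    (here: their maximum) and compare it with the touch-down threshold."""
    if not acquisition_values:
        return False
    return max(acquisition_values) > down_threshold
```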
In one embodiment, the apparatus further comprises a position determination module (not shown in fig. 7) configured to: if the current event is a touch move event, predict, from the position change information of the touch points in at least two frames preceding the (N+1)th frame of the touch signal, whether the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place, or sliding;
if it is jitter while the touch object stays in place, take the touch point position of the Nth frame as the reported position of the touch point of the (N+1)th frame;
if it is sliding, take the touch point position of the (N+1)th frame, or a position close to it, as the reported position of the touch point of the (N+1)th frame.
In one embodiment, predicting, from the position change information of the touch points in at least two frames preceding the (N+1)th frame of the touch signal, whether the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place or sliding includes:
determining a point stability range of the Nth frame by taking a value converted from the distance between the touch point of the Nth frame and the touch point of the (N-1)th frame as a range reference amount and the touch point of the Nth frame as a reference point;
if the touch point position of the (N+1)th frame falls within the determined point stability range, the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place; otherwise, the motion between the Nth frame and the (N+1)th frame is sliding.
In one embodiment, the touch point position of the (N+1)th frame is obtained by prediction from the actual touch point positions in at least two frames preceding the (N+1)th frame of the touch signal.
In one embodiment, the touch point position of the (N+1)th frame is the actual touch point position of the (N+1)th frame, and the (N+1)th frame is the current frame.
In one embodiment, the range reference amount is obtained based on the product of the above distance and a specified scale value, the specified scale value being less than or equal to 1.
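The jitter/slide decision and the resulting reported position can be sketched as follows (a hypothetical helper: the scale value of 0.8, the circular stability range, and reporting the raw (N+1)th-frame position on a slide are illustrative choices within the embodiments above):

```python
import math

def classify_motion(p_prev, p_n, p_next, scale=0.8):
    """Classify the motion between frame N and frame N+1.

    The point stability range is a circle centered on the frame-N touch
    point; its radius converts the distance between the frame-(N-1) and
    frame-N touch points via a scale value <= 1 (the range reference).
    """
    radius = scale * math.dist(p_prev, p_n)   # range reference amount
    return "jitter" if math.dist(p_n, p_next) <= radius else "slide"

def reported_position(p_n, p_next, motion):
    """Keep the frame-N position while jittering in place; otherwise
    report the (predicted or actual) frame-(N+1) position."""
    return p_n if motion == "jitter" else p_next
```

Pinning the reported position during jitter suppresses cursor trembling while a finger rests on the panel, while genuine slides still track the moving touch point.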
For details of the implementation of the functions and actions of each module in the above apparatus, refer to the implementation of the corresponding steps in the above method; they are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, refer to the partial description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement this without inventive effort.
Accordingly, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
The present disclosure may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Accordingly, an embodiment of the present disclosure provides an electronic device, which includes: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of any of the methods described above.
Fig. 8 is a schematic diagram illustrating a structure of a touch event recognition apparatus 800 according to an exemplary embodiment. For example, the apparatus 800 may be a user device, embodied as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or a wearable device such as a smart watch, smart glasses, a smart bracelet, or smart running shoes.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the apparatus 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front-facing camera and the rear-facing camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800; the sensor assembly 814 may also detect a change in position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the above-mentioned communication component 816 further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 804, is also provided; when the instructions are executed by the processor 820 of the apparatus 800, the apparatus 800 is enabled to perform a touch event recognition method, the method comprising:
if a touch signal is currently detected, determining, using the touch signal areas of several consecutive frames including the current frame, whether the attenuation of the touch signal area satisfies a specified area attenuation condition, where the touch signal area is determined by the number of signal collecting modules whose acquisition values are greater than a specified signal threshold;
and if the specified area attenuation condition is satisfied, determining that the current touch event is a touch-up event.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A touch event recognition method, the method comprising:
if a touch signal is currently detected, determining, using the touch signal areas of several consecutive frames including the current frame, whether the attenuation of the touch signal area satisfies a specified area attenuation condition, where the touch signal area is determined by the number of signal collecting modules whose acquisition values are greater than a specified signal threshold;
if the specified area attenuation condition is satisfied, determining that the current touch event is a touch-up event;
wherein, in determining the touch signal area of the current frame, the specified signal threshold used is the signal threshold of the previous frame;
and the signal threshold of any frame is obtained by multiplying a touch-down signal threshold, i.e. the threshold used for determining a touch-down event, by a specified adjustment coefficient that is positively correlated with the attenuation of the touch signal area.
2. The method of claim 1, wherein the specified area attenuation condition comprises one or both of the following conditions:
in M consecutive frames, the touch signal areas decrease successively in time order;
for two specified frames among the M consecutive frames, the ratio of the touch signal area of the later frame to the touch signal area of the earlier frame is less than or equal to a set ratio.
3. The method of claim 2, wherein the two specified frames among the M consecutive frames are any one of the following:
the earlier frame is the first frame of the M consecutive frames, and the later frame is the current frame;
any two adjacent frames among the M consecutive frames;
any two frames among the M consecutive frames separated by P frames, where 1 ≤ P ≤ M-1;
the earlier frame is the first frame of the M consecutive frames, and the later frame is any non-first frame;
the earlier frame is any non-last frame of the M consecutive frames, and the later frame is the last frame.
4. The method of claim 1, further comprising:
when the current touch event is a touch move event rather than a touch-up event, obtaining the signal threshold of the current frame by multiplying the touch-down signal threshold by the specified adjustment coefficient.
5. The method of claim 1, further comprising:
for a given touch object, if no touch-down event has been detected, determining, when a touch signal is detected, a touch signal value from the acquisition values currently collected by the signal collecting module;
and comparing the determined touch signal value with the touch-down signal threshold, and identifying from the comparison result whether the current touch event is a touch-down event.
6. The method according to any one of claims 1 to 5, further comprising:
if the current event is a touch move event, predicting, from the position change information of the touch points in at least two frames preceding the (N+1)th frame of the touch signal, whether the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place, or sliding;
if it is jitter while the touch object stays in place, taking the touch point position of the Nth frame as the reported position of the touch point of the (N+1)th frame;
if it is sliding, taking the touch point position of the (N+1)th frame, or a position close to it, as the reported position of the touch point of the (N+1)th frame.
7. The method according to claim 6, wherein predicting, from the position change information of the touch points in at least two frames preceding the (N+1)th frame of the touch signal, whether the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place or sliding comprises:
determining a point stability range of the Nth frame by taking a value converted from the distance between the touch point of the Nth frame and the touch point of the (N-1)th frame as a range reference amount and the touch point of the Nth frame as a reference point;
if the touch point position of the (N+1)th frame falls within the determined point stability range, the motion between the Nth frame and the (N+1)th frame is jitter while the touch object stays in place; otherwise, the motion between the Nth frame and the (N+1)th frame is sliding.
8. The method according to claim 7, wherein the touch point position of the (N+1)th frame is obtained by prediction from the actual touch point positions in at least two frames preceding the (N+1)th frame of the touch signal.
9. The method according to claim 7, wherein the touch point position of the (N+1)th frame is the actual touch point position of the (N+1)th frame, and the (N+1)th frame is the current frame.
10. The method according to any one of claims 7 to 9, wherein the range reference amount is obtained based on the product of the above distance and a specified scale value, the specified scale value being less than or equal to 1.
11. A touch event recognition apparatus, the apparatus comprising:
a condition determining module configured to, if a touch signal is currently detected, determine whether the attenuation of the touch signal area satisfies a specified area attenuation condition using the touch signal areas of several consecutive frames including the current frame, where the touch signal area is determined by the number of signal collecting modules whose acquisition values are greater than a specified signal threshold;
an event identification module configured to determine that the current touch event is a touch-up event if the specified area attenuation condition is satisfied;
wherein, in determining the touch signal area of the current frame, the specified signal threshold used is the signal threshold of the previous frame;
and the signal threshold of any frame is obtained by multiplying a touch-down signal threshold, i.e. the threshold used for determining a touch-down event, by a specified adjustment coefficient that is positively correlated with the attenuation of the touch signal area.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
if a touch signal is currently detected, determine, using the touch signal areas of several consecutive frames including the current frame, whether the attenuation of the touch signal area satisfies a specified area attenuation condition, where the touch signal area is determined by the number of signal collecting modules whose acquisition values are greater than a specified signal threshold;
if the specified area attenuation condition is satisfied, determine that the current touch event is a touch-up event;
wherein, in determining the touch signal area of the current frame, the specified signal threshold used is the signal threshold of the previous frame;
and the signal threshold of any frame is obtained by multiplying a touch-down signal threshold, i.e. the threshold used for determining a touch-down event, by a specified adjustment coefficient that is positively correlated with the attenuation of the touch signal area.
13. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
CN201910877252.5A 2019-09-17 2019-09-17 Touch event identification method, device, equipment and storage medium Active CN112527180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910877252.5A CN112527180B (en) 2019-09-17 2019-09-17 Touch event identification method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112527180A CN112527180A (en) 2021-03-19
CN112527180B true CN112527180B (en) 2022-03-08

Family

ID=74974837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910877252.5A Active CN112527180B (en) 2019-09-17 2019-09-17 Touch event identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112527180B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101464750A (en) * 2009-01-14 2009-06-24 苏州瀚瑞微电子有限公司 Method for gesture recognition through detecting induction area of touch control panel
CN102612676A (en) * 2009-08-21 2012-07-25 李圣昊 Method and device for recognizing touch input
CN102902469A (en) * 2011-07-25 2013-01-30 宸鸿光电科技股份有限公司 Gesture recognition method and touch system
CN103164067A (en) * 2011-12-19 2013-06-19 联想(北京)有限公司 Method for judging touch input and electronic device
CN104317466A (en) * 2014-10-28 2015-01-28 深圳市汇顶科技股份有限公司 Touch screen control method and touch screen device
CN109669567A (en) * 2017-10-13 2019-04-23 华为技术有限公司 A kind of control method and terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Large-Area Capacitive Active Touch Panel Using the Method of Pulse Overlapping Detection; Ya-Hsiang Tai; Journal of Display Technology; 20130530; pp. 144-147 *
Research on gesture interaction design for touch-screen mobile phones based on human behavior; Sun Yan; Packaging Engineering; 20150720; pp. 55-59 *

Also Published As

Publication number Publication date
CN112527180A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
WO2017124773A1 (en) Gesture recognition method and device
US10739854B2 (en) Terminal and touch response method and device
WO2021051650A1 (en) Method and apparatus for association detection for human face and human hand, electronic device and storage medium
CN106293283B (en) Key triggering method and device and terminal
CN107656666B (en) Mobile terminal and scrolling speed determination method
EP3046042A1 (en) Apparatus for implementing home button and fingerprint identification on single sensor
CN108255369B (en) Method and device for displaying fingerprint icon in screen and computer readable storage medium
CN109189185B (en) Terminal temperature adjusting method and device
RU2683979C2 (en) Method and device for detecting pressure
CN107402711B (en) Volume adjusting method and device and computer readable storage medium
US10061497B2 (en) Method, device and storage medium for interchanging icon positions
CN108319885B (en) Fingerprint identification method and device
CN112527180B (en) Touch event identification method, device, equipment and storage medium
EP3796139A1 (en) Method and apparatus for determining point report position of touch point, and electronic device
CN107402677B (en) Method and device for recognizing finger lifting in touch operation and terminal
CN116204073A (en) Touch control method, touch control device, electronic equipment and storage medium
CN106990893B (en) Touch screen operation processing method and device
CN111736718A (en) Touch screen control method and device
CN114185444A (en) Method and device for preventing mistaken touch of touch screen and storage medium
CN111314552B (en) User interface control method and device and storage medium
KR102158293B1 (en) Method for capturing image and electronic device thereof
CN115379044B (en) Holding state detection method, holding state detection device, electronic equipment and storage medium
CN109670432B (en) Action recognition method and device
CN113743166A (en) Optical fingerprint module and signal processing method
CN115963937A (en) Pocket mode recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant