CN111435283A - Operation intention determining method and device and electronic equipment - Google Patents

Operation intention determining method and device and electronic equipment

Info

Publication number
CN111435283A
CN111435283A CN201910026538.2A
Authority
CN
China
Prior art keywords
centroid
touch
distance
determining
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910026538.2A
Other languages
Chinese (zh)
Inventor
李准
龙文勇
翟剑锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FocalTech Systems Ltd
Original Assignee
FocalTech Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FocalTech Systems Ltd filed Critical FocalTech Systems Ltd
Priority to CN201910026538.2A priority Critical patent/CN111435283A/en
Priority to TW108140335A priority patent/TWI796530B/en
Publication of CN111435283A publication Critical patent/CN111435283A/en
Pending legal-status Critical Current

Classifications

    • G06F3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, using force sensing means to determine a position

Abstract

The method comprises: receiving a touch operation on a touch area, and collecting touch data of selected pixel blocks in the touch area with the image frame as the time unit, wherein the touch data comprises the pressure value each pixel block bears under the touch operation; determining the weight of each pixel block according to the pressure value it bears; calculating the centroid of each frame of touch data by using the weights of the pixel blocks and their positions in the touch area; and determining the operation intention of the touch operation based on the centroid of each frame of touch data.

Description

Operation intention determining method and device and electronic equipment
Technical Field
The invention relates to the technical field of computers, in particular to an operation intention determining method and device and electronic equipment.
Background
With the development of science and technology, touch-type electronic devices have come into increasingly wide use. They not only make it convenient for users to operate electronic devices, but also bring better visual effects, improving user experience and increasing users' attachment to their electronic devices.
A user can interact with an electronic device through a touch area provided on it (i.e., human-computer interaction); therefore, how to determine the intention a user expresses through a touch operation on the touch area is crucial for human-computer interaction.
In view of the above, determining the operation intention of a touch operation on an electronic device is a problem to be solved urgently, and the present application provides an operation intention determining method, an operation intention determining device and an electronic device for this purpose.
Disclosure of Invention
In view of this, the present invention provides an operation intention determining method, an operation intention determining device and an electronic device, so as to determine an operation intention of a touch operation of the electronic device.
The technical scheme is as follows:
an operation intention determining method is applied to an electronic device provided with a touch area and comprises the following steps:
receiving touch operation on a touch area, and collecting touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block subjected to the touch operation;
determining the weight of the pixel block according to the pressure value borne by the pixel block;
calculating the centroid of the touch data of each frame by using the weight of the pixel block and the position of the pixel block in the touch area;
and determining the operation intention of the touch operation based on the centroid of the touch data of each frame.
Preferably, the determining the weight of the pixel block according to the pressure value borne by the pixel block includes:
querying a preset correspondence between weights and pressure ranges, and determining the weight corresponding to the pressure range to which the pressure value borne by the pixel block belongs as the weight of the pixel block.
Preferably, the pixel block comprises a plurality of pixel sub-blocks, and collecting the pressure value borne by the pixel block comprises collecting the pressure value borne by each pixel sub-block in the pixel block;
the determining the weight of the pixel block according to the pressure value borne by the pixel block comprises the following steps:
determining the weight of each pixel sub-block according to the pressure value borne by that pixel sub-block in the pixel block.
Preferably, the calculating the centroid of the touch data of each frame by using the weight of the pixel block and the position of the pixel block in the touch area includes:
acquiring the position of each pixel sub-block of each selected pixel block in the touch area, wherein the position of each pixel sub-block in the touch area comprises the abscissa and the ordinate of the pixel sub-block in the touch area, and respectively calculating the centroid of each frame of touch data by adopting a centroid formula;
the centroid formula is:
M_x = Σ_i (weight_i · x_i) / Σ_i weight_i
M_y = Σ_i (weight_i · y_i) / Σ_i weight_i

wherein weight_i is the weight of the i-th pixel sub-block in the current frame of touch data, x_i is the abscissa and y_i the ordinate of the i-th pixel sub-block in the current frame of touch data, M_x is the centroid abscissa of the current frame of touch data, and M_y is the centroid ordinate of the current frame of touch data.
Preferably, the method further comprises the following steps:
acquiring the number of image frames acquired by receiving the touch operation;
if the number of the image frames is smaller than the first preset threshold value, storing the acquired touch data of each frame;
if the number of the image frames is not less than a first preset threshold value, storing and calculating the collected touch data of each frame, and judging whether the current touch operation exceeds a touch boundary or not based on the centroid of the current touch data;
and if the touch operation exceeds the touch boundary, stopping acquiring touch data and outputting the centroid of each frame of touch data.
Preferably, before the determining the operation intention of the touch operation based on the centroid of the touch data of each frame, the method further includes:
judging whether the number of the image frames is smaller than a second preset threshold value or not;
and if the number of the image frames is smaller than a second preset threshold, determining that the operation intention of the touch operation indicates a click operation, wherein the second preset threshold is smaller than the first preset threshold.
Preferably, the determining the operation intention of the touch operation based on the centroid of the touch data of each frame includes:
sequencing the centroids of the frames of touch data acquired by the touch operation according to the acquisition sequence of the touch data to obtain a centroid sequence;
removing a first preset number of centroids which are ranked most forward in the centroid sequence, and removing a second preset number of centroids which are ranked most backward in the centroid sequence to obtain a first centroid sequence;
determining an operational intent of the touch operation based on the first centroid sequence.
Preferably, the method further comprises the following steps:
determining the first centroid in the first centroid sequence as a starting point and the last centroid in the first centroid sequence as an end point; defining the number of image frames collected from the starting point to the end point as the total number of image frames; and, after a touch operation is received, defining the distance from the centroid of the image frame preceding the current image frame to the starting point as a first distance, initialized to zero;
calculating a second distance between the centroid of the current image frame and the starting point;
comparing whether the second distance is greater than the first distance;
if the second distance is larger than the first distance, updating the numerical value of the second distance to the numerical value of the first distance;
if the second distance is not larger than the current first distance, comparing the current image frame number with the total image frame number to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the second distance and the current first distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the second distance and the current first distance meet a second preset sub-condition, updating the current centroid as a starting point;
and updating the current centroid as an end point when the image frame number comparison result meets a third preset condition.
Preferably, the method further comprises the following steps:
determining the first centroid as a starting point; presetting the total number of image frames to be collected for the touch operation; and, after a touch operation is received, defining the distance from the centroid of the image frame preceding the current image frame to the starting point as a third distance, initialized to zero;
calculating a fourth distance between the centroid of the current image frame and the starting point;
comparing whether the fourth distance is greater than the third distance;
if the fourth distance is greater than the third distance, updating the value of the third distance to the value of the fourth distance;
if the fourth distance is not greater than the third distance, comparing the number of the current image frames with the total number of the image frames to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the third distance and the fourth distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the third distance and the fourth distance meet a second preset sub-condition, updating the current centroid as a starting point;
and updating the current centroid as an end point when the image frame number comparison result meets a third preset condition.
Preferably, the method further comprises the following steps:
selecting, from the centroids preceding the image frame in which the updated end point is located, a centroid for which the ratio of the centroid's sequence number to the total number of image frames satisfies a fourth preset condition and the distance between the centroid and the centroid of the previous frame satisfies a fifth preset condition, and updating that centroid to be the end point.
Preferably, the determining the operation intention of the touch operation based on the centroid of the touch data includes:
subtracting the abscissa of the starting point from the abscissa of the ending point to obtain an abscissa difference value;
subtracting the ordinate of the starting point from the ordinate of the ending point to obtain an ordinate difference value;
if the sum of the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value meets a sixth preset condition, determining that the operation intention of the touch operation indicates a single-click operation;
if the sum of the absolute value of the horizontal coordinate difference and the absolute value of the vertical coordinate difference does not meet a sixth preset condition, judging whether the absolute value of the horizontal coordinate difference and the absolute value of the vertical coordinate difference meet a seventh preset condition;
if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy the seventh preset condition, determining that the operation intention of the touch operation indicates a slide-up operation when the abscissa difference value is less than 0, and a slide-down operation when the abscissa difference value is not less than 0;
if the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value do not meet the seventh preset condition, judging whether the vertical coordinate difference value is larger than 0;
if the ordinate difference value is greater than 0, judging whether the absolute value of the ordinate difference value and the absolute value of the abscissa difference value satisfy an eighth preset condition; if they satisfy the eighth preset condition, determining that the operation intention of the touch operation indicates a left-slide operation, and if they do not, determining that the operation intention of the touch operation indicates a slide-up operation;
if the ordinate difference value is not greater than 0, judging whether the absolute value of the ordinate difference value and the absolute value of the abscissa difference value satisfy a ninth preset condition; if they satisfy the ninth preset condition, determining that the operation intention of the touch operation indicates a right-slide operation, and if they do not, determining that the operation intention of the touch operation indicates a slide-up operation.
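Taken together, the gesture decision tree described above can be sketched as follows. The sixth to ninth "preset conditions" are not given numerically in the text, so the thresholds `CLICK_SUM` and `DOMINANCE` below are hypothetical placeholders, and the axis convention (abscissa sign for up/down, ordinate for left/right) follows the text as written:

```python
# Hypothetical thresholds standing in for the unspecified preset conditions.
CLICK_SUM = 5        # sixth condition: small total displacement -> click
DOMINANCE = 2.0      # conditions 7-9: one axis dominates the other

def classify_gesture(start, end):
    """Classify a touch operation from its start/end centroids,
    following the decision tree in the claims."""
    dx = end[0] - start[0]   # abscissa difference (end minus start)
    dy = end[1] - start[1]   # ordinate difference (end minus start)
    if abs(dx) + abs(dy) <= CLICK_SUM:          # sixth preset condition
        return "click"
    if abs(dx) >= DOMINANCE * abs(dy):          # seventh preset condition
        return "slide-up" if dx < 0 else "slide-down"
    if dy > 0:
        if abs(dy) >= DOMINANCE * abs(dx):      # eighth preset condition
            return "slide-left"
        return "slide-up"
    if abs(dy) >= DOMINANCE * abs(dx):          # ninth preset condition
        return "slide-right"
    return "slide-up"
```

With these placeholder values, a displacement of (1, 1) classifies as a click, while (-20, 2) classifies as a slide-up.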
An operation intention determining apparatus applied to an electronic device provided with a touch area, comprising:
the touch operation receiving unit is used for receiving touch operation on a touch area, and acquiring touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block subjected to the touch operation;
the weight determining unit is used for determining the weight of the pixel block according to the pressure value born by the pixel block;
the centroid calculating unit is used for calculating the centroid of the touch data of each frame by using the weight of the pixel block and the position of the pixel block in the touch area;
a first operation intention determining unit, configured to determine an operation intention of the touch operation based on a centroid of the touch data of each frame.
An electronic device includes the operation intention determining apparatus.
According to the above scheme, a touch operation on the touch area is received, and touch data of selected pixel blocks in the touch area are collected with the image frame as the time unit, the touch data comprising the pressure value each pixel block bears under the touch operation; the weight of each pixel block is determined according to the pressure value it bears; the centroid of each frame of touch data is then calculated by using the weights of the pixel blocks and their positions in the touch area; and the purpose of determining the operation intention of the touch operation based on the centroid of each frame of touch data is thereby achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the following drawings show only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of an operational intent determination method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a pixel block in a touch area according to an embodiment of the present application;
fig. 3(a) is a schematic diagram of a pixel block structure according to an embodiment of the present application;
FIG. 3(b) is a schematic diagram of another pixel block structure provided in an embodiment of the present application;
FIG. 4 is a flow chart of another operational intent determination method provided by an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining an operational intent of a touch operation based on a centroid of touch data for each frame, according to an embodiment of the present application;
fig. 6 is a flowchart of a centroid moving trajectory optimization method according to an embodiment of the present application;
FIG. 7 is a flow chart of another centroid movement trajectory optimization method provided by an embodiment of the present application;
FIG. 8 is a flow chart of a method for determining an operational intent of a touch operation based on a centroid of touch data for each frame according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an operation intention determining apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The application of the touch type electronic device is more and more extensive, and a user can operate the touch type electronic device to realize interaction (namely man-machine interaction) with the electronic device. In the process of implementing human-computer interaction, a user generally performs a touch operation on a touch area of the electronic device to implement human-computer interaction with the electronic device.
In determining the operation intention of a touch operation performed by a user on the touch area of an electronic device, the determination may be implemented in the manner shown in fig. 1 below. The touch area may be a HOME key, a touch display screen, or any other portion of the electronic device used for touch operation, such as a side or the back.
Fig. 1 is a flowchart of an operation intention determining method according to an embodiment of the present application.
As shown in fig. 1, the method includes:
s101, receiving touch operation on a touch area, and collecting touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block bearing the touch operation;
Referring to fig. 2, in the embodiment of the present application, the touch area of the electronic device is divided into a number of sub-blocks 21, and selected sub-blocks among them are referred to as pixel blocks 22. When a touch operation on the touch area is received, touch data of the pixel blocks 22 in the touch area are collected with the image frame as the time unit, where the touch data of a pixel block 22 includes the pressure value the pixel block 22 bears under the touch operation.
In the embodiment of the present application, a touch operation on a touch area is received, and a plurality of image frames are collected, where each image frame corresponds to a pressure value of each pixel block 22 in the touch area subjected to the touch operation at the time of collecting the image frame. That is, each image frame corresponds to a frame of touch data that is subjected to the pressure values of the touch operation by the respective pixel blocks 22 in the touch area at the time the image frame was acquired.
S102, determining the weight of the pixel block according to the pressure value born by the pixel block;
In the embodiment of the present application, for a pixel block 22 in a frame of touch data, the weight of the pixel block 22 may be calculated according to the pressure value borne by the pixel block 22.
In this embodiment of the present application, a corresponding relationship between the weight and the pressure range may be preset, so as to determine the pressure range to which the pressure value borne by the pixel block belongs, and use the weight corresponding to the determined pressure range as the weight of the pixel block.
In this embodiment, the pixel block may include a plurality of pixel sub-blocks, and the pressure value borne by the pixel block includes the pressure value borne by each pixel sub-block in the pixel block. Correspondingly, determining the weight of the pixel block according to the pressure value borne by the pixel block comprises: determining the weight of each pixel sub-block in the pixel block according to the pressure value borne by that pixel sub-block.
The weight of a pixel sub-block may be determined from the pressure value it bears as follows: a correspondence between weights and pressure ranges is preset; the pressure range to which the pressure value borne by the pixel sub-block belongs is then determined, and the weight corresponding to that pressure range is taken as the weight of the pixel sub-block.
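The pressure-range lookup just described might be sketched as follows. The sixteen range boundaries are hypothetical — the text does not specify them — and are chosen to split a maximum pressure value of 1000 (as in the later example) into sixteen levels whose indices serve as the weights:

```python
import bisect

# Hypothetical upper bounds of the sixteen preset pressure ranges;
# the index of the range a pressure value falls into is its weight.
RANGE_UPPER_BOUNDS = [62, 125, 187, 250, 312, 375, 437, 500,
                      562, 625, 687, 750, 812, 875, 937, 1000]

def weight_for_pressure(pressure):
    """Return the weight of the pressure range the value belongs to."""
    # bisect_left finds the first range whose upper bound >= pressure
    return bisect.bisect_left(RANGE_UPPER_BOUNDS, pressure)
```

For example, a sub-block bearing a pressure value of 254 falls into the fifth range and is assigned weight 4 under this placeholder table.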
For example, a touch area with a size of 96 × 96 pixels may be divided into 24 × 12 sub-blocks; then, 4 × 4 = 16 sub-blocks are selected from the 24 × 12 sub-blocks, and each selected sub-block is defined as a pixel block, so that there are 16 pixel blocks. The way of selecting 4 × 4 sub-blocks from the 24 × 12 sub-blocks can be seen in fig. 2, where the sub-block 22 can be understood as a pixel block selected from the image.
After a pixel block is selected, the pixel block may be divided into pixel sub-blocks. Referring to fig. 3(a), a pixel block comprising 4 × 8 pixels may be divided into two 4 × 4 pixel sub-blocks; referring to fig. 3(b), the same 4 × 8 pixel block may also be divided into eight 2 × 2 pixel sub-blocks.
In the embodiment of the application, the touch pressure may be divided into levels, and the weight of a touched pixel block is generated according to its pressing force level. For example, if the maximum touch pressure value is 1000, force levels 0 to 15 may be set, corresponding to weights 0 to 15.
The following description takes the case where each pixel block, i.e., 4 × 8 pixels, is divided into two 4 × 4 pixel sub-blocks as an example. Areas with pressure values greater than 0 are areas touched by the finger; if a pixel is pressed, the corresponding bit of the pixel sub-block is set to 1, and otherwise to 0. The data are stored using bits: if the data output is 254, i.e., binary 11111110, this indicates that the pressing level of one pixel sub-block is 15, i.e., its weight is 15, and the pressing level of the other pixel sub-block is 14, i.e., its weight is 14.
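The bit-packed storage in the example above can be decoded as follows — a minimal sketch assuming one byte per pixel block, with each 4-bit nibble holding one sub-block's pressing level (the function name is illustrative, not from the text):

```python
def decode_subblock_weights(byte_value):
    """Split one stored byte into the two 4-bit weights of a pixel
    block's two 4x4 pixel sub-blocks (high nibble first)."""
    high = (byte_value >> 4) & 0x0F   # first sub-block's pressing level
    low = byte_value & 0x0F           # second sub-block's pressing level
    return high, low
```

The example from the text then reads back directly: a stored value of 254 (binary 11111110) decodes to weights 15 and 14.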
S103, calculating the centroid of each frame of touch data by using the weight of the pixel block and the position of the pixel block in the touch area;
In one embodiment of the present application, for a collected image frame (i.e., a frame of touch data), the centroid of the image frame may be calculated from the pressure values borne by the respective pixel blocks in the touch area at the time the image frame was collected and the positions of those pixel blocks in the touch area.
When the centroid of the touch data is calculated by using the weight of the pixel block in the touch data and the position of the pixel block in the touch area, a centroid formula can be adopted, wherein the centroid formula is as follows:
M_x = Σ_i (weight_i · x_i) / Σ_i weight_i
M_y = Σ_i (weight_i · y_i) / Σ_i weight_i

wherein weight_i is the weight of the i-th pixel sub-block in the current frame of touch data, x_i is the abscissa and y_i the ordinate of the i-th pixel sub-block in the current frame of touch data, M_x is the centroid abscissa of the current frame of touch data, and M_y is the centroid ordinate of the current frame of touch data. In the embodiment of the present application, the centroid of the current frame of touch data is (M_x, M_y), i.e., (M_x, M_y) are the coordinates of the centroid of the current frame of touch data.
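The centroid formula can be implemented directly; in the sketch below, the `(weight, x, y)` tuple layout for each sub-block is an assumption for illustration:

```python
def frame_centroid(subblocks):
    """Compute the centroid (Mx, My) of one frame of touch data.
    `subblocks` is a list of (weight, x, y) tuples, one per pixel
    sub-block, matching the centroid formula in the text."""
    total = sum(w for w, _, _ in subblocks)
    if total == 0:
        return None  # no pressure detected in this frame
    mx = sum(w * x for w, x, _ in subblocks) / total
    my = sum(w * y for w, _, y in subblocks) / total
    return mx, my
```

For two equally weighted sub-blocks at (0, 0) and (2, 2), the centroid is the midpoint (1.0, 1.0), as expected of a weighted average.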
And S104, determining the operation intention of the touch operation based on the centroid of the touch data of each frame.
In the embodiment of the application, based on the centroid of the collected touch data of each frame, the operation intention of the touch operation can be determined.
Fig. 4 is a flowchart of another operation intention determining method provided in an embodiment of the present application.
As shown in fig. 4, the method includes:
s401, receiving touch operation on a touch area, and collecting touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block bearing the touch operation;
s402, acquiring the number of image frames acquired by receiving touch operation;
s403, judging whether the number of image frames is smaller than a first preset threshold value; if the number of the image frames is less than the first preset threshold, executing step S404; if the number of the image frames is not less than the first preset threshold, executing step S405;
in this embodiment of the application, the first preset threshold may be 20 frames, and if the number of frames of the image acquired by receiving the touch operation is less than 20 frames, step S404 may be executed; if the number of frames of the image acquired by the touch operation is not less than 20 frames, step S405 may be executed.
S404, saving the collected touch data of each frame, and returning to execute the step S401;
s405, storing and calculating the collected touch data of each frame, and judging whether the current touch operation exceeds a touch boundary based on the centroid of the touch data of the current frame; if the touch operation exceeds the touch boundary, executing step S406;
in the embodiment of the present application, please refer to the method for calculating the centroid of the touch data provided in the above embodiment for the method for determining the centroid of the touch data of the current frame, which is not described herein again.
In the embodiment of the application, if the abscissa of the centroid of the current frame touch data is within a first threshold range and the ordinate of the centroid of the current frame touch data is within a second threshold range, determining that the current touch operation does not exceed the touch boundary based on the centroid of the current frame touch data; on the contrary, if the abscissa of the centroid of the current frame touch data is not within the first threshold range or the ordinate of the centroid of the current frame touch data is not within the second threshold range, determining that the current touch operation exceeds the touch boundary based on the centroid of the current frame touch data.
In the embodiment of the present application, the first threshold range and the second threshold range may be the same or different. For the pixel-block layout of fig. 2, the boundary minimum value is generally 10.5 and the maximum value 84.5, so the first threshold range may be set to greater than 12 and smaller than 82, and the second threshold range to greater than 12 and smaller than 84. This is merely a preferred choice of the first and second threshold ranges in the embodiments of the present application; they may be set as needed and are not limited herein.
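The boundary judgment of step S405 might look as follows, using the preferred threshold ranges quoted above (abscissa within 12 to 82, ordinate within 12 to 84); the constant names are illustrative:

```python
# Preferred threshold ranges from the text (exclusive bounds).
X_MIN, X_MAX = 12, 82   # first threshold range (abscissa)
Y_MIN, Y_MAX = 12, 84   # second threshold range (ordinate)

def exceeds_touch_boundary(centroid):
    """Return True if the current frame's centroid lies outside
    the first/second threshold ranges, i.e. the touch operation
    has exceeded the touch boundary."""
    x, y = centroid
    return not (X_MIN < x < X_MAX and Y_MIN < y < Y_MAX)
```

A centroid at (10.5, 40) — the layout's boundary minimum — would thus be judged as exceeding the boundary, while one at (40, 40) would not.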
S406, stopping collecting the touch data, and outputting the centroid of each frame of touch data based on the stored touch data of each frame;
In the embodiment of the application, if it is determined that the current touch operation exceeds the touch boundary, the acquisition of touch data may be stopped, and the centroid of each frame of touch data may be output based on each stored frame of touch data.
Further, in the embodiment of the present application, if the touch operation ends without the touch boundary ever being exceeded, the collection of touch data may likewise be stopped, and the centroid of each frame of touch data may be output based on the stored touch data of each frame.
S407, judging whether the number of image frames is smaller than a second preset threshold value, wherein the second preset threshold value is smaller than the first preset threshold value; if the number of the image frames is less than the second preset threshold, executing step S408; if the number of the image frames is not less than the second preset threshold, executing step S409;
S408, determining that the operation intention of the touch operation indicates a click operation;
Further, before determining the operation intention of the touch operation based on the centroid of each frame of touch data, the touch intention determining method provided in the embodiment of the present application may further determine whether the number of image frames is less than a second preset threshold. If the number of image frames is less than the second preset threshold, the operation intention of the touch operation is determined to indicate a click operation; otherwise, step S409 is executed.
In this embodiment of the application, the second preset threshold is smaller than the first preset threshold and may be 3 frames; if the number of image frames is determined to be smaller than 3, the operation intention of the touch operation may be directly determined as a click operation. The above is only a preferred value of the second preset threshold provided in the embodiment of the present application; the inventor can set the specific value of the second preset threshold according to his own needs, which is not limited herein.
And S409, determining the operation intention of the touch operation based on the centroid of the touch data of each frame.
In the embodiment of the application, the centroids of the frame touch data are sequentially output according to the sequence of the collected touch data, so that a centroid movement track in the touch operation process can be obtained, and the operation intention of the touch operation can be determined based on the centroid movement track.
In the embodiment of the application, before the operation intention of the touch operation is determined based on the centroid of each frame of touch data, the centroid of the collected touch data can be further filtered, so that the purpose of determining the operation intention of the touch operation is realized based on the filtered centroid, the operation amount is further simplified, and the operation intention determination precision is improved.
FIG. 5 is a flowchart of an operation intention method for determining a touch operation based on a centroid of touch data of each frame according to an embodiment of the present application.
As shown in fig. 5, the method includes:
S501, sequencing the centroids of the touch data of each frame acquired by the received touch operation according to the acquisition sequence of the touch data to obtain a centroid sequence;
In the embodiment of the application, for each stored frame of touch data, the centroids of each frame of touch data are sorted according to the acquisition sequence of each frame of touch data to obtain a centroid sequence, and the centroid sequence can reflect the centroid movement trajectory during the touch operation.
S502, removing a first preset number of centroids at the beginning of the centroid sequence and removing a second preset number of centroids at the end of the centroid sequence to obtain a first centroid sequence;
Analysis of the centroid movement trajectory shows that when the touch operation is initially executed, the centroid of the collected touch data may jitter, and when the touch operation ends, the centroid of the collected touch data may trail. Therefore, the embodiment of the application reduces the influence of jitter and trailing on the determination of the operation intention by removing the first M frames and the last N frames of touch data collected in the touch operation process.
In the embodiment of the present application, M frames may be regarded as a first preset number, and N frames may be regarded as a second preset number; for the case of fig. 3(a), M may be set to 1 and N to 2; for the case of fig. 3(b), M may be set to 1 and N may be set to any integer between 5 and 8. The above is only a preferred manner of the first preset number and the second preset number provided in the embodiment of the present application, and the inventor may set the first preset number and the second preset number according to his own requirement, which is not limited herein.
And S503, determining the operation intention of the touch operation based on the first centroid sequence.
In the embodiment of the application, the first M centroids in the centroid sequence are removed, and the last N centroids in the centroid sequence are removed, so that the first centroid sequence can be obtained, and the operation intention of the touch operation can be determined based on the first centroid sequence.
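As a minimal illustration (not part of the patent text), the trimming of the centroid sequence might look like the following Python sketch, where m and n stand for the first and second preset numbers:

```python
def trim_centroid_sequence(centroids, m=1, n=2):
    """Remove the first m centroids (jitter at touch start) and the last
    n centroids (trailing at touch end) from the ordered centroid
    sequence; the defaults 1 and 2 are the example values for fig. 3(a)."""
    if len(centroids) <= m + n:
        # Too few frames to trim; returning the sequence unchanged is an
        # assumption, as the text does not cover this case.
        return list(centroids)
    return centroids[m:len(centroids) - n]
```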
Considering that the centroid movement trajectory may double back partway through a finger slide, the embodiment of the application can further judge the inflection point in the first centroid sequence after obtaining the first centroid sequence so as to optimize the centroid movement trajectory.
Fig. 6 is a flowchart of a centroid moving trajectory optimization method according to an embodiment of the present application.
As shown in fig. 6, the method includes:
S601, determining the first centroid in the first centroid sequence as a starting point, determining the last centroid in the first centroid sequence as an end point, defining the number of image frames acquired from the starting point to the end point as the total image frame number, defining the distance from the centroid of the image frame before the current image frame to the starting point as a first distance after receiving a touch operation, and initializing the first distance to zero;
S602, calculating a second distance between the centroid of the current image frame and the starting point;
S603, comparing whether the second distance is greater than the first distance; if the second distance is greater than the first distance, executing step S604; if the second distance is not greater than the current first distance, executing step S605;
S604, updating the value of the first distance to the value of the second distance;
S605, comparing the current image frame number with the total image frame number to obtain a comparison result;
S606, updating the centroid of the current image frame to be the starting point when the image frame number comparison result meets a first preset condition and the second distance and the current first distance meet a first preset sub-condition;
In the embodiment of the present application, the first preset condition may be that the ratio of the current image frame number to the total image frame number is less than 70%. The image frame number comparison result represents the proportion of the current image frame number to the total image frame number, and when the proportion is less than 70%, the image frame number comparison result meets the first preset condition.
The second distance and the current first distance satisfy a first preset sub-condition, which may be that the second distance is less than 80% of the current first distance. And if the second distance is less than 80% of the current first distance, the second distance and the current first distance are considered to meet a first preset sub-condition, and then the starting point of the first centroid sequence is updated to be the centroid of the current image frame.
In this embodiment of the present application, when the comparison result of the image frame numbers meets the first preset condition and the second distance and the current first distance do not meet the first preset sub-condition, the next image frame adjacent to the current image frame may be updated to the current image frame, and the step S602 is executed again;
in this embodiment of the application, when the comparison result of the image frame numbers meets the first preset condition, and the second distance and the current first distance meet the first preset sub-condition, after the centroid of the current image frame is updated to the starting point, the next image frame adjacent to the image frame may also be updated to the current image frame, and the step S602 is executed again.
S607, when the comparison result of the image frame numbers meets a second preset condition, and the second distance and the current first distance meet a second preset sub-condition, updating the current centroid as a starting point;
In the embodiment of the present application, the second preset condition may be that the ratio of the current image frame number to the total image frame number is greater than or equal to 70% and less than 90%. The image frame number comparison result represents the proportion of the current image frame number to the total image frame number, and when the proportion is greater than or equal to 70% and less than 90%, the image frame number comparison result meets the second preset condition.
The second distance and the current first distance satisfy a second preset sub-condition, which may be that the second distance is less than 80% of the current first distance. And if the second distance is less than 80% of the current first distance, the second distance and the current first distance are considered to meet a second preset sub-condition, and the starting point of the first centroid sequence is updated to be the centroid of the current image frame.
Further, after step S607 is completed, it may be determined whether the ratio of the current image frame number to the total image frame number is greater than 80%, if so, the optimization process of the centroid moving trajectory in the first centroid sequence is stopped, and an operation intention is determined based on the current first centroid sequence.
Further, if the comparison result of the image frame numbers satisfies a second preset condition and the second distance and the current first distance satisfy a second preset sub-condition, the next image frame adjacent to the current image frame may be used as the current image frame and the process returns to step S602.
And S608, when the comparison result of the image frame numbers meets a third preset condition, updating the current centroid to be an end point.
In the embodiment of the present application, the third preset condition may be that the ratio of the current image frame number to the total image frame number is greater than or equal to 90%. The image frame number comparison result represents the proportion of the current image frame number to the total image frame number, and when the proportion is greater than or equal to 90%, the image frame number comparison result meets a third preset condition.
And when the image frame number comparison result meets a third preset condition, updating the centroid of the current image frame to be the end point of the first centroid sequence.
In the embodiment of the present application, the inventor may set specific values of the first preset condition, the second preset condition, the third preset condition, the first preset sub-condition, and the second preset sub-condition according to his own requirements, which is not limited herein.
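A hedged Python sketch of the fig. 6 loop may help. The 70%/90% ratios and the 80% sub-condition are the example values above, and the single-pass loop structure is an assumption about how steps S602 to S608 are iterated:

```python
import math

def optimize_trajectory(first_centroid_sequence):
    """Sketch of the start/end point updates of fig. 6. Returns the
    updated (start, end) of the centroid movement trajectory."""
    seq = first_centroid_sequence
    start, end = seq[0], seq[-1]
    total = len(seq)                 # total image frame number
    first_distance = 0.0             # farthest distance from start so far
    for frame_no, centroid in enumerate(seq, start=1):
        second_distance = math.dist(centroid, start)
        if second_distance > first_distance:
            first_distance = second_distance        # S604
        ratio = frame_no / total                    # S605 comparison result
        # first/second preset sub-condition: trajectory doubled back
        doubled_back = second_distance < 0.8 * first_distance
        if ratio < 0.70 and doubled_back:
            start = centroid                        # S606
        elif 0.70 <= ratio < 0.90 and doubled_back:
            start = centroid                        # S607
        elif ratio >= 0.90:
            end = centroid                          # S608
    return start, end
```

For a monotonically advancing trajectory, neither sub-condition fires and only the end point is refreshed in the last 10% of frames.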
In the embodiment of the present application, in addition to the optimization of the centroid movement trajectory by optimizing the first centroid sequence, the execution movement trajectory may also be optimized in the process of acquiring operation data, specifically please refer to fig. 7.
Fig. 7 is a flowchart of another centroid moving trajectory optimization method according to an embodiment of the present application.
As shown in fig. 7, the method includes:
S701, receiving a touch operation on a touch area, collecting touch data of a selected pixel block in the touch area by taking an image frame as a time unit, determining the centroid of the first collected image frame as a starting point, presetting the total number of image frames required to be collected by the touch operation, defining the distance from the centroid of the image frame before the current image frame to the starting point as a third distance after receiving the touch operation, and initializing the third distance to zero;
S702, calculating a fourth distance between the centroid of the current image frame and the starting point;
S703, comparing whether the fourth distance is greater than the third distance; if the fourth distance is greater than the third distance, executing step S704; if the fourth distance is not greater than the third distance, executing step S705;
S704, updating the value of the third distance to the value of the fourth distance;
S705, comparing the current image frame number with the total image frame number to obtain a comparison result;
S706, updating the current centroid to be the starting point when the image frame number comparison result meets the first preset condition and the third distance and the fourth distance meet the first preset sub-condition;
S707, updating the current centroid to be the starting point when the image frame number comparison result meets the second preset condition and the third distance and the fourth distance meet the second preset sub-condition;
S708, updating the centroid of the current image frame to be the end point when the image frame number comparison result meets the third preset condition.
Further, in the operation intention determining method provided in the embodiment of the present application, on the basis of the centroid movement trajectory optimization process, the end point of the optimized centroid movement trajectory may be traced back in order to determine it more accurately. Specifically: a centroid ahead of the image frame where the updated end point is located is selected, such that the ratio of the serial number of the centroid to the total image frame number meets a fourth preset condition and the distance between the centroid and the centroid of the previous frame meets a fifth preset condition, and this centroid is updated to be the end point.
In the embodiment of the application, the updated end point is the centroid of the acquired image frame, a target image frame is selected from the image frames acquired before the image frame, the ratio of the serial number of the centroid of the target image frame to the total image frame number meets a fourth preset condition, the distance between the centroid of the target image frame and the centroid of the image frame acquired before the target image frame meets a fifth preset condition, and the centroid of the target image frame is updated to the end point.
The sequence number of the centroid of each acquired image frame may be set sequentially according to the order in which the image frames are acquired; for example, the sequence number of the centroid of the first acquired image frame is set to 1, that of the second to 2, that of the third to 3, and so on. Accordingly, the ratio of the sequence number of the centroid of the target image frame to the total number of image frames may be understood as the result of dividing that sequence number by the total number of image frames.
In the embodiment of the present application, the fourth preset condition may be 7/8, and if the ratio of the serial number of the centroid of the target image frame to the total number of image frames is greater than 7/8, the ratio of the serial number of the centroid of the target image frame to the total number of image frames may be considered to meet the fourth preset condition.
In this embodiment of the application, the fifth preset condition may be 10: if the distance between the centroid of the target image frame and the centroid of the image frame acquired one frame earlier is greater than 10, that distance may be considered to satisfy the fifth preset condition.
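As an illustration only, the backtracking step could be sketched as follows in Python; the 7/8 ratio and the distance threshold 10 are the example fourth and fifth preset conditions, and the backwards scanning order is an assumption:

```python
import math

def backtrack_end_point(centroids, end_index, total_frames,
                        ratio_threshold=7 / 8, distance_threshold=10):
    """Scan backwards from the updated end point and return the index of
    a target centroid whose 1-based sequence number divided by the total
    frame count exceeds ratio_threshold (fourth preset condition) and
    whose distance to the previous frame's centroid exceeds
    distance_threshold (fifth preset condition); otherwise keep end_index."""
    for i in range(end_index - 1, 0, -1):
        if (i + 1) / total_frames <= ratio_threshold:
            break                     # fourth preset condition no longer met
        if math.dist(centroids[i], centroids[i - 1]) > distance_threshold:
            return i                  # fifth preset condition met
    return end_index
```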
To facilitate a detailed description of an operation intention determining method provided by an embodiment of the present application, a flowchart of a method for determining an operation intention of a touch operation based on a centroid of touch data of each frame is provided, and refer to fig. 8 specifically.
As shown in fig. 8, the method includes:
S801, subtracting the abscissa of the starting point from the abscissa of the ending point to obtain an abscissa difference value;
S802, subtracting the ordinate of the starting point from the ordinate of the ending point to obtain an ordinate difference value;
S803, determining whether the sum of the absolute value of the abscissa difference and the absolute value of the ordinate difference satisfies a sixth preset condition; if so, executing step S804; if not, executing step S805;
In the embodiment of the present application, when the sum of the absolute value of the abscissa difference and the absolute value of the ordinate difference is less than 15, the sum may be considered to satisfy the sixth preset condition; otherwise, the sum is considered not to satisfy the sixth preset condition.
S804, determining that the operation intention of the touch operation indicates a click operation;
S805, judging whether the absolute value of the abscissa difference and the absolute value of the ordinate difference satisfy a seventh preset condition; if so, executing step S806; if not, executing step S809;
In the embodiment of the present application, when the absolute value of the abscissa difference is greater than the absolute value of the ordinate difference, the absolute value of the abscissa difference and the absolute value of the ordinate difference may be considered to satisfy the seventh preset condition; otherwise, they are considered not to satisfy the seventh preset condition.
S806, judging whether the abscissa difference is smaller than 0; if so, executing step S807; if not, executing step S808;
S807, determining that the operation intention of the touch operation indicates a slide-up operation;
S808, determining that the operation intention of the touch operation indicates a slide-down operation;
S809, judging whether the ordinate difference is greater than 0; if so, executing step S810; if not, executing step S813;
S810, judging whether the absolute value of the ordinate difference and the absolute value of the abscissa difference satisfy an eighth preset condition; if so, executing step S811; if not, executing step S812;
In the embodiment of the present application, when the absolute value of the ordinate difference is greater than the absolute value of the abscissa difference, the absolute value of the ordinate difference and the absolute value of the abscissa difference may be considered to satisfy the eighth preset condition; otherwise, they are considered not to satisfy the eighth preset condition.
S811, determining that the operation intention of the touch operation indicates a slide-left operation;
S812, determining that the operation intention of the touch operation indicates a slide-up operation;
S813, judging whether the absolute value of the ordinate difference and the absolute value of the abscissa difference satisfy a ninth preset condition; if so, executing step S814; if not, executing step S815.
In the embodiment of the present application, when the absolute value of the ordinate difference is greater than the absolute value of the abscissa difference, it may be considered that the absolute value of the ordinate difference and the absolute value of the abscissa difference satisfy a ninth preset condition; and otherwise, the absolute value of the difference value of the vertical coordinates and the absolute value of the difference value of the horizontal coordinates are not considered to meet the ninth preset condition.
S814, determining that the operation intention of the touch operation indicates a slide-right operation;
S815, determining that the operation intention of the touch operation indicates a slide-up operation.
The above is only a preferred mode of the sixth preset condition, the seventh preset condition, the eighth preset condition and the ninth preset condition provided in the embodiments of the present application, and the inventor can set the sixth preset condition, the seventh preset condition, the eighth preset condition and the ninth preset condition according to his own needs, which is not limited herein.
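The fig. 8 decision flow can be summarized in a short Python sketch (illustrative only; the click threshold 15 follows the example sixth preset condition, and the axis convention, in which the abscissa sign selects slide-up versus slide-down, follows the text):

```python
def classify_gesture(start, end):
    """Map the start/end centroids of the trajectory to an operation
    intention per steps S801 to S815."""
    dx = end[0] - start[0]          # abscissa difference (S801)
    dy = end[1] - start[1]          # ordinate difference (S802)
    if abs(dx) + abs(dy) < 15:      # sixth preset condition (S803)
        return "click"
    if abs(dx) > abs(dy):           # seventh preset condition (S805)
        return "slide-up" if dx < 0 else "slide-down"
    if dy > 0:                      # S809: eighth preset condition
        return "slide-left" if abs(dy) > abs(dx) else "slide-up"
    return "slide-right" if abs(dy) > abs(dx) else "slide-up"  # ninth
```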
Fig. 9 is a schematic structural diagram of an operation intention determining apparatus according to an embodiment of the present application.
As shown in fig. 9, the apparatus includes:
a touch operation receiving unit 91, configured to receive a touch operation on a touch area, and acquire touch data of a selected pixel block in the touch area by using an image frame as a time unit, where the touch data includes a pressure value at which the pixel block is subjected to the touch operation;
a weight determining unit 92, configured to determine a weight of the pixel block according to the pressure value borne by the pixel block;
a centroid calculating unit 93, configured to calculate a centroid of each frame of touch data by using the weight of the pixel block and the position of the pixel block in the touch area;
a first operation intention determining unit 94 for determining an operation intention of the touch operation based on the centroid of the touch data of each frame.
In this embodiment of the application, the weight determining unit is specifically configured to query a preset correspondence between a weight and a pressure range, and determine a weight corresponding to the pressure range to which a pressure value borne by a pixel block belongs, as the weight of the pixel block.
In this embodiment of the application, the pixel block includes a plurality of pixel sub-blocks, the pressure value borne by the collected pixel block includes a pressure value borne by each pixel sub-block in the collected pixel block, and the weight determining unit is specifically configured to determine the weight of the pixel sub-block according to the pressure value borne by each pixel sub-block in the pixel block.
In an embodiment of the present application, the centroid calculating unit includes:
the acquisition unit is used for acquiring the position of each pixel sub-block of each selected pixel block in the touch area, wherein the position of the pixel sub-block in the touch area comprises the abscissa and the ordinate of the pixel sub-block in the touch area;
the centroid calculating subunit is used for calculating the centroid of each frame of touch data by adopting a centroid formula, wherein the centroid formula is as follows:
M_x = Σ(weight_i · x_i) / Σ weight_i,  M_y = Σ(weight_i · y_i) / Σ weight_i
wherein weight_i is the weight of the i-th pixel sub-block in the current frame touch data, x_i is the abscissa of the i-th pixel sub-block in the current frame touch data, y_i is the ordinate of the i-th pixel sub-block in the current frame touch data, M_x is the centroid abscissa of the current frame touch data, and M_y is the centroid ordinate of the current frame touch data.
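A minimal Python sketch of the centroid formula above (the (weight, x, y) tuple layout for sub-block data is an assumption for illustration):

```python
def frame_centroid(sub_blocks):
    """Compute the weighted centroid (M_x, M_y) of one frame of touch
    data; sub_blocks is an iterable of (weight_i, x_i, y_i) tuples."""
    blocks = list(sub_blocks)
    total_weight = sum(w for w, _, _ in blocks)
    m_x = sum(w * x for w, x, _ in blocks) / total_weight
    m_y = sum(w * y for w, _, y in blocks) / total_weight
    return m_x, m_y
```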
Further, an operation intention determining apparatus provided in an embodiment of the present application further includes a touch boundary determining unit, configured to:
acquiring the number of image frames acquired by receiving touch operation;
if the number of the image frames is smaller than a first preset threshold value, storing the acquired touch data of each frame;
if the number of the image frames is not less than a first preset threshold value, storing and calculating the collected touch data of each frame, and judging whether the current touch operation exceeds a touch boundary or not based on the centroid of the current touch data;
and if the touch operation exceeds the touch boundary, stopping acquiring the touch data and outputting the centroid of each frame of touch data.
Further, an operation intention determining apparatus provided in an embodiment of the present application further includes a second operation intention determining unit, configured to:
judging whether the number of image frames is smaller than a second preset threshold value or not;
and if the number of the image frames is smaller than a second preset threshold, determining that the operation intention of the touch operation indicates the click operation, wherein the second preset threshold is smaller than the first preset threshold.
An embodiment of the present application provides a first operation intention determining unit in an operation intention determining device, including:
the centroid sequence acquisition unit is used for sequencing the centroids of the frames of touch data acquired by the received touch operation according to the acquisition sequence of the touch data to obtain a centroid sequence;
the centroid removing unit is used for removing a first preset number of centroids at the beginning of the centroid sequence and removing a second preset number of centroids at the end of the centroid sequence to obtain a first centroid sequence;
a first operation intention determining subunit for determining an operation intention of the touch operation based on the first centroid sequence.
Further, an operation intention determining apparatus provided by an embodiment of the present application further includes a first centroid sequence optimizing unit, configured to:
determining a first centroid in the first centroid sequence as a starting point, determining a last centroid in the first centroid sequence as an end point, defining the number of image frames collected from the starting point to the end point as a total image frame, after receiving a touch operation, defining the distance from the centroid of an image frame before a current image frame to the starting point as a first distance, and initializing the first distance to zero;
calculating a second distance between the centroid of the current image frame and the starting point;
comparing whether the second distance is greater than the first distance;
if the second distance is larger than the first distance, updating the value of the first distance to the value of the second distance;
if the second distance is not greater than the current first distance, comparing the current image frame number with the total image frame number to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the second distance and the current first distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the second distance and the current first distance meet a second preset sub-condition, updating the current centroid as a starting point;
and when the image frame number comparison result meets a third preset condition, updating the current centroid as an end point.
Further, an operation intention determining apparatus provided by an embodiment of the present application further includes a second centroid sequence optimizing unit, configured to:
determining the first centroid as a starting point, presetting the total image frame number required to be collected by touch operation, defining the distance from the centroid of an image frame before the current image frame to the starting point as a third distance after receiving the touch operation, and initializing the third distance to zero;
calculating a fourth distance between the centroid of the current image frame and the starting point;
comparing whether the fourth distance is greater than the third distance;
if the fourth distance is greater than the third distance, updating the value of the third distance to the value of the fourth distance;
if the fourth distance is not greater than the third distance, comparing the current image frame number with the total image frame number to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the third distance and the fourth distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the third distance and the fourth distance meet a second preset sub-condition, updating the current centroid as a starting point;
and when the image frame number comparison result meets a third preset condition, updating the current centroid as an end point.
Further, an operation intention determining apparatus provided by an embodiment of the present application further includes a third centroid sequence optimizing unit, configured to:
and selecting, in the image frame sequence, a centroid located before the updated end point, wherein the ratio of the serial number of the centroid to the total image frame number meets a fourth preset condition and the distance between the centroid and the centroid of the previous frame meets a fifth preset condition, and updating that centroid to be the end point.
In an embodiment of the present application, the first operation intention determining unit includes:
the first calculation unit is used for subtracting the abscissa of the starting point from the abscissa of the ending point to obtain an abscissa difference value;
the second calculation unit is used for subtracting the ordinate of the starting point from the ordinate of the ending point to obtain an ordinate difference value;
the first determining unit is used for determining that the operation intention of the touch operation indicates a click operation if the sum of the absolute value of the abscissa difference value and the absolute value of the ordinate difference value meets a sixth preset condition;
a first judging unit configured to judge whether the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy a seventh preset condition if the sum of the two absolute values does not satisfy the sixth preset condition;
a second determination unit configured to determine that the operation intention of the touch operation indicates a slide-up operation when the abscissa difference value is less than 0 if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy a seventh preset condition; when the horizontal coordinate difference value is not less than 0, determining that the operation intention of the touch operation indicates a slide-down operation;
a second judgment unit configured to judge whether the ordinate difference is greater than 0 if the absolute value of the abscissa difference and the absolute value of the ordinate difference do not satisfy a seventh preset condition;
a third determining unit configured to determine whether an absolute value of the vertical coordinate difference and an absolute value of the horizontal coordinate difference satisfy an eighth preset condition if the vertical coordinate difference is greater than 0; if the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value satisfy an eighth preset condition, determining that the operation intention of the touch operation indicates a left-sliding operation, and if the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value do not satisfy the eighth preset condition, determining that the operation intention of the touch operation indicates a sliding-up operation;
a fourth determining unit configured to determine whether the absolute value of the ordinate difference value and the absolute value of the abscissa difference value satisfy a ninth preset condition if the ordinate difference value is not greater than 0; if they satisfy the ninth preset condition, determining that the operation intention of the touch operation indicates a right-slide operation, and if they do not satisfy the ninth preset condition, determining that the operation intention of the touch operation indicates a slide-up operation.
Further, an embodiment of the present application also provides an electronic device, which includes the operation intention determining apparatus provided in the foregoing embodiment.
The present application provides an operation intention determining method and apparatus, and an electronic device, applied to an electronic device provided with a touch area. A touch operation on the touch area is received, and touch data of a selected pixel block in the touch area is collected with an image frame as the time unit, wherein the touch data includes the pressure value of the touch operation borne by the pixel block; the weight of the pixel block is determined according to the pressure value borne by the pixel block; the centroid of each frame of touch data is calculated using the weight of the pixel block and the position of the pixel block in the touch area; and the operation intention of the touch operation is determined based on the centroid of each frame of touch data.
The operation intention determining method and apparatus and the electronic device provided by the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. An operation intention determining method applied to an electronic device provided with a touch area, the method comprising:
receiving touch operation on a touch area, and collecting touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block subjected to the touch operation;
determining the weight of the pixel block according to the pressure value borne by the pixel block;
calculating the centroid of the touch data of each frame by using the weight of the pixel block and the position of the pixel block in the touch area;
and determining the operation intention of the touch operation based on the centroid of the touch data of each frame.
2. The method of claim 1, wherein determining the weight of the pixel block based on the pressure value experienced by the pixel block comprises:
and inquiring a preset correspondence between weights and pressure ranges, and determining the weight corresponding to the pressure range to which the pressure value borne by the pixel block belongs as the weight of the pixel block.
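As an illustration of the weight lookup described in claim 2, the following sketch maps a pressure reading to a weight through a preset pressure-range table. The specific ranges and weights are hypothetical, since the patent gives no concrete values.

```python
# Hypothetical pressure-range-to-weight table (the patent leaves the
# correspondence abstract): (range start, range end, weight).
PRESSURE_WEIGHT_TABLE = [
    (0, 50, 0),              # below the noise floor: contributes nothing
    (50, 200, 1),            # light touch
    (200, 500, 2),           # normal touch
    (500, float("inf"), 3),  # firm press
]

def weight_for_pressure(pressure: float) -> int:
    """Return the preset weight for the range the pressure value falls in."""
    for lo, hi, weight in PRESSURE_WEIGHT_TABLE:
        if lo <= pressure < hi:
            return weight
    return 0  # out-of-table readings contribute nothing

print(weight_for_pressure(120))  # -> 1
```

A coarse table like this lets the centroid calculation discount noise while emphasizing firmly pressed pixel blocks.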
3. The method of claim 2, wherein the block of pixels comprises a plurality of sub-blocks of pixels, wherein collecting the pressure values experienced by the block of pixels comprises collecting the pressure values experienced by each sub-block of pixels in the block of pixels,
the determining the weight of the pixel block according to the pressure value borne by the pixel block comprises:
and determining the weight of each pixel sub-block according to the pressure value borne by each pixel sub-block in the pixel block.
4. The method of claim 3, wherein calculating the centroid of the touch data for each frame using the weights of the pixel blocks and the locations of the pixel blocks in the touch area comprises:
acquiring the position of each pixel sub-block of each selected pixel block in the touch area, wherein the position of each pixel sub-block in the touch area comprises the abscissa and the ordinate of the pixel sub-block in the touch area, and respectively calculating the centroid of each frame of touch data by adopting a centroid formula;
the centroid formula is:

M_x = Σ_i(weight_i · x_i) / Σ_i weight_i,  M_y = Σ_i(weight_i · y_i) / Σ_i weight_i

wherein weight_i is the weight of the i-th pixel sub-block in the touch data of the current frame, x_i is the abscissa and y_i is the ordinate of the i-th pixel sub-block in the touch data of the current frame, M_x is the centroid abscissa of the current frame touch data, and M_y is the centroid ordinate of the current frame touch data.
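The centroid calculation of claim 4 can be sketched as follows; the sample sub-block data in the usage line is illustrative only.

```python
# Weighted centroid of one frame of touch data: each pixel sub-block i
# contributes its weight at position (x_i, y_i).
def frame_centroid(sub_blocks):
    """sub_blocks: iterable of (weight, x, y) for one frame of touch data.
    Returns (M_x, M_y), or None if no sub-block carries any weight."""
    total_w = sum(w for w, _, _ in sub_blocks)
    if total_w == 0:
        return None  # no effective touch in this frame
    mx = sum(w * x for w, x, _ in sub_blocks) / total_w
    my = sum(w * y for w, _, y in sub_blocks) / total_w
    return mx, my

print(frame_centroid([(1, 0, 0), (3, 4, 8)]))  # -> (3.0, 6.0)
```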
5. The method of claim 1, further comprising:
acquiring the number of image frames acquired by receiving the touch operation;
if the number of the image frames is smaller than a first preset threshold, storing the acquired touch data of each frame;
if the number of the image frames is not less than a first preset threshold value, storing and calculating the collected touch data of each frame, and judging whether the current touch operation exceeds a touch boundary or not based on the centroid of the current touch data;
and if the touch operation exceeds the touch boundary, stopping acquiring touch data and outputting the centroid of each frame of touch data.
6. The method of claim 5, wherein prior to the determining the operational intent of the touch operation based on the centroid of the frames of touch data, the method further comprises:
judging whether the number of the image frames is smaller than a second preset threshold value or not;
and if the number of the image frames is smaller than a second preset threshold, determining that the operation intention of the touch operation indicates a click operation, wherein the second preset threshold is smaller than the first preset threshold.
7. The method of claim 1, wherein determining the operational intent of the touch operation based on the centroid of the frames of touch data comprises:
sequencing the centroids of the frames of touch data acquired by the touch operation according to the acquisition sequence of the touch data to obtain a centroid sequence;
removing a first preset number of centroids which are ranked most forward in the centroid sequence, and removing a second preset number of centroids which are ranked most backward in the centroid sequence to obtain a first centroid sequence;
determining an operational intent of the touch operation based on the first centroid sequence.
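The sequence cleanup of claim 7 — dropping the first and last few centroids, which typically correspond to noisy finger-touchdown and lift-off frames — can be sketched as follows; the trim counts are hypothetical.

```python
# Order the per-frame centroids by acquisition time, then drop the first
# `head` and last `tail` entries to obtain the first centroid sequence.
def trim_centroid_sequence(centroids, head=2, tail=2):
    """Drop `head` leading and `tail` trailing centroids; return the rest."""
    if head + tail >= len(centroids):
        return []  # too few frames to trim meaningfully
    return centroids[head:len(centroids) - tail]

seq = [(i, i) for i in range(8)]
print(trim_centroid_sequence(seq))  # -> [(2, 2), (3, 3), (4, 4), (5, 5)]
```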
8. The method of claim 7, further comprising:
determining a first centroid in the first centroid sequence as a starting point, determining a last centroid in the first centroid sequence as an end point, defining the number of image frames collected from the starting point to the end point as the total image frame number, after receiving the touch operation, defining the distance from the centroid of the image frame before the current image frame to the starting point as a first distance, and initializing the first distance to zero;
calculating a second distance between the centroid of the current image frame and the starting point;
comparing whether the second distance is greater than the first distance;
if the second distance is greater than the first distance, updating the value of the first distance to the value of the second distance;
if the second distance is not greater than the current first distance, comparing the current image frame number with the total image frame number to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the second distance and the current first distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the second distance and the current first distance meet a second preset sub-condition, updating the current centroid as a starting point;
and updating the current centroid as an end point when the image frame number comparison result meets a third preset condition.
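Claim 8 leaves the preset conditions and sub-conditions abstract; the following sketch instantiates them with hypothetical phase boundaries (total // 3, 2 * total // 3) and backtrack ratios (0.5, 0.2) purely for illustration.

```python
import math

def refine_endpoints(centroids):
    """centroids: per-frame (x, y) centroids in acquisition order.
    Returns (start_index, end_index) into the sequence."""
    total = len(centroids)
    start, end = 0, total - 1
    first_dist = 0.0  # farthest distance from the start seen so far
    for frame in range(1, total):
        sx, sy = centroids[start]
        x, y = centroids[frame]
        second_dist = math.hypot(x - sx, y - sy)
        if second_dist > first_dist:
            first_dist = second_dist  # still moving away from the start
        elif frame < total // 3 and second_dist < 0.5 * first_dist:
            start, first_dist = frame, 0.0  # early backtrack: re-anchor start
        elif frame < 2 * total // 3 and second_dist < 0.2 * first_dist:
            start, first_dist = frame, 0.0  # mid-gesture return: re-anchor
        elif frame >= 2 * total // 3:
            end = frame  # late phase, movement stalled: treat as the end
    return start, end

print(refine_endpoints([(0, 0), (1, 0), (2, 0), (3, 0)]))  # -> (0, 3)
```

For a monotonic trajectory nothing is re-anchored; a touchdown wobble near the start causes the starting point to jump forward past the jitter.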
9. The method of claim 1, further comprising:
determining the first centroid as a starting point, presetting the total image frame number required to be collected by the touch operation, defining the distance from the centroid of the image frame before the current image frame to the starting point as a third distance after receiving the touch operation, and initializing the third distance to zero;
calculating a fourth distance between the centroid of the current image frame and the starting point;
comparing whether the fourth distance is greater than the third distance;
if the fourth distance is greater than the third distance, updating the value of the third distance to the value of the fourth distance;
if the fourth distance is not greater than the third distance, comparing the number of the current image frames with the total number of the image frames to obtain a comparison result;
updating the current centroid as a starting point when the comparison result of the image frame number meets a first preset condition and the third distance and the fourth distance meet a first preset sub-condition;
when the image frame number comparison result meets a second preset condition, and the third distance and the fourth distance meet a second preset sub-condition, updating the current centroid as a starting point;
and updating the current centroid as an end point when the image frame number comparison result meets a third preset condition.
10. The method of claim 8 or 9, further comprising:
and selecting, in the image frame sequence, a centroid located before the updated end point, wherein the ratio of the serial number of the centroid to the total image frame number meets a fourth preset condition and the distance between the centroid and the centroid of the previous frame meets a fifth preset condition, and updating that centroid to be the end point.
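The end-point correction of claim 10 can be sketched as a backward scan for a centroid that is late enough in the gesture and still moving; the ratio and step thresholds below are hypothetical stand-ins for the fourth and fifth preset conditions.

```python
import math

def correct_end_point(centroids, end, min_ratio=0.6, min_step=1.0):
    """Scan backwards from `end` for a centroid whose serial-number ratio
    and per-frame step both clear the preset thresholds."""
    total = len(centroids)
    for i in range(end, 0, -1):
        late_enough = i / total >= min_ratio        # fourth preset condition
        px, py = centroids[i - 1]
        x, y = centroids[i]
        still_moving = math.hypot(x - px, y - py) >= min_step  # fifth
        if late_enough and still_moving:
            return i  # promote this centroid to the new end point
    return end  # no better candidate: keep the original end point
```

This discards trailing frames where the finger has effectively stopped (lift-off hover), anchoring the end point at the last frame of real motion.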
11. The method of claim 10, wherein determining the operational intent of the touch operation based on the centroid of the touch data comprises:
subtracting the abscissa of the starting point from the abscissa of the ending point to obtain an abscissa difference value;
subtracting the ordinate of the starting point from the ordinate of the ending point to obtain an ordinate difference value;
if the sum of the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value meets a sixth preset condition, determining that the operation intention of the touch operation indicates a single-click operation;
if the sum of the absolute value of the horizontal coordinate difference and the absolute value of the vertical coordinate difference does not meet a sixth preset condition, judging whether the absolute value of the horizontal coordinate difference and the absolute value of the vertical coordinate difference meet a seventh preset condition;
if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy the seventh preset condition, determining that the operation intention of the touch operation indicates a slide-up operation when the abscissa difference value is less than 0; when the abscissa difference is not less than 0, determining that the operation intention of the touch operation indicates a downslide operation;
if the absolute value of the horizontal coordinate difference value and the absolute value of the vertical coordinate difference value do not meet the seventh preset condition, judging whether the vertical coordinate difference value is larger than 0;
if the vertical coordinate difference value is larger than 0, judging whether the absolute value of the vertical coordinate difference value and the absolute value of the horizontal coordinate difference value meet an eighth preset condition; if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy the eighth preset condition, determining that the operation intention of the touch operation indicates a left-sliding operation, and if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value do not satisfy the eighth preset condition, determining that the operation intention of the touch operation indicates a slide-up operation;
if the vertical coordinate difference value is not greater than 0, judging whether the absolute value of the vertical coordinate difference value and the absolute value of the horizontal coordinate difference value meet a ninth preset condition or not; and if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value satisfy the ninth preset condition, determining that the operation intention of the touch operation indicates a right slide operation, and if the absolute value of the abscissa difference value and the absolute value of the ordinate difference value do not satisfy the ninth preset condition, determining that the operation intention of the touch operation indicates a slide-up operation.
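The direction classification of claim 11 can be sketched as below. The sixth through ninth preset conditions are abstract in the claim; here the sixth is modeled as a small total displacement and the others as dominance ratios, all with hypothetical thresholds. Note that the claim maps the abscissa difference to up/down swipes, which suggests a sensor coordinate frame rotated relative to the screen; the sketch follows the claim's mapping literally.

```python
def classify(start, end, click_thresh=5.0, ratio=2.0):
    """Classify a gesture from its start/end centroids, per claim 11's
    decision order. Thresholds are illustrative stand-ins."""
    dx = end[0] - start[0]  # abscissa difference
    dy = end[1] - start[1]  # ordinate difference
    if abs(dx) + abs(dy) <= click_thresh:   # condition 6: tiny displacement
        return "click"
    if abs(dx) >= ratio * abs(dy):          # condition 7: abscissa dominates
        return "slide-up" if dx < 0 else "slide-down"
    if dy > 0:
        if abs(dy) >= ratio * abs(dx):      # condition 8: ordinate dominates
            return "slide-left"
        return "slide-up"
    if abs(dy) >= ratio * abs(dx):          # condition 9: ordinate dominates
        return "slide-right"
    return "slide-up"

print(classify((0, 0), (3, 30)))  # -> slide-left
```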
12. An operation intention determining apparatus applied to an electronic device provided with a touch area, comprising:
the touch operation receiving unit is used for receiving touch operation on a touch area, and acquiring touch data of a selected pixel block in the touch area by taking an image frame as a time unit, wherein the touch data comprises a pressure value of the pixel block subjected to the touch operation;
the weight determining unit is used for determining the weight of the pixel block according to the pressure value borne by the pixel block;
the centroid calculating unit is used for calculating the centroid of the touch data of each frame by using the weight of the pixel block and the position of the pixel block in the touch area;
a first operation intention determining unit, configured to determine an operation intention of the touch operation based on a centroid of the touch data of each frame.
13. An electronic device characterized by comprising the operation intention determining apparatus according to claim 12.
CN201910026538.2A 2019-01-11 2019-01-11 Operation intention determining method and device and electronic equipment Pending CN111435283A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910026538.2A CN111435283A (en) 2019-01-11 2019-01-11 Operation intention determining method and device and electronic equipment
TW108140335A TWI796530B (en) 2019-01-11 2019-11-06 Operation intention determining method, apparatus, and electronic device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910026538.2A CN111435283A (en) 2019-01-11 2019-01-11 Operation intention determining method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111435283A true CN111435283A (en) 2020-07-21

Family

ID=71580352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910026538.2A Pending CN111435283A (en) 2019-01-11 2019-01-11 Operation intention determining method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN111435283A (en)
TW (1) TWI796530B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103135832A (en) * 2011-11-30 2013-06-05 矽统科技股份有限公司 Touch coordinate calculation method for touch panel
CN103547982A (en) * 2011-05-24 2014-01-29 微软公司 Identifying contacts and contact attributes in touch sensor data using spatial and temporal features
CN104487922A (en) * 2012-07-26 2015-04-01 苹果公司 Gesture and touch input detection through force sensing
CN106095307A (en) * 2016-06-01 2016-11-09 努比亚技术有限公司 Rotate gesture identifying device and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101201979B1 (en) * 2010-10-21 2012-11-15 주식회사 애트랩 Input device and touch position detecting method thereof
CN105261337B (en) * 2014-06-27 2018-10-09 敦泰电子有限公司 Touch control display apparatus and its driving method and driving circuit
TWI621101B (en) * 2015-10-23 2018-04-11 Morpho, Inc. Image processing device, electronic device, image processing method and non-transitory computer readable recording medium
CN106896950B (en) * 2015-12-17 2020-03-03 敦泰电子有限公司 Pressure detection method of embedded touch display device and mobile device using same
CN106502460B (en) * 2016-10-31 2019-04-09 北京交通大学 A kind of recognition methods of capacitance touching control track noise type


Also Published As

Publication number Publication date
TW202026840A (en) 2020-07-16
TWI796530B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN107132986B (en) Method and device for intelligently adjusting touch response area through virtual keys
CN102402369B (en) Electronic equipment and its operation indicating mark moving method
TWI478041B (en) Method of identifying palm area of a touch panel and a updating method thereof
CN105468247B (en) Working mode switching method and mobile terminal
EP3167358B1 (en) Method of performing a touch action in a touch sensitive device
CN101916161B (en) Interface model selection method based on image of region pressed by finger and mobile terminal
JP5556270B2 (en) Candidate display device and candidate display method
KR101963782B1 (en) Method for identifying user operation mode on handheld device and handheld device
CN106293328A (en) Icon display method and device
JP2005530235A5 (en)
CN102117165A (en) Touch input processing method and mobile terminal
CN105409306A (en) Method and apparatus for predicting location of mobile terminal
CN104966011A (en) Method for non-collaborative judgment and operating authorization restriction for mobile terminal child user
CN102306053B (en) Virtual touch screen-based man-machine interaction method and device and electronic equipment
CN105867916A (en) Terminal control method and device
JP2012203563A (en) Operation input detection device using touch panel
CN107037951B (en) Automatic operation mode identification method and terminal
CN102662511A (en) Method and terminal for carrying out control operation through touch screen
CN108279848A (en) A kind of display methods and electronic equipment
CN106598422B (en) hybrid control method, control system and electronic equipment
CN103744609B (en) A kind of data extraction method and device
CN111435283A (en) Operation intention determining method and device and electronic equipment
CN114064168A (en) Interface optimization method and medical equipment
CN102214028B (en) Gesture recognition method and device for touch panel
CN102262492A (en) Touch control identifying method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination