CN114578959A - Gesture recognition method and system based on touch pad

Gesture recognition method and system based on touch pad

Info

Publication number
CN114578959A
CN114578959A
Authority
CN
China
Prior art keywords
touch
sampling
matching coefficient
parameter set
module
Prior art date
Legal status
Granted
Application number
CN202111667309.2A
Other languages
Chinese (zh)
Other versions
CN114578959B (en)
Inventor
卜俊吉 (Bu Junji)
Current Assignee
Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Original Assignee
Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Priority date
Filing date
Publication date
Application filed by Huizhou Huayang General Intelligence Vehicle System Development Co ltd filed Critical Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Priority to CN202111667309.2A priority Critical patent/CN114578959B/en
Publication of CN114578959A publication Critical patent/CN114578959A/en
Application granted granted Critical
Publication of CN114578959B publication Critical patent/CN114578959B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Abstract

The invention relates to the technical field of touch screen control, and provides a gesture recognition method and system based on a touch pad. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intention is judged; and by correcting the target matching coefficient, the system can train and adapt to successive results produced during the driver's operation, so that the machine intelligently follows changes in the user's habits.

Description

Gesture recognition method and system based on touch pad
Technical Field
The invention relates to the technical field of touch screen control, in particular to a gesture recognition method and system based on a touch pad.
Background
A touch screen (also called a touch panel) is a sensing liquid crystal display device that can receive input signals such as touches. When a graphic button on the screen is touched, the screen's tactile feedback system drives the various connected devices according to a pre-programmed routine; it can replace a mechanical button panel and, through the liquid crystal display picture, produce vivid video and audio effects. As one of the newest computer input devices, the touch screen is currently the simplest, most convenient, and most natural mode of human-computer interaction. It gives multimedia a brand-new look and is a highly attractive new multimedia interaction device. It is mainly applied to public information inquiry, executive offices, industrial control, military command, electronic games, song and dish ordering, multimedia teaching, real-estate pre-sale, and the like.
Touch screens have become an important channel for human-computer interaction.
With the trend toward large screens in automobiles, increasingly dazzling, refined, and complex-to-operate UI (user interface) screens have appeared on in-car entertainment displays. During driving, however, when selecting from media lists and function lists, the large gesture amplitude and long recognition time required by gesture recognition distract the driver's attention, creating a certain driving safety hazard.
Disclosure of Invention
The invention provides a gesture recognition method and system based on a touch pad, solving the technical problems of existing touch technology that gesture recognition requires large gesture amplitude and takes a long time, distracting the driver's attention and creating driving safety hazards.
In order to solve the technical problems, the invention provides a gesture recognition method based on a touch panel, which comprises the following steps:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set;
s2, judging whether a history parameter set exists or not, if so, determining a target matching coefficient according to the history parameter set and the sampling parameter set, and if not, determining the target matching coefficient according to the sampling parameter set;
s3, substituting the target matching coefficient into a preset calculation formula, determining a user operation intention according to the obtained calculation result, and executing corresponding system operation;
s4, when detecting that the user executes the undo operation, correcting the target matching coefficient and returning to the step S3.
In this basic scheme, a calculation formula is preset. When a touch operation input by the user is obtained, a sampling parameter set and a target matching coefficient are obtained by sampling and analyzing the touch path and substituted into the preset calculation formula, and the user's operation intention is determined and the corresponding system operation executed according to the calculation result. Because the user's routine touch operations are substituted into a training model, the user's operation intention can be predicted from the preset calculation formula after only a simple sliding touch, so the intention is recognized and executed quickly and effectively, the user's operation time and difficulty are reduced, and driving safety is improved. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intention is judged; and by correcting the target matching coefficient, the system can train and adapt to successive results produced during the driver's operation, so that the machine intelligently follows changes in the user's habits.
In further embodiments, the step S1 includes:
s11, acquiring the number of corresponding touch points on the touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing the touch path into a plurality of sampling intervals according to the sampling points;
and S13, calculating the acceleration of each sampling interval as the touch parameter of the sampling point according to the trigger time of each touch point, and acquiring a sampling parameter set.
In this scheme, several touch points on the touch path of the touch operation are selected as sampling points for detection, which improves the computational efficiency of the algorithm, and the acceleration of each sampling interval is calculated as the touch parameter of the corresponding sampling point, which further improves the algorithm's accuracy.
In further embodiments, the step S2 includes:
s21, judging whether a history parameter set exists, if yes, entering the next step, and if not, entering a step S24;
s22, acquiring a preset number of historical parameter sets and acquiring corresponding historical change characteristics;
s23, determining the change characteristics of the sampling parameter set, matching the change characteristics with the historical change characteristics, if the matching is successful, acquiring the matching coefficient corresponding to the historical parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
and S24, acquiring the initial matching coefficient as a target matching coefficient.
In this scheme, judging whether a historical parameter set exists establishes whether the user has a usage history, and further judging whether matching historical change characteristics exist allows an intention recognition service fitting personal habits to be provided quickly, improving the system's response rate while guaranteeing the accuracy of intention recognition.
In further embodiments, the step S3 includes:
s31, substituting the target matching coefficient and the sampling parameter set into a first matching function and a second matching function respectively to obtain first sampling data and second sampling data;
and S32, substituting the first sampling data and the second sampling data into a decision function; if the calculation result is greater than a preset threshold, the end of the operation gesture of the touch operation is judged to be accelerating and a first operation is executed; otherwise, the end of the operation gesture is judged to be decelerating and a second operation is executed.
In a further embodiment, the first matching function is calculated as follows:
y(i)=m(j)a(i)-2-n;
wherein y (i) is the first sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
In a further embodiment, the second matching function is calculated as follows:
[The second matching function is given only as an image in the original publication (Figure BDA0003448610650000031); its exact formula is not recoverable from the text.]
wherein z (i) is the second sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
In a further embodiment, the decision function is calculated as follows:
[The decision function is given only as an image in the original publication (Figure BDA0003448610650000041); its exact formula is not recoverable from the text.]
wherein y (i), z (i) are the first sample data and the second sample data respectively, and k is the total number of the sample points.
In this scheme, a large amount of user behavior data is substituted into a training model to establish a first matching function, a second matching function, and a decision function that can effectively predict the user's behavioral intention. Predicting the user's touch behavior through digitized calculation logic further raises the device's degree of intelligence, lowers the user's difficulty of use, and further improves driving safety.
In further embodiments, the step S4 includes:
s41, monitoring whether the user executes an undo operation to cancel the system operation; if so, proceeding to the next step, and if not, setting the target matching coefficient as the default matching coefficient;
and S42, if the calculation result is greater than the preset threshold, decreasing the value of the current target matching coefficient by a preset amplitude and returning to step S3; otherwise, increasing it by the preset amplitude and returning to step S3, where the user operation intention is re-judged and executed.
In this scheme, after the user's intention is predicted and executed, an undo-monitoring program is added: whether the user's intention was accurately recognized is judged by monitoring whether the user undoes the predicted system operation, and an offset correction is then applied according to how the calculation result compares with the preset threshold, correcting the corresponding target matching coefficient to better fit the current user's intention recognition habits.
In a further embodiment, the initial matching coefficient is a predicted matching coefficient generated by importing the behavior habits of ordinary people into a training model.
In this scheme, the behavior habits of ordinary people, obtained through big-data analysis, are imported into the training model, and the resulting predicted matching coefficients are used as each user's initial parameters; this accommodates the majority of users while reducing the adaptation effort required of each one.
The invention also provides a gesture recognition system based on the touch pad, which comprises a touch sensor, a processor and a memory, wherein the touch sensor is connected with the touch pad; the processor comprises a calculation module, a judgment module, a result output module, a comparison module, a matching coefficient determination module and a monitoring module, wherein the calculation module, the judgment module and the result output module are sequentially connected, the matching coefficient determination module is connected with the comparison module, the monitoring module, the judgment module and the memory, and the comparison module is connected with the calculation module and the memory;
the touch sensor is used for sensing touch operation on the touch pad and acquiring the number of corresponding touch points;
the memory is used for storing a preset calculation formula and a historical parameter set;
the calculation module is used for acquiring a plurality of sampling points from the touch points according to a preset sampling strategy, calculating touch parameters of the sampling points and acquiring a sampling parameter set;
the comparison module is used for judging whether a history parameter set exists or not and judging whether the change characteristics of the sampling parameter set are matched with the history change characteristics of the history parameter set or not;
the matching coefficient determining module is used for determining a target matching coefficient according to the judgment result and the matching result of the comparison module;
the judging module is used for substituting the target matching coefficient into the preset calculation formula and determining the user operation intention according to the obtained calculation result;
and the result output module is used for outputting the user operation intention and executing corresponding system operation.
The monitoring module is used for correcting the target matching coefficient when it detects that the user executes an undo operation, after which the user operation intention is re-judged and executed; otherwise, it sets the target matching coefficient as the default matching coefficient.
Drawings
Fig. 1 is a flowchart of a method for recognizing a gesture based on a touch panel according to embodiment 1 of the present invention;
fig. 2 is a system framework diagram of a touch panel-based gesture recognition system according to embodiment 2 of the present invention;
fig. 3 is a schematic diagram of a touch action according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments, including the drawings, are given solely for the purpose of illustration and are not to be construed as limitations of the invention, since many variations are possible without departing from its spirit and scope.
Example 1
As shown in fig. 1, the gesture recognition method based on a touch panel according to the embodiment of the present invention includes steps S1 to S4:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set, wherein the method comprises the following steps of S11-S13:
s11, acquiring the number N of corresponding touch points on a touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing the touch path into a plurality of sampling intervals according to the sampling points;
specifically, k touch points are selected from the N touch points as sampling points, and the number of touch points in each sampling interval is N. The sampling interval can be set according to the number of the touch points actually acquired.
For example, the number of touch points is 200, and 10 (i.e., k is 10) sample points are selected from the touch points at equal intervals: s0, s1, … s9, then 20 touch points are included between each sampling point, i.e., s0 ═ p19, s1 ═ p39, and s2 ═ p59 … s9 ═ p 199.
And S13, calculating the acceleration of each sampling interval as the touch parameter of the sampling point according to the trigger time of each touch point, and acquiring a sampling parameter set.
Specifically, taking the initial touch point as time 0, the times t(i) (i = 0, 1, 2, …, k) corresponding to the k sampling points are obtained, the acceleration a(i) of each sampling interval is calculated, and the accelerations are assembled into the sampling parameter set A(0).
In this embodiment, several touch points on the touch path of the touch operation are selected as sampling points for detection, which improves the computational efficiency of the algorithm, and the acceleration of each sampling interval is calculated as the touch parameter of the corresponding sampling point, which further improves the algorithm's accuracy.
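To make the sampling step concrete, the following is a minimal Python sketch of S11–S13. It assumes at least k touch points, equally spaced sampling points, a one-dimensional swipe coordinate, and a finite-difference acceleration estimate; the patent does not specify how a(i) is derived from the trigger times, so TouchPoint, sample_stroke, and the numeric details are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class TouchPoint:
        x: float  # position along the swipe axis (assumed one-dimensional)
        t: float  # trigger time in seconds, 0 at the initial touch point

    def sample_stroke(points: list[TouchPoint], k: int = 10) -> list[float]:
        """S11-S13: return the sampling parameter set A(0), one acceleration per interval."""
        n = len(points) // k                    # touch points per sampling interval
        # Equally spaced sampling points: s0 = p[n-1], s1 = p[2n-1], ..., s(k-1) = p[kn-1].
        samples = [points[(i + 1) * n - 1] for i in range(k)]
        accels, prev, prev_v = [], points[0], 0.0
        for s in samples:
            dt = s.t - prev.t                   # interval duration from trigger times
            v = (s.x - prev.x) / dt             # mean velocity over the interval
            accels.append((v - prev_v) / dt)    # finite-difference acceleration a(i)
            prev, prev_v = s, v
        return accels

With 200 touch points and k = 10, this reproduces the s0 = p19, …, s9 = p199 spacing of the example above.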
S2, judging whether a history parameter set exists, if so, determining a target matching coefficient according to the history parameter set and the sampling parameter set, otherwise, determining the target matching coefficient according to the sampling parameter set, and the method comprises the following steps of S21-S24:
s21, judging whether a history parameter set exists, if yes, entering the next step, and if not, entering the step S24;
s22, acquiring a preset number of historical parameter sets A (g) and acquiring corresponding historical change characteristics;
the preset number of the historical parameter sets can be set as required, for example, if the number of the historical acceleration sets is greater than 50, 50 stored recently are read; if less than 50, all reads (i.e., (g ═ 1, -2, … -h, h ≦ 50)).
S23, determining the change characteristics of the sampling parameter set A (0), matching with the historical change characteristics, if the matching is successful, acquiring the matching coefficient corresponding to the historical parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
and S24, acquiring an initial matching coefficient m' as a target matching coefficient.
In this embodiment, the initial matching coefficient is a predicted matching coefficient generated by importing the behavior habits of ordinary people into a training model.
In this embodiment, the behavior habits of ordinary people, obtained through big-data analysis, are imported into the training model, and the resulting predicted matching coefficients are used as each user's initial parameters; this accommodates the majority of users while reducing the adaptation effort required of each one.
In this embodiment, judging whether a historical parameter set exists establishes whether the current user has a usage history, and further judging whether matching historical change characteristics exist allows an intention recognition service fitting personal habits to be provided quickly, improving the system's response rate while guaranteeing the accuracy of intention recognition.
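A minimal sketch of the coefficient lookup in S21–S24 follows. The patent does not define the "change characteristic" or the matching rule numerically, so the mean-acceleration signature, the tolerance tol, and the function name target_coefficient are hypothetical choices made only for illustration.

    def target_coefficient(a0: list[float],
                           history: list[tuple[list[float], float]],
                           m_initial: float,
                           tol: float = 0.2) -> float:
        """S21-S24: reuse a stored coefficient when the change characteristic matches."""
        sig = sum(a0) / len(a0)                 # assumed change-characteristic signature
        for a_g, m_g in history[-50:]:          # read at most the 50 most recent sets
            sig_g = sum(a_g) / len(a_g)
            if abs(sig - sig_g) <= tol * max(abs(sig_g), 1e-9):
                return m_g                      # matched: reuse that set's coefficient
        return m_initial                        # no history or no match: initial m'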
S3, substituting the target matching coefficient into a preset calculation formula, determining the user operation intention according to the obtained calculation result and executing the corresponding system operation, wherein the method comprises the following steps of S31-S32:
and S31, substituting the target matching coefficient and the sampling parameter set into the first matching function and the second matching function respectively to obtain first sampling data and second sampling data.
In this embodiment, the calculation formula of the first matching function is as follows:
y(i)=m(j)a(i)-2-n;
where y (i) is the first sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
m(j) corresponds to the j-th operating habit; for example, m(1) corresponds to a faster hand-speed operating habit, m(2) to a slower hand-speed operating habit, and so on. Hand speed is characterized by the acceleration a(i) during the slide.
The calculation formula of the second matching function is as follows:
[The second matching function is given only as an image in the original publication (Figure BDA0003448610650000081); its exact formula is not recoverable from the text.]
wherein z (i) is the second sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
And S32, substituting the first sampling data and the second sampling data into the decision function; if the calculation result is greater than the preset threshold, the end of the operation gesture of the touch operation is judged to be accelerating and the first operation is executed; otherwise, the end of the operation gesture is judged to be decelerating and the second operation is executed.
The decision function is calculated as follows:
[The decision function is given only as an image in the original publication (Figure BDA0003448610650000082); its exact formula is not recoverable from the text.]
where y(i) and z(i) are the first sample data and the second sample data, respectively, and k is the total number of sampling points.
The first operation and the second operation can be predefined according to the touch interface of the actual application; for example, in a song playlist, the first operation is a page-turning operation and the second operation is a track-switching operation.
In this embodiment, a first matching function, a second matching function, and a decision function that can effectively predict the user's behavioral intention are built by substituting a large amount of user behavior data into the training model; predicting the user's touch behavior through digitized calculation logic further raises the device's degree of intelligence, lowers the user's difficulty of use, and further improves driving safety.
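The decision pipeline of S31–S32 could be organized as below. This is a sketch under heavy assumptions: the first matching function is implemented in its literal flattened reading y(i) = m(j)*a(i) - 2 - n (the original typography may instead intend an exponent), and the second matching function and decision function, which appear only as images in the source, are injected as caller-supplied callables rather than guessed.

    from typing import Callable

    def decide(a0: list[float], m: float, n: int,
               second_fn: Callable[[float, float, int], float],
               q_fn: Callable[[list[float], list[float]], float],
               threshold: float = 0.0) -> str:
        """S31-S32: compute y(i), z(i), then the decision value Q."""
        y = [m * a - 2 - n for a in a0]       # literal reading of the first matching function
        z = [second_fn(m, a, n) for a in a0]  # z(i): formula shown only as an image
        q = q_fn(y, z)                        # Q: formula shown only as an image
        # Q > threshold: gesture end accelerating -> first operation (e.g., page turn);
        # otherwise: gesture end decelerating -> second operation (e.g., track switch).
        return "first_operation" if q > threshold else "second_operation"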
S4, when detecting that the user executes the undo operation, correcting the target matching coefficient and returning to the step S3, wherein the steps S41 to S42 are as follows:
s41, monitoring whether the user executes an undo operation to cancel the system operation; if so, proceeding to the next step, and if not, setting the target matching coefficient as the default matching coefficient;
and S42, if the calculation result is greater than the preset threshold, decreasing the value of the current target matching coefficient by the preset amplitude and returning to step S3; otherwise, increasing it by the preset amplitude and returning to step S3, where the user operation intention is re-judged and executed.
In this embodiment, after the user's intention is predicted and executed, an undo-monitoring program is added: whether the user's intention was accurately recognized is judged by monitoring whether the user undoes the predicted system operation, and an offset correction is then applied according to how the calculation result compares with the preset threshold, correcting the corresponding target matching coefficient to better fit the current user's intention recognition habits.
In this embodiment, taking as an example 10 sampling points (i.e., k = 10, with n touch points in each sampling interval), a preset threshold of 0, a page-turning operation as the first operation, and a track-switching operation as the second operation, the gesture recognition calculation and decision proceed as follows:
Substituting a(i) and n into the first matching function yields the ten values y0–y9:
y0=2a(0)-2-n,
y1=2a(1)-2-n,
...
y9=2a(9)-2-n.
substituting a (i) and n into a second matching function to obtain 9 values of z0-z 9:
Figure BDA0003448610650000091
Figure BDA0003448610650000092
...
Figure BDA0003448610650000093
Substituting y0–y9 and z0–z9 into the decision function: if Q is greater than zero, the end of the operation gesture is judged to be accelerating and the page-turning operation is executed; if Q is less than or equal to zero, the end of the operation gesture is judged to be decelerating and the track-switching operation is executed.
Subsequently, when it is judged that the user has undone an operation executed for Q > 0, the value of the current target matching coefficient is decreased by the preset amplitude to obtain the corrected target matching coefficient, and the process returns to step S3; when it is judged that the user has undone an operation executed for Q ≤ 0, the value is increased by the preset amplitude to obtain the corrected target matching coefficient, and the process returns to step S3.
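The correction rule of S41–S42 then reduces to a few lines. The preset amplitude delta is an assumed tuning constant and correct_on_undo is an illustrative name, not terminology from the patent.

    def correct_on_undo(m: float, q: float, undone: bool, delta: float = 0.1) -> float:
        """S41-S42: adjust the target matching coefficient after an undo."""
        if not undone:
            return m              # no undo observed: m becomes the default coefficient
        # An undone Q > 0 decision means the prediction overshot, so decrease m;
        # an undone Q <= 0 decision means it undershot, so increase m.
        return m - delta if q > 0 else m + delta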
This embodiment presets a calculation formula. When the touch operation input by the user is obtained, a sampling parameter set and a target matching coefficient are calculated in turn by sampling and analyzing the touch path and substituted into the preset calculation formula, and the user's operation intention is determined and the corresponding system operation executed according to the calculation result. Because the user's routine touch operations are substituted into a training model, the user's operation intention can be predicted from the preset calculation formula after only a simple sliding touch, so the intention is recognized and executed quickly and effectively, the user's operation time and difficulty are reduced, and driving safety is improved. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intention is judged; and by correcting the target matching coefficient, the system can train and adapt to successive results produced during the driver's operation, so that the machine intelligently follows changes in the user's habits.
Example 2
The reference numbers in the drawings of the present embodiment include: touch sensor 1; processor 2, with calculation module 21, judgment module 22, result output module 23, comparison module 24, matching coefficient determination module 25, and monitoring module 26; and memory 3.
The embodiment of the invention also provides a gesture recognition system based on the touch pad, which is shown in fig. 2 and comprises a touch sensor 1, a processor 2 and a memory 3, wherein the touch sensor 1 is connected with the touch pad; the processor 2 comprises a calculation module 21, a judgment module 22, a result output module 23, a comparison module 24, a matching coefficient determination module 25 and a monitoring module 26, wherein the calculation module 21, the judgment module 22 and the result output module 23 are sequentially connected, the matching coefficient determination module 25 is connected with the comparison module 24, the monitoring module 26, the judgment module 22 and the memory 3, and the comparison module 24 is connected with the calculation module 21 and the memory 3;
the touch sensor 1 is used for sensing touch operation on the touch pad and acquiring the number of corresponding touch points;
the memory 3 is used for storing a preset calculation formula and a historical parameter set;
the calculation module 21 is configured to obtain a plurality of sampling points from the touch points according to a preset sampling strategy, calculate touch parameters of the sampling points, and obtain a sampling parameter set;
the comparison module 24 is configured to determine whether a history parameter set exists, and determine whether a change characteristic of the sampling parameter set matches a history change characteristic of the history parameter set;
the matching coefficient determining module 25 is configured to determine a target matching coefficient according to the judgment result and the matching result of the comparison module;
the judging module 22 is used for substituting the target matching coefficient into a preset calculation formula and determining the user operation intention according to the obtained calculation result;
the result output module 23 is used for outputting the user operation intention and executing the corresponding system operation.
The monitoring module 26 is configured to correct the target matching coefficient when it detects that the user executes an undo operation, after which the user operation intention is re-judged and executed; otherwise, it sets the target matching coefficient as the default matching coefficient.
The recognition system provided by this embodiment uses the respective modules to implement each step of the recognition method, providing a hardware basis for the method and facilitating its implementation.
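As an illustration only, the module wiring of Example 2 might be sketched as a single class that reuses the hypothetical functions from the earlier sketches (sample_stroke, target_coefficient, decide); none of these names come from the patent.

    class GestureRecognizer:
        """Example 2 wiring: calculation, comparison, coefficient, judgment, output."""
        def __init__(self, m_initial: float):
            self.memory: list[tuple[list[float], float]] = []  # historical parameter sets
            self.m_initial = m_initial

        def on_stroke(self, points, second_fn, q_fn, k: int = 10) -> str:
            a0 = sample_stroke(points, k)                             # calculation module (S1)
            m = target_coefficient(a0, self.memory, self.m_initial)   # comparison + coefficient modules (S2)
            op = decide(a0, m, len(points) // k, second_fn, q_fn)     # judgment + output modules (S3)
            self.memory.append((a0, m))                               # persist for later matching
            return op  # the monitoring module (S4) would call correct_on_undo on an undo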
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to them; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included within the scope of the present invention.

Claims (10)

1. A gesture recognition method based on a touch pad is characterized by comprising the following steps:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set;
s2, judging whether a history parameter set exists or not, if so, determining a target matching coefficient according to the history parameter set and the sampling parameter set, and if not, determining the target matching coefficient according to the sampling parameter set;
s3, substituting the target matching coefficient into a preset calculation formula, determining a user operation intention according to the obtained calculation result, and executing corresponding system operation;
s4, when detecting that the user executes the undo operation, correcting the target matching coefficient and returning to the step S3.
2. The touch-pad based gesture recognition method of claim 1, wherein said step S1 comprises:
s11, acquiring the number of corresponding touch points on the touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing the touch path into a plurality of sampling intervals according to the sampling points;
and S13, calculating the acceleration of each sampling interval as the touch parameter of the sampling point according to the trigger time of each touch point, and acquiring a sampling parameter set.
3. The touch-pad based gesture recognition method of claim 2, wherein said step S2 comprises:
s21, judging whether a history parameter set exists, if yes, entering the next step, and if not, entering the step S24;
s22, acquiring a preset number of historical parameter sets and acquiring corresponding historical change characteristics;
s23, determining the change characteristics of the sampling parameter set, matching the change characteristics with the historical change characteristics, if the matching is successful, acquiring the matching coefficient corresponding to the historical parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
and S24, acquiring the initial matching coefficient as a target matching coefficient.
4. The touch-pad based gesture recognition method of claim 1, wherein said step S3 comprises:
s31, substituting the target matching coefficient and the sampling parameter set into a first matching function and a second matching function respectively to obtain first sampling data and second sampling data;
and S32, substituting the first sampling data and the second sampling data into a decision function; if the calculation result is greater than a preset threshold, judging that the end of the operation gesture of the touch operation is accelerating and executing the first operation; otherwise, judging that the end of the operation gesture is decelerating and executing the second operation.
5. A method as recited in claim 4, wherein the first matching function is calculated as follows:
y(i)=m(j)a(i)-2-n;
wherein y (i) is the first sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
6. A method as claimed in claim 4, wherein the second matching function is calculated as follows:
[Formula given only as an image in the original publication (Figure FDA0003448610640000021); not recoverable from the text.]
wherein z (i) is the second sampling data, m (j) is the target matching coefficient, a (i) is the touch parameter in the sampling parameter set, and n is the sampling interval of the sampling point.
7. The touch-pad based gesture recognition method of claim 6, wherein the decision function is calculated as follows:
[Formula given only as an image in the original publication (Figure FDA0003448610640000022); not recoverable from the text.]
wherein y (i), z (i) are the first sample data and the second sample data respectively, and k is the total number of the sample points.
8. The touch-pad based gesture recognition method of claim 1, wherein said step S4 comprises:
s41, monitoring whether a user executes a cancelling operation for cancelling the system operation, if so, entering the next step, and if not, setting the target matching coefficient as a default matching coefficient;
and S42, if the calculation result is greater than a preset threshold, decreasing the value of the current target matching coefficient by a preset amplitude and returning to step S3; otherwise, increasing it by the preset amplitude and returning to step S3, re-judging and executing the user operation intention.
9. A touch-pad based gesture recognition method according to claim 3, wherein: the initial matching coefficient is a prediction matching coefficient generated by importing the behavior habits of common people into a training model.
10. A gesture recognition system based on a touch pad is characterized in that: the touch control system comprises a touch sensor, a processor and a memory, wherein the touch sensor is connected with a touch pad; the processor comprises a calculation module, a judgment module, a result output module, a comparison module, a matching coefficient determination module and a monitoring module, wherein the calculation module, the judgment module and the result output module are sequentially connected, the matching coefficient determination module is connected with the comparison module, the monitoring module, the judgment module and the memory, and the comparison module is connected with the calculation module and the memory;
the touch sensor is used for sensing touch operation on the touch pad and acquiring the number of corresponding touch points;
the memory is used for storing a preset calculation formula and a historical parameter set;
the calculation module is used for acquiring a plurality of sampling points from the touch points according to a preset sampling strategy, calculating touch parameters of the sampling points and acquiring a sampling parameter set;
the comparison module is used for judging whether a history parameter set exists or not and judging whether the change characteristics of the sampling parameter set are matched with the history change characteristics of the history parameter set or not;
the matching coefficient determining module is used for determining a target matching coefficient according to the judgment result and the matching result of the comparison module;
the judging module is used for substituting the target matching coefficient into the preset calculation formula and determining the user operation intention according to the obtained calculation result;
the result output module is used for outputting the user operation intention and executing corresponding system operation;
the monitoring module is used for correcting the target matching coefficient when a monitoring user executes revocation operation, and judging the operation intention of the user again and executing the operation; otherwise, setting the target matching coefficient as a default matching coefficient.
CN202111667309.2A 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad Active CN114578959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667309.2A CN114578959B (en) 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad

Publications (2)

Publication Number Publication Date
CN114578959A 2022-06-03
CN114578959B 2024-03-29

Family

ID=81768809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667309.2A Active CN114578959B (en) 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad

Country Status (1)

Country Link
CN (1) CN114578959B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8436821B1 (en) * 2009-11-20 2013-05-07 Adobe Systems Incorporated System and method for developing and classifying touch gestures
CN106814939A (en) * 2015-11-27 2017-06-09 北京奇虎科技有限公司 A kind of method and terminal of terminal screenshotss
US20170320501A1 (en) * 2016-05-06 2017-11-09 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methodologies for controlling an autonomous vehicle
CN107357516A (en) * 2017-07-10 2017-11-17 南京邮电大学 A kind of gesture query intention Forecasting Methodology based on hidden Markov model
US20200201536A1 (en) * 2017-07-28 2020-06-25 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Black screen gesture detection method and device, storage medium, and mobile terminal
CN107704190A (en) * 2017-11-06 2018-02-16 广东欧珀移动通信有限公司 Gesture identification method, device, terminal and storage medium
CN109213419A (en) * 2018-10-19 2019-01-15 北京小米移动软件有限公司 touch operation processing method, device and storage medium

Also Published As

Publication number Publication date
CN114578959B (en) 2024-03-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant