CN114578959B - Gesture recognition method and system based on touch pad - Google Patents

Gesture recognition method and system based on touch pad

Info

Publication number
CN114578959B
CN114578959B (application number CN202111667309.2A)
Authority
CN
China
Prior art keywords
touch
sampling
matching coefficient
module
parameter set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111667309.2A
Other languages
Chinese (zh)
Other versions
CN114578959A (en)
Inventor
卜俊吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Original Assignee
Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou Huayang General Intelligence Vehicle System Development Co ltd filed Critical Huizhou Huayang General Intelligence Vehicle System Development Co ltd
Priority to CN202111667309.2A priority Critical patent/CN114578959B/en
Publication of CN114578959A publication Critical patent/CN114578959A/en
Application granted granted Critical
Publication of CN114578959B publication Critical patent/CN114578959B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of touch screen control and provides a gesture recognition method and system based on a touch pad. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intent is judged; by correcting the target matching coefficient, the system can train and adapt itself to the outcomes of the driver's repeated operations, so that the machine intelligently follows changes in the user's habits.

Description

Gesture recognition method and system based on touch pad
Technical Field
The invention relates to the technical field of touch screen control, in particular to a gesture recognition method and system based on a touch pad.
Background
A touch screen (also called a "touch panel") is a sensing liquid-crystal display device that accepts input signals such as finger contacts. When a graphical button on the screen is touched, the screen's tactile feedback system drives the connected devices according to a pre-programmed routine; touch screens can replace mechanical button panels and, together with the liquid-crystal display, produce vivid audio-visual effects. As the newest class of computer input device, the touch screen is currently the simplest, most convenient, and most natural form of human-machine interaction. It gives multimedia a new face and is a highly attractive new multimedia interaction device, applied mainly in public information kiosks, office management, industrial control, military command, electronic games, song ordering, multimedia teaching, real-estate pre-sales, and the like.
Touch screens have become an important channel for human-machine interaction.
With the trend toward large in-vehicle screens, entertainment displays now present UIs that are increasingly dazzling, refined, and complex to operate. During driving, however, selecting from media lists and function lists demands large gestures and long recognition times, which distracts the driver and creates a hidden danger to driving safety.
Disclosure of Invention
The invention provides a gesture recognition method and system based on a touch pad, solving the technical problems of existing touch technology: gesture recognition requires large motions and takes a long time, distracting the driver and creating hidden dangers to driving safety.
In order to solve the technical problems, the invention provides a gesture recognition method based on a touch pad, which comprises the following steps:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set;
s2, judging whether a historical parameter set exists, if so, determining a target matching coefficient according to the historical parameter set and the sampling parameter set, and if not, determining the target matching coefficient according to the sampling parameter set;
s3, substituting the target matching coefficient into a preset calculation formula, determining the operation intention of a user according to the obtained calculation result, and executing corresponding system operation;
and S4, when detecting that the user executes an undo operation, correcting the target matching coefficient and returning to step S3.
This basic scheme presets a calculation formula. When a touch operation input by the user is acquired, the touch path is sampled and analyzed, a sampling parameter set and a target matching coefficient are computed in turn and substituted into the preset calculation formula, and the user's operation intent is determined, and the corresponding system operation executed, according to the calculation result. Because the user's routine touch operations are fed into a training model, the user only needs to perform a simple sliding touch and the operation intent can be predicted from the preset calculation formula; the intent is thus recognized and executed quickly and effectively, the user's operating time and difficulty are reduced, and driving safety is improved. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intent is judged; by correcting the target matching coefficient, the system can train and adapt to the outcomes of the driver's repeated operations, so that the machine intelligently follows changes in the user's habits.
In a further embodiment, the step S1 includes:
s11, acquiring the number of corresponding touch points on a touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing the touch path into a plurality of sampling intervals according to the sampling points;
s13, calculating the acceleration of each sampling interval as the touch parameter of the sampling point according to the triggering time of each touch point, and obtaining a sampling parameter set.
In this scheme, selecting a plurality of touch points on the touch path as sampling points and performing sampled detection improves the running efficiency of the algorithm, and calculating the acceleration of each sampling interval as the touch parameter of the corresponding sampling point further improves the algorithm's accuracy.
In a further embodiment, the step S2 includes:
s21, judging whether a history parameter set exists, if so, entering the next step, and if not, entering the step S24;
s22, acquiring a preset number of history parameter sets and acquiring corresponding history change characteristics;
s23, determining the change characteristics of the sampling parameter set, matching the change characteristics with the history change characteristics, if the matching is successful, acquiring a matching coefficient corresponding to the history parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
s24, acquiring an initial matching coefficient as a target matching coefficient.
In this scheme, whether the current user has a history of use can be judged from whether a history parameter set exists; then, depending on whether a matching history change characteristic is found, the user can quickly be given an intent recognition service that fits personal habits, improving system response speed while preserving the accuracy of intent recognition.
In a further embodiment, the step S3 includes:
s31, substituting the target matching coefficient and the sampling parameter set into a first matching function and a second matching function respectively to obtain first sampling data and second sampling data;
s32, substituting the first sampling data and the second sampling data into a judgment function, judging that the tail end of the operation gesture of the current touch operation is accelerated if the calculation result is larger than a preset threshold value, executing the first operation, and otherwise, judging that the tail end of the operation gesture of the current touch operation is decelerated, and executing the second operation.
In a further embodiment, the first matching function is calculated as follows:
y(i) = m(j)^a(i) - 2 - n;
wherein y(i) is the first sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points.
In a further embodiment, the second matching function is calculated as follows:
wherein z(i) is the second sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points.
In a further embodiment, the decision function is calculated as follows:
wherein y(i) and z(i) are the first sampling data and the second sampling data, respectively, and k is the total number of the sampling points.
In this scheme, a large amount of user behavior data is fed into the training model to establish a first matching function, a second matching function, and a decision function that can effectively predict the user's behavioral intent. Predicting the user's touch behavior through this data-driven calculation logic further raises the device's degree of intelligence, lowers the difficulty of use, and improves driving safety.
In a further embodiment, the step S4 includes:
s41, monitoring whether a user executes a revocation operation for revoking the system operation, if so, entering the next step, and if not, setting the target matching coefficient as a default matching coefficient;
and S42, if the calculation result is larger than a preset threshold value, reducing the value of the current target matching coefficient by a preset amplitude value, returning to the step S3, otherwise, increasing the value of the current target matching coefficient by the preset amplitude value, returning to the step S3, and judging the operation intention of the user again and executing.
In this scheme, after the user's intent is predicted and executed, an undo-monitoring routine is added: whether intent recognition was accurate is judged by monitoring whether the user revokes the predicted system operation, an offset correction is then made according to how the calculation result compares with the preset threshold, and the corresponding target matching coefficient is corrected, improving how well intent recognition fits the current user's habits.
In a further embodiment, the initial matching coefficient is a predicted matching coefficient generated by importing ordinary human behavioral learning inertia into a training model.
In this scheme, big-data analysis imports ordinary human behavioral learning into the training model, and the resulting predicted matching coefficient serves as the initial parameter for users; this suits most users while reducing the difficulty of adapting to each individual user.
The invention also provides a gesture recognition system based on the touch panel, which comprises a touch sensor, a processor and a memory, wherein the touch sensor is connected with the touch panel; the processor comprises a calculation module, a judgment module, a result output module, a comparison module, a matching coefficient determination module and a monitoring module, wherein the calculation module, the judgment module and the result output module are sequentially connected, the matching coefficient determination module is connected with the comparison module, the monitoring module, the judgment module and the memory, and the comparison module is connected with the calculation module and the memory;
the touch sensor is used for sensing touch operation on the touch panel and acquiring the corresponding number of touch points;
the memory is used for storing a preset calculation formula and a historical parameter set;
the calculation module is used for acquiring a plurality of sampling points from the touch points according to a preset sampling strategy, computing touch parameters of the sampling points and acquiring a sampling parameter set;
the comparison module is used for judging whether a historical parameter set exists or not and judging whether the change characteristics of the sampling parameter set are matched with the historical change characteristics of the historical parameter set or not;
the matching coefficient determining module is used for determining a target matching coefficient according to the judgment result and the matching result of the comparison module;
the judging module is used for substituting the target matching coefficient into the preset calculation formula and determining the operation intention of the user according to the obtained calculation result;
the result output module is used for outputting the user operation intention and executing corresponding system operation.
The monitoring module is used for correcting the target matching coefficient when it detects that the user has executed an undo operation, and for judging the user's operation intent again and executing it; otherwise, the target matching coefficient is set as a default matching coefficient.
Drawings
FIG. 1 is a workflow diagram of a touch pad based gesture recognition method provided in embodiment 1 of the present invention;
FIG. 2 is a system frame diagram of a touch pad based gesture recognition system according to embodiment 2 of the present invention;
fig. 3 is a schematic diagram of a touch operation according to an embodiment of the present invention.
Detailed Description
The following examples are given for illustration only and are not to be construed as limiting the invention; the drawings are for reference and description only and do not limit the scope of the invention, since many variations are possible without departing from its spirit and scope.
Example 1
The gesture recognition method based on the touch pad provided by the embodiment of the invention, as shown in fig. 1, in the embodiment, includes steps S1 to S4:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set, wherein the method comprises the following steps of S11-S13:
s11, acquiring the number N of corresponding touch points on a touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing a touch path into a plurality of sampling intervals according to the sampling points;
specifically, k touch points are selected from the N touch points as sampling points, and the number of touch points in each sampling interval is N. The sampling interval can be set according to the number of touch points actually acquired.
For example, the number of touch points is 200, and 10 (i.e., k=10) sampling points are selected from the intermediate intervals: s0, s1, … s9, then 20 touch points are included between each sampling point, i.e., s0=p19, s1=p39, s2=p59 … s9=p199.
S13, according to the triggering time of each touch point, calculating the acceleration of each sampling interval as the touch parameter of the sampling point, and obtaining a sampling parameter set.
Specifically, taking the initial touch point as time 0, the moments t(i) corresponding to the k sampling points are obtained (i = 0, 1, 2, …, k−1), the acceleration a(i) of each sampling interval is then calculated, and the values are assembled into the sampling parameter set A(0).
In this embodiment, selecting a plurality of touch points on the touch path as sampling points and performing sampled detection improves the running efficiency of the algorithm, and calculating the acceleration of each sampling interval as the touch parameter of the corresponding sampling point further improves the algorithm's accuracy.
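As an illustration of steps S11–S13, the following Python sketch selects k evenly spaced sampling points from the touch points and estimates each interval's acceleration. The specification does not give the acceleration formula, so the finite-difference estimate and all helper names below are assumptions, not the patent's method:

```python
import math

def sampling_parameter_set(points, times, k=10):
    """S11-S13 sketch: points are (x, y) touch coordinates, times their
    trigger moments with the initial touch at t = 0. Picks k evenly spaced
    sampling points (for N = 200, k = 10: s0 = p19, ..., s9 = p199) and
    estimates the acceleration a(i) of each sampling interval by finite
    differences -- an assumed formula, not given in the specification."""
    n = len(points) // k                        # touch points per sampling interval
    indices = [n * (j + 1) - 1 for j in range(k)]
    accel, prev_v, prev_t, prev_p = [], 0.0, 0.0, points[0]
    for i in indices:
        dt = times[i] - prev_t
        v = math.dist(points[i], prev_p) / dt   # mean speed over the interval
        accel.append((v - prev_v) / dt)         # a(i): change in speed per interval
        prev_v, prev_t, prev_p = v, times[i], points[i]
    return accel                                # sampling parameter set A(0)
```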
S2, judging whether a historical parameter set exists, if so, determining a target matching coefficient according to the historical parameter set and the sampling parameter set, and if not, determining the target matching coefficient according to the sampling parameter set, wherein the steps comprise S21-S24:
s21, judging whether a history parameter set exists, if so, entering the next step, and if not, entering the step S24;
s22, acquiring a preset number of history parameter sets A (g) and acquiring corresponding history change characteristics;
the preset number of the historical parameter sets can be set according to needs, for example, if the number of the historical acceleration sets is greater than 50, the latest saved 50 are read; if less than 50, then all reads (i.e., (g= -1, -2, … -h, h.ltoreq.50)).
S23, determining the change characteristics of the sampling parameter set A (0), matching the change characteristics with the history change characteristics, if the matching is successful, acquiring a matching coefficient corresponding to the history parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
s24, acquiring an initial matching coefficient m' as a target matching coefficient.
In this embodiment, the initial matching coefficient is a predicted matching coefficient generated by importing ordinary human behavioral learning inertia into the training model.
Here, big-data analysis imports ordinary human behavioral learning into the training model, and the resulting predicted matching coefficient serves as the initial parameter for users; this suits most users while reducing the difficulty of adapting to each individual user.
In this embodiment, whether the current user has a history of use can be judged from whether a history parameter set exists; then, depending on whether a matching history change characteristic is found, the user can quickly be given an intent recognition service that fits personal habits, improving system response speed while preserving the accuracy of intent recognition.
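A minimal sketch of the S21–S24 branching follows. The specification does not define the "change characteristic", so the comparison used here (mean absolute difference between acceleration curves) and the tolerance are assumptions:

```python
def target_matching_coefficient(history, sample, m_initial, tol=0.2):
    """S21-S24 sketch: history is a list of (parameter_set, coefficient)
    pairs, newest last; at most the 50 most recently saved sets are read.
    The change-feature match (mean absolute difference of the acceleration
    curves) and the tolerance tol are illustrative assumptions."""
    for hist_set, coeff in reversed(history[-50:]):
        if len(hist_set) == len(sample):
            diff = sum(abs(h - s) for h, s in zip(hist_set, sample)) / len(sample)
            if diff < tol:
                return coeff        # S23: matched a stored habit
    return m_initial                # S24: fall back to initial coefficient m'
```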
S3, substituting the target matching coefficient into a preset calculation formula, determining the operation intention of the user according to the obtained calculation result, and executing corresponding system operation, wherein the method comprises the steps of S31-S32:
s31, substituting the target matching coefficient and the sampling parameter set into the first matching function and the second matching function respectively to obtain first sampling data and second sampling data.
In this embodiment, the calculation formula of the first matching function is as follows:
y(i) = m(j)^a(i) - 2 - n;
wherein y(i) is the first sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points.
m(j) corresponds to the j-th operating habit; for example, m(1) corresponds to a faster hand speed and m(2) to a slower one, and so on. Hand speed can be characterized by the acceleration a(i) during the slide.
The calculation formula of the second matching function is as follows:
wherein z(i) is the second sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points.
S32, substituting the first sampling data and the second sampling data into a decision function; if the calculation result is greater than a preset threshold, judging that the operation gesture of the current touch operation ends with acceleration and executing the first operation; otherwise, judging that the operation gesture ends with deceleration and executing the second operation.
The calculation formula of the decision function is as follows:
wherein y(i) and z(i) are the first sampling data and the second sampling data, respectively, and k is the total number of sampling points.
The first operation and the second operation can be predefined according to a touch interface of an actual application, for example, in a song play list, the first operation is a page turning operation and the second operation is a track switching operation.
In this embodiment, a large amount of user behavior data is fed into the training model to establish a first matching function, a second matching function, and a decision function that can effectively predict the user's behavioral intent. Predicting the user's touch behavior through this data-driven calculation logic further raises the device's degree of intelligence, lowers the difficulty of use, and improves driving safety.
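The sketch below puts S31–S32 together. Only the first matching function survives in the text (y(i) = m(j)^a(i) - 2 - n, as recovered above); the second matching function and the decision function were lost with the formula images, so z(i) and Q here are pure placeholders, not the patent's formulas:

```python
def decide_operation(accel, m, n, threshold=0.0):
    """S31-S32 sketch. y follows the first matching function as recovered
    from the text; z and Q are placeholders standing in for the lost
    second matching function and decision function."""
    y = [m ** a - 2 - n for a in accel]         # first matching function
    z = [m ** (-a) - 2 - n for a in accel]      # placeholder second function
    q = sum(yi - zi for yi, zi in zip(y, z))    # placeholder decision statistic Q
    op = "first_operation" if q > threshold else "second_operation"
    return op, q    # first: gesture end accelerating (e.g. page turn);
                    # second: decelerating (e.g. track switch); q feeds S4
```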
S4, when detecting that the user executes the revocation operation, correcting the target matching coefficient and returning to the step S3, wherein the method comprises the following steps of S41-S42:
s41, monitoring whether a user executes a revocation operation for revoking the system operation, if so, entering the next step, and if not, setting a target matching coefficient as a default matching coefficient;
and S42, if the calculation result is larger than the preset threshold value, reducing the value of the current target matching coefficient by a preset amplitude value, returning to the step S3, otherwise, increasing the value of the current target matching coefficient by the preset amplitude value, returning to the step S3, and judging the operation intention of the user again and executing.
According to the method, after the user intention is predicted and executed, a revocation monitoring program is added, whether the user intention is accurately identified is judged by monitoring whether the user withdraws the predicted system operation, further offset correction is carried out according to the magnitude relation between the calculation result and the preset threshold value, and the corresponding target matching coefficient is corrected to improve the adaptation degree of the intention identification habit of the current user.
In this embodiment, taking 10 sampling points (i.e., k=10, with n touch points in each sampling interval), a preset threshold of 0, a page-turning operation as the first operation, a track-switching operation as the second operation, and a target matching coefficient m(j)=2, the gesture recognition calculation proceeds as follows:
Substituting a(i) and n into the first matching function yields the 10 values y0 to y9:
y0 = 2^a(0) - 2 - n,
y1 = 2^a(1) - 2 - n,
...
y9 = 2^a(9) - 2 - n.
Substituting a(i) and n into the second matching function likewise yields the 10 values z0 to z9:
...
Substituting y0–y9 and z0–z9 into the decision function: if Q is greater than zero, the end of the operation gesture is judged to accelerate and the page-turning operation is executed; if Q is less than or equal to zero, the end of the operation gesture is judged to decelerate and the track-switching operation is executed.
Then, if the user cancels an operation issued with Q > 0, the target matching coefficient is reduced by the preset amplitude to obtain a corrected target matching coefficient and the method returns to step S3; if the user cancels an operation issued with Q ≤ 0, the current target matching coefficient is increased by the preset amplitude to obtain a corrected target matching coefficient and the method returns to step S3.
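A sketch of the S41–S42 correction follows; the specification does not state the size of the "preset amplitude" or the default coefficient, so step and m_default are illustrative values:

```python
def correct_on_undo(m, q, undone, threshold=0.0, step=0.1, m_default=2.0):
    """S41-S42 sketch: if the user undid the predicted operation, shift
    the target matching coefficient against the wrong decision and let
    S3 run again; otherwise fall back to the default coefficient."""
    if not undone:
        return m_default            # S41: no undo, use default coefficient
    if q > threshold:
        return m - step             # operation with Q > 0 undone: lower m
    return m + step                 # operation with Q <= 0 undone: raise m
```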
This embodiment of the invention presets a calculation formula. When a touch operation input by the user is acquired, the touch path is sampled and analyzed, a sampling parameter set and a target matching coefficient are computed in turn and substituted into the preset calculation formula, and the user's operation intent is determined, and the corresponding system operation executed, according to the calculation result. Because the user's routine touch operations are fed into a training model, the user only needs to perform a simple sliding touch and the operation intent can be predicted from the preset calculation formula; the intent is thus recognized and executed quickly and effectively, the user's operating time and difficulty are reduced, and driving safety is improved. A monitoring thread that watches for the user's undo operation is added, which further improves the accuracy with which the user's intent is judged; by correcting the target matching coefficient, the system can train and adapt to the outcomes of the driver's repeated operations, so that the machine intelligently follows changes in the user's habits.
Example 2
Reference numerals in the drawings of the specification in this embodiment include: a touch sensor 1; the device comprises a processor 2, a calculation module 21, a judgment module 22, a result output module 23, a comparison module 24, a matching coefficient determination module 25 and a monitoring module 26; and a memory 3.
The embodiment of the invention also provides a gesture recognition system based on the touch panel, referring to fig. 2, which comprises a touch sensor 1, a processor 2 and a memory 3, wherein the touch sensor 1 is connected with the touch panel; the processor 2 comprises a calculation module 21, a judgment module 22, a result output module 23, a comparison module 24, a matching coefficient determination module 25 and a monitoring module 26, wherein the calculation module 21, the judgment module 22 and the result output module 23 are sequentially connected, the matching coefficient determination module 25 is connected with the comparison module 24, the monitoring module 26, the judgment module 22 and the memory 3, and the comparison module 24 is connected with the calculation module 21 and the memory 3;
the touch sensor 1 is used for sensing touch operation on the touch panel and acquiring the corresponding number of touch points;
the memory 3 is used for storing a preset calculation formula and a history parameter set;
the calculation module 21 is configured to obtain a plurality of sampling points from the touch points according to a preset sampling strategy, calculate touch parameters of the sampling points, and obtain a sampling parameter set;
the comparison module 24 is configured to determine whether a historical parameter set exists, and determine whether a change feature of the sampled parameter set matches a historical change feature of the historical parameter set;
the matching coefficient determining module 25 is configured to determine a target matching coefficient according to the judgment result and the matching result of the comparison module 24;
the judging module 22 is used for substituting the target matching coefficient into a preset calculation formula and determining the operation intention of the user according to the obtained calculation result;
the result output module 23 is used for outputting the user operation intention and executing the corresponding system operation.
The monitoring module 26 is configured to correct the target matching coefficient when it detects that the user has executed an undo operation, and to judge the user's operation intent again and execute it; otherwise, the target matching coefficient is set as a default matching coefficient.
The recognition system provided by this embodiment uses these modules to implement the steps of the recognition method, providing a hardware basis for the method and facilitating its implementation.
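Reusing the S1–S3 helpers sketched above, the module wiring of this embodiment could look like the following minimal sketch; memory is assumed to be any object with a history list and an m_initial attribute, and all method and attribute names are illustrative rather than from the patent:

```python
class GestureRecognitionSystem:
    """Example 2 sketch: memory 3 holds the stored history sets and the
    initial coefficient; the touch sensor 1 supplies points and times."""
    def __init__(self, memory):
        self.memory = memory

    def handle_touch(self, points, times, k=10):
        accel = sampling_parameter_set(points, times, k)        # calculation module 21
        m = target_matching_coefficient(self.memory.history,    # comparison module 24 +
                                        accel,                  # coefficient module 25
                                        self.memory.m_initial)
        op, q = decide_operation(accel, m, n=len(points) // k)  # judgment module 22
        self.memory.history.append((accel, m))                  # save for later matching
        return op, q                                            # result output module 23
```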
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to them; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention is an equivalent replacement and falls within the protection scope of the present invention.

Claims (7)

1. A touch pad-based gesture recognition method, comprising the steps of:
s1, acquiring a touch path of the touch operation, acquiring a plurality of sampling points according to a preset sampling strategy, calculating touch parameters of the sampling points, and acquiring a sampling parameter set;
s2, judging whether a historical parameter set exists, if so, determining a target matching coefficient according to the historical parameter set and the sampling parameter set, and if not, determining the target matching coefficient according to the sampling parameter set;
s3, substituting the target matching coefficient into a preset calculation formula, determining the operation intention of a user according to the obtained calculation result, and executing corresponding system operation;
s4, when the user is detected to execute the cancel operation, correcting the target matching coefficient and returning to the step S3;
the step S3 includes:
s31, substituting the target matching coefficient and the sampling parameter set into a first matching function and a second matching function respectively to obtain first sampling data and second sampling data;
s32, substituting the first sampling data and the second sampling data into a judgment function, judging that the tail end of the operation gesture of the current touch operation is accelerated if the calculation result is larger than a preset threshold value, executing the first operation, and otherwise, judging that the tail end of the operation gesture of the current touch operation is decelerated, and executing the second operation;
the calculation formula of the first matching function is as follows:
y(i) = m(j)^a(i) - 2 - n;
wherein y(i) is the first sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points;
the calculation formula of the second matching function is as follows:
wherein z(i) is the second sampling data, m(j) is the target matching coefficient, a(i) is a touch parameter in the sampling parameter set, and n is the sampling interval of the sampling points.
2. The method for recognizing gesture based on touch pad as set forth in claim 1, wherein the step S1 includes:
s11, acquiring the number of corresponding touch points on a touch path in the touch operation;
s12, selecting a plurality of touch points as sampling points, and dividing the touch path into a plurality of sampling intervals according to the sampling points;
s13, calculating the acceleration of each sampling interval as the touch parameter of the sampling point according to the triggering time of each touch point, and obtaining a sampling parameter set.
3. The method for recognizing gesture based on touch pad as set forth in claim 2, wherein the step S2 includes:
s21, judging whether a history parameter set exists, if so, entering the next step, and if not, entering the step S24;
s22, acquiring a preset number of history parameter sets and acquiring corresponding history change characteristics;
s23, determining the change characteristics of the sampling parameter set, matching the change characteristics with the history change characteristics, if the matching is successful, acquiring a matching coefficient corresponding to the history parameter set as a target matching coefficient, and if the matching is failed, entering the next step;
s24, acquiring an initial matching coefficient as a target matching coefficient.
4. The touch pad-based gesture recognition method of claim 1, wherein the decision function is calculated as follows:
wherein y(i) and z(i) are the first sampling data and the second sampling data, respectively, and k is the total number of the sampling points.
5. The method for recognizing gesture based on touch pad as set forth in claim 1, wherein the step S4 includes:
s41, monitoring whether a user executes a revocation operation for revoking the system operation, if so, entering the next step, and if not, setting the target matching coefficient as a default matching coefficient;
and S42, if the calculation result is larger than a preset threshold value, reducing the value of the current target matching coefficient by a preset amplitude value, returning to the step S3, otherwise, increasing the value of the current target matching coefficient by the preset amplitude value, returning to the step S3, and judging the operation intention of the user again and executing.
6. A touch pad based gesture recognition method according to claim 3, wherein: the initial matching coefficient is a prediction matching coefficient generated by leading the conventional human behavior learning inertia into a training model.
7. A touch pad-based gesture recognition system, applied to implement the touch pad-based gesture recognition method according to any one of claims 1 to 6, characterized by comprising a touch sensor, a processor and a memory, wherein the touch sensor is connected with the touch pad; the processor comprises a calculation module, a judgment module, a result output module, a comparison module, a matching coefficient determination module and a monitoring module, wherein the calculation module, the judgment module and the result output module are sequentially connected, the matching coefficient determination module is connected with the comparison module, the monitoring module, the judgment module and the memory, and the comparison module is connected with the calculation module and the memory;
the touch sensor is used for sensing touch operation on the touch panel and acquiring the corresponding number of touch points;
the memory is used for storing a preset calculation formula and a historical parameter set;
the calculation module is used for acquiring a plurality of sampling points from the touch points according to a preset sampling strategy, computing touch parameters of the sampling points and acquiring a sampling parameter set;
the comparison module is used for judging whether a historical parameter set exists or not and judging whether the change characteristics of the sampling parameter set are matched with the historical change characteristics of the historical parameter set or not;
the matching coefficient determining module is used for determining a target matching coefficient according to the judgment result and the matching result of the comparison module;
the judging module is used for substituting the target matching coefficient into the preset calculation formula and determining the operation intention of the user according to the obtained calculation result;
the result output module is used for outputting the user operation intention and executing corresponding system operation;
the monitoring module is used for correcting the target matching coefficient when monitoring that a user executes the cancel operation, judging the operation intention of the user again and executing the operation intention; otherwise, the target matching coefficient is set as a default matching coefficient.
CN202111667309.2A 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad Active CN114578959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667309.2A CN114578959B (en) 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111667309.2A CN114578959B (en) 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad

Publications (2)

Publication Number Publication Date
CN114578959A CN114578959A (en) 2022-06-03
CN114578959B (en) 2024-03-29

Family

ID=81768809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667309.2A Active CN114578959B (en) 2021-12-30 2021-12-30 Gesture recognition method and system based on touch pad

Country Status (1)

Country Link
CN (1) CN114578959B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8436821B1 (en) * 2009-11-20 2013-05-07 Adobe Systems Incorporated System and method for developing and classifying touch gestures
CN106814939A (en) * 2015-11-27 2017-06-09 北京奇虎科技有限公司 A kind of method and terminal of terminal screenshotss
CN107357516A (en) * 2017-07-10 2017-11-17 南京邮电大学 A kind of gesture query intention Forecasting Methodology based on hidden Markov model
CN107704190A (en) * 2017-11-06 2018-02-16 广东欧珀移动通信有限公司 Gesture identification method, device, terminal and storage medium
CN109213419A (en) * 2018-10-19 2019-01-15 北京小米移动软件有限公司 touch operation processing method, device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10053110B2 (en) * 2016-05-06 2018-08-21 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methodologies for controlling an autonomous vehicle
CN107463329B (en) * 2017-07-28 2019-08-27 Oppo广东移动通信有限公司 Detection method, device, storage medium and the mobile terminal of blank screen gesture

Also Published As

Publication number Publication date
CN114578959A (en) 2022-06-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant