CN113064545B - Gesture recognition method and system - Google Patents

Gesture recognition method and system

Info

Publication number
CN113064545B
CN113064545B (application CN202110540651.XA)
Authority
CN
China
Prior art keywords
gesture
recognized
touch point
data
sample
Prior art date
Legal status
Active
Application number
CN202110540651.XA
Other languages
Chinese (zh)
Other versions
CN113064545A
Inventor
史元春
喻纯
杨欢
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110540651.XA
Publication of CN113064545A
Application granted
Publication of CN113064545B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention provides a gesture recognition method and system. The method comprises: acquiring first gesture data of a gesture to be recognized that a user performs on a screen; preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number; and determining, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and taking the matched sample gesture as the final recognition result of the gesture to be recognized. By using the touch point ID numbers of the gesture to be recognized and their related data to select a matching sample gesture from the gesture library as the final recognition result, diversified gestures can be recognized accurately, which improves the user experience.

Description

Gesture recognition method and system
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a gesture recognition method and system.
Background
With the development of the internet, intelligent terminals such as smartphones and tablet computers have become widely used. Gesture interaction is the main way users interact with these terminals, and as the application scenarios of intelligent terminals multiply, the gestures users employ when operating them become increasingly diverse. Therefore, how to accurately recognize diversified gestures and thereby ensure a good user experience is an urgent problem to be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a gesture recognition method and system to achieve the purpose of accurately recognizing diversified gestures.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
the first aspect of the embodiments of the present invention discloses a gesture recognition method, including:
acquiring first gesture data of a gesture to be recognized that a user performs on a screen, wherein the first gesture data comprises: a touch point ID number, at least one operation event type and at least one group of positioning information collected for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information comprises: an abscissa, an ordinate and a timestamp at which the operation event type was collected, and each operation event type corresponds to one group of positioning information;
preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number, wherein the data point set vector is composed of the operation event types and the positioning information;
and determining, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and taking the matched sample gesture as the final recognition result of the gesture to be recognized, wherein the gesture library comprises a plurality of preset sample gestures and preprocessed second gesture data corresponding to each sample gesture.
Preferably, preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number, and the data point set vector corresponding to each touch point ID number includes:
determining the number of touch point ID numbers of the gesture to be recognized, wherein this number is consistent with the number of fingers forming the gesture to be recognized;
determining a data point set corresponding to each touch point ID number of the gesture to be recognized, wherein the data point set comprises at least one operation event type and at least one group of positioning information, and the data in the data point set are arranged in chronological order of their timestamps;
for each touch point ID number of the gesture to be recognized, judging, based on the timestamps in the data point set of that touch point ID number, whether the abscissa and the ordinate changed during the last preset time length in the data point set, and determining the Boolean value corresponding to the touch point ID number according to the judgment result;
and for each touch point ID number of the gesture to be recognized, converting the data point set corresponding to that touch point ID number into a vector to obtain the corresponding data point set vector.
Preferably, determining a sample gesture matching the gesture to be recognized from a preset gesture library by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and taking it as the final recognition result of the gesture to be recognized, includes:
determining, from the preset gesture library, at least one first sample gesture whose number of touch point ID numbers is the same as that of the gesture to be recognized and whose Boolean value for each touch point ID number is the same as that of the gesture to be recognized;
for each first sample gesture, calculating the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture;
and determining the first sample gesture with the minimum similarity value within a threshold range as a second sample gesture, and taking the second sample gesture as the final recognition result of the gesture to be recognized.
Preferably, the process of constructing the gesture library includes:
obtaining a plurality of sample gestures, and obtaining second gesture data corresponding to each sample gesture;
for each sample gesture, preprocessing the second gesture data of the sample gesture to obtain the number of touch point ID numbers of the sample gesture, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number;
and, for each sample gesture, storing the number of touch point ID numbers of the sample gesture, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number into the gesture library.
Preferably, converting, for each touch point ID number of the gesture to be recognized, the data point set corresponding to that touch point ID number into a vector to obtain the corresponding data point set vector includes:
for each touch point ID number of the gesture to be recognized, performing L2-norm normalization on the data point set corresponding to that touch point ID number to obtain the corresponding data point set vector.
Preferably, calculating, for each first sample gesture, the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture includes:
for each first sample gesture, calculating the cosine distance or Euclidean distance between the data point set vectors of the gesture to be recognized and of the first sample gesture, and taking that distance as the similarity between them.
A second aspect of the embodiments of the present invention discloses a gesture recognition system, including:
an acquisition unit configured to acquire first gesture data of a gesture to be recognized that a user performs on a screen, the first gesture data including: a touch point ID number, at least one operation event type and at least one group of positioning information collected for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information includes: an abscissa, an ordinate and a timestamp at which the operation event type was collected, and each operation event type corresponds to one group of positioning information;
a preprocessing unit configured to preprocess the first gesture data and obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number, wherein the data point set vector is composed of the operation event types and the positioning information;
and a matching unit configured to determine, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and to take the matched sample gesture as the final recognition result of the gesture to be recognized, wherein the gesture library includes a plurality of preset sample gestures and preprocessed second gesture data corresponding to each sample gesture.
Preferably, the preprocessing unit includes:
a first determining module configured to determine the number of touch point ID numbers of the gesture to be recognized, which is consistent with the number of fingers forming the gesture to be recognized;
a second determining module configured to determine a data point set corresponding to each touch point ID number of the gesture to be recognized, the data point set including at least one operation event type and at least one group of positioning information, with the data in the data point set arranged in chronological order of their timestamps;
a processing module configured to judge, for each touch point ID number of the gesture to be recognized and based on the timestamps in the data point set of that touch point ID number, whether the abscissa and the ordinate changed during the last preset time length in the data point set, and to determine the Boolean value corresponding to the touch point ID number according to the judgment result;
and a conversion module configured to convert, for each touch point ID number of the gesture to be recognized, the data point set corresponding to that touch point ID number into a vector to obtain the corresponding data point set vector.
Preferably, the matching unit includes:
a first determining module configured to determine, from the preset gesture library, at least one first sample gesture whose number of touch point ID numbers is the same as that of the gesture to be recognized and whose Boolean value for each touch point ID number is the same as that of the gesture to be recognized;
a calculation module configured to calculate, for each first sample gesture, the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture;
and a second determining module configured to determine the first sample gesture with the minimum similarity value within the threshold range as a second sample gesture and to take the second sample gesture as the final recognition result of the gesture to be recognized.
Preferably, the conversion module is specifically configured to: for each touch point ID number of the gesture to be recognized, perform L2-norm normalization on the data point set corresponding to that touch point ID number to obtain the corresponding data point set vector.
Based on the gesture recognition method and system provided by the embodiments of the invention, the method comprises: acquiring first gesture data of a gesture to be recognized that a user performs on a screen; preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number; and determining, from a preset gesture library, a sample gesture matching the gesture to be recognized by using these data, and taking the matched sample gesture as the final recognition result of the gesture to be recognized. In this scheme, a gesture library containing a plurality of sample gestures and the second gesture data corresponding to each sample gesture is constructed in advance. A sample gesture matching the gesture to be recognized is then determined from the gesture library as the final recognition result by using the touch point ID numbers of the gesture to be recognized and their related data, so that diversified gestures are recognized accurately and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flowchart of a gesture recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the format of gesture data according to an embodiment of the present invention;
Fig. 3 is a flowchart of preprocessing the first gesture data according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a gesture recognition system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
As can be seen from the background art, with the wide adoption of intelligent terminals such as smartphones and tablet computers, gesture interaction has become the main way for users to interact with them; meanwhile, across the diversified application scenarios of intelligent terminals, the gestures users employ when operating them are increasingly diverse. Therefore, how to accurately recognize diversified gestures is an urgent problem to be solved.
Therefore, embodiments of the present invention provide a gesture recognition method and system that pre-construct a gesture library containing a plurality of sample gestures and the second gesture data corresponding to each sample gesture, and then determine from the gesture library, using the touch point ID numbers of the gesture to be recognized and their related data, a sample gesture matching the gesture to be recognized as the final recognition result, thereby accurately recognizing diversified gestures.
It should be noted that a complete gesture in the embodiment of the present invention proceeds as follows: each finger touches the screen and generates a corresponding touch point. Specifically, a complete gesture starts when the first touch point falls on the screen and ends a specified duration (e.g., 100 ms) after the last touch point leaves the screen; if no new touch point is generated on the screen within that duration, the gesture ends.
It will be appreciated that the end of a gesture is determined with a specified duration so as not to miss multi-tap gestures. For example, during a single-finger double tap, the sequence of operation events of the gesture is: the first touch point falls -> the first touch point leaves -> a new touch point falls within 100 ms -> the new touch point leaves.
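By way of illustration only (the patent specifies no code), the end-of-gesture rule above can be sketched in Python as follows; the event representation, field names and `split_into_gestures` helper are assumptions made for this sketch:

```python
END_DELAY_MS = 100  # specified duration after the last touch point leaves

def split_into_gestures(events):
    """Group a time-ordered stream of raw touch events into gestures.

    Each event is assumed to be a dict with keys 'id', 'type'
    ('down'/'move'/'up'), 'x', 'y' and 'timestamp' (milliseconds).
    A gesture ends once every finger has lifted and no new touch
    point falls within END_DELAY_MS.
    """
    gestures, current, active_ids = [], [], set()
    last_up_time = None
    for ev in events:
        # Close the current gesture if all fingers lifted and the
        # specified delay elapsed before this event arrived.
        if (current and not active_ids and last_up_time is not None
                and ev['timestamp'] - last_up_time > END_DELAY_MS):
            gestures.append(current)
            current = []
        current.append(ev)
        if ev['type'] == 'down':
            active_ids.add(ev['id'])
        elif ev['type'] == 'up':
            active_ids.discard(ev['id'])
            last_up_time = ev['timestamp']
    if current:
        gestures.append(current)
    return gestures
```

With this rule, the four events of a single-finger double tap (down, up, down within 100 ms, up) fall into one gesture rather than two.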
Referring to fig. 1, a flowchart of a gesture recognition method provided by an embodiment of the present invention is shown, where the gesture recognition method includes:
step S101: first gesture data of a gesture to be recognized, which is used by a user to operate a screen, are acquired.
It can be understood that when a user operates the screen (the touch screen of an intelligent terminal) with fingers, one additional touch point appears on the screen for each finger in contact with it, and one fewer touch point remains after each finger leaves; that is, when N fingers are in contact with the screen, there are N touch points on the screen.
In order to distinguish the operations of different fingers on the screen, each touch point on the screen is assigned a corresponding touch point ID number, and different touch points correspond to different touch point ID numbers; that is, different fingers are indicated by touch point ID numbers. For example, when a user operates the screen with two fingers, there are 2 touch points on the screen, and their touch point ID numbers are 1 and 2 respectively (this numbering form is for illustration only).
How many fingers the user operates the screen with can be determined from the number of touch point ID numbers (i.e., the number of distinct touch point ID numbers). For example, assuming there are 3 touch points on the screen (the user operates it with 3 fingers) with touch point ID numbers 1, 2 and 3, the number of touch point ID numbers is 3, and touch point ID numbers 1, 2 and 3 correspond to the user's 3 fingers respectively.
In view of the above, in the specific implementation of step S101, the first gesture data of the gesture to be recognized is acquired when the user operates the screen with that gesture.
The first gesture data includes: a touch point ID number, at least one operation event type and at least one group of positioning information collected for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information includes: the abscissa, the ordinate and the timestamp at which the operation event type was collected, and each operation event type corresponds to one group of positioning information.
It can be understood that each finger forming the gesture to be recognized corresponds to one touch point ID number, at least one operation event type and at least one group of positioning information; that is, one touch point ID number may correspond to at least one operation event type and at least one group of positioning information.
It should be noted that when the user's finger performs an operation event on the screen, such as falling, leaving or sliding, that event is recorded; that is, each operation event type corresponding to a touch point ID number is finger-down, finger-up or finger-slide, and multiple operation event types can be collected for one touch point ID number.
For example, assuming the user operates the screen with a single finger, the falling and leaving event types are each collected only once, but the sliding event type may be collected many times.
The frequency at which the screen collects operation event types is the screen's sampling rate. For example, a screen sampling rate of 60 Hz means that 60 operation event types are collected per second, and the time at which an operation event type is collected is its timestamp.
While collecting the operation event type of an operation event, the corresponding abscissa and ordinate are also collected; they indicate where on the screen the operation event occurred.
It can be understood that when the user operates the screen with several fingers, for example with two fingers, the information related to the operation events of the 2 fingers is collected alternately, and each collected record includes: the touch point ID number (indicating which finger), the abscissa, the ordinate, the timestamp and the operation event type. Within one gesture this information is collected many times, and the collected records constitute the gesture data of that gesture.
That is to say, the first gesture data of the gesture to be recognized is composed of a plurality of touch point records and can also be regarded as a data point set, where each data point contains: a touch point ID number, an operation event type, an abscissa, an ordinate and a timestamp.
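As a concrete illustration of this data point set, one possible representation in Python is sketched below; the class and field names are assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class DataPoint:
    touch_id: int     # touch point ID number (identifies the finger)
    event_type: str   # 'down', 'move' or 'up'
    x: float          # abscissa of the operation event on the screen
    y: float          # ordinate of the operation event on the screen
    timestamp: float  # time at which the event type was collected (ms)

# First gesture data = all such points collected for one gesture; e.g. a
# single-finger slide yields points sharing one touch_id whose event
# types run down -> move ... move -> up.
first_gesture_data = [
    DataPoint(1, 'down', 120.0, 300.0, 0.0),
    DataPoint(1, 'move', 125.0, 310.0, 16.7),
    DataPoint(1, 'up',   180.0, 420.0, 250.0),
]
```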
For the format of the first gesture data, refer to the schematic diagram of gesture data shown in fig. 2, where point 1 to point n each represent a data point. It can be understood that any two points among point 1 to point n in fig. 2 may have the same ID value (i.e., touch point ID number), but may differ in any one or more of the operation event type, abscissa, ordinate and timestamp.
For example, with reference to fig. 2, assuming the user operates the screen with a single-finger slide gesture, the touch point IDs of all data points in the gesture data are the same; the operation event type of point 1 is touch-point-down, that of point n is touch-point-up, and those of the remaining points are touch-point-slide, while the abscissa, ordinate and timestamp of each point indicate the position and time of the operation event on the screen.
As another example, with reference to fig. 2, assuming the user operates the screen with a two-finger slide gesture, the touch point ID numbers of some data points are 1 and those of the others are 2, and the gesture data can be regarded as the interleaved combination of a single-finger-slide data point set with touch point ID number 1 and a single-finger-slide data point set with touch point ID number 2.
For the gesture data corresponding to a multi-finger operation of the screen, refer to the two-finger slide example above, which is not repeated here.
Step S102: the first gesture data is preprocessed to obtain at least the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number.
As can be seen from the description of step S101 above, the first gesture data includes at least one touch point ID number, together with the operation event types and positioning information corresponding to each touch point ID number, and can be regarded as the interleaved combination of the data point sets corresponding to the individual touch point ID numbers.
The data point set corresponding to a touch point ID number includes the operation event types and positioning information (abscissa, ordinate and timestamp) corresponding to that touch point ID number.
In the specific implementation of step S102, the first gesture data is preprocessed: the number of touch point ID numbers of the gesture to be recognized is counted, the Boolean value of each touch point ID number is determined using the timestamps in its data point set, and the data point set corresponding to each touch point ID number is converted into a vector to obtain the corresponding data point set vector, which is composed of the operation event types and positioning information.
That is to say, after the first gesture data is preprocessed, the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number are obtained.
It can be understood from step S101 that each touch point ID number corresponds to one of the fingers forming the gesture to be recognized. Therefore, the Boolean value determined for a touch point ID number reflects whether the corresponding finger rests on the screen: TRUE indicates that the corresponding finger does not rest in place on the screen, while FALSE indicates that it does.
In summary, after the first gesture data is preprocessed, the content and format of the obtained data are as follows:
the number of touch point ID numbers;
ID1 (touch point ID number), Boolean value, abscissa 1, ordinate 1, timestamp 1, operation event type 1, abscissa 2, ordinate 2, timestamp 2, operation event type 2, ..., abscissa k, ordinate k, timestamp k, operation event type k, where k is the total number of operation event types corresponding to ID1;
......
IDn, Boolean value, abscissa 1, ordinate 1, timestamp 1, operation event type 1, abscissa 2, ordinate 2, timestamp 2, operation event type 2, ..., abscissa h, ordinate h, timestamp h, operation event type h, where h is the total number of operation event types corresponding to IDn.
Note that n is the number of touch point ID numbers included in the first gesture data.
Step S103: a sample gesture matching the gesture to be recognized is determined from a preset gesture library by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and the matched sample gesture is taken as the final recognition result of the gesture to be recognized.
It should be noted that, according to preset gesture categories, the user is guided in advance to make corresponding sample gestures on the screen, and the second gesture data of each sample gesture is collected. The gesture library is constructed from the plurality of sample gestures and the preprocessed second gesture data corresponding to each sample gesture.
In some embodiments, the specific process of constructing the gesture library is: obtaining a plurality of sample gestures and the second gesture data corresponding to each sample gesture; for each sample gesture, preprocessing its second gesture data to obtain the number of touch point ID numbers of the sample gesture, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number; and, for each sample gesture, storing these into the gesture library.
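A minimal sketch of this construction step is given below, assuming a `preprocess` function (outlined after fig. 3 below) that returns the three items just described; the library layout and names are illustrative assumptions:

```python
def build_gesture_library(sample_gestures):
    """Build the gesture library from (label, second_gesture_data) pairs."""
    library = []
    for label, second_gesture_data in sample_gestures:
        id_count, booleans, vectors = preprocess(second_gesture_data)
        library.append({
            'label': label,        # e.g. 'two-finger slide'
            'id_count': id_count,  # number of touch point ID numbers
            'booleans': booleans,  # {touch point ID: True/False}
            'vectors': vectors,    # {touch point ID: data point set vector}
        })
    return library
```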
That is to say, the gesture library stores a plurality of sample gestures and, for each sample gesture, the number of its touch point ID numbers, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number.
It should be noted that the process of preprocessing the second gesture data can refer to the process of preprocessing the first gesture data in step S102, which is not repeated here.
In the specific implementation of step S103, at least one first sample gesture is determined from the preset gesture library whose number of touch point ID numbers is the same as that of the gesture to be recognized and whose Boolean value for each touch point ID number is the same as that of the gesture to be recognized.
That is, the number of touch point ID numbers of the gesture to be recognized is compared with that of each sample gesture in the gesture library to find the sample gestures with the same count, and from those, at least one first sample gesture whose Boolean values also match is determined. In other words, for each first sample gesture, its number of touch point ID numbers is the same as that of the gesture to be recognized, and the Boolean value of each of its touch point ID numbers is the same as that of the corresponding touch point ID number of the gesture to be recognized.
For each first sample gesture, the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture is calculated, specifically: the cosine distance or Euclidean distance between the two data point set vectors is calculated and taken as their similarity.
It can be understood that when the number of touch point ID numbers of the gesture to be recognized and the first sample gesture is greater than 1 (there are multiple touch point ID numbers), i.e., both have multiple data point set vectors, the similarity is calculated between the data point set vectors with corresponding touch point ID numbers to obtain multiple similarity values, and their average (or a weighted combination) is taken as the similarity between the gesture to be recognized and the first sample gesture.
It should be noted that, to keep the data compact, the cosine distance may be used as the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture.
After the similarity between the gesture to be recognized and each first sample gesture has been calculated, the first sample gesture with the minimum similarity value within the threshold range is determined as the second sample gesture, and the second sample gesture is taken as the final recognition result of the gesture to be recognized.
That is, the final recognition result of the gesture to be recognized is the first sample gesture whose similarity value is the smallest and falls within the threshold range.
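Putting the matching steps together, a sketch follows; it assumes cosine distance as the similarity, a hypothetical threshold of 0.3, touch point IDs renumbered consistently (e.g. 0..n-1 in order of first contact) during preprocessing, and data point set vectors resampled to a common length, none of which the patent fixes explicitly:

```python
import math

def cosine_distance(u, v):
    # Assumes u and v have the same length (e.g. after resampling).
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm if norm else 1.0

def recognize(first_gesture_data, library, threshold=0.3):
    id_count, booleans, vectors = preprocess(first_gesture_data)
    best_label, best_score = None, None
    for entry in library:
        # First sample gestures: same ID count and same Boolean per ID.
        if entry['id_count'] != id_count or entry['booleans'] != booleans:
            continue
        # With several touch point IDs, average the per-ID similarities.
        scores = [cosine_distance(vectors[i], entry['vectors'][i])
                  for i in vectors]
        score = sum(scores) / len(scores)
        # Keep the candidate with the minimum value within the threshold.
        if score <= threshold and (best_score is None or score < best_score):
            best_label, best_score = entry['label'], score
    return best_label  # None means no sample gesture matched
```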
In the embodiment of the invention, a gesture library containing a plurality of sample gestures and the second gesture data corresponding to each sample gesture is constructed in advance. A sample gesture matching the gesture to be recognized is then determined from the gesture library as the final recognition result by using the touch point ID numbers of the gesture to be recognized and their related data, so that diversified gestures are recognized accurately and the user experience is improved.
In the foregoing embodiment of the present invention, with reference to fig. 3, which is a flowchart of preprocessing the first gesture data provided by an embodiment of the present invention, the preprocessing of the first gesture data in step S102 of fig. 1 includes the following steps:
Step S301: the number of touch point ID numbers of the gesture to be recognized is determined.
It should be noted that the number of touch point ID numbers of the gesture to be recognized is consistent with the number of fingers forming the gesture to be recognized.
In the specific implementation of step S301, the touch point ID numbers in the first gesture data are traversed to determine how many distinct touch point ID numbers there are, and this count is taken as the number of touch point ID numbers of the gesture to be recognized.
Step S302: the data point set corresponding to each touch point ID number of the gesture to be recognized is determined.
It should be noted that the data point set corresponding to each touch point ID number includes at least one operation event type and at least one group of positioning information, and the data in the data point set are arranged in chronological order of their timestamps.
As described for step S101 in fig. 1, when the user operates the screen with several fingers, the operation event types and positioning information collected for the operation events performed by the different fingers are mixed together.
Therefore, the operation event types and positioning information corresponding to the operation events performed by different fingers need to be separated. In the specific implementation of step S302, all operation event types and positioning information are classified according to the touch point ID numbers of the gesture to be recognized, so as to determine the data point set (i.e., the operation event types and corresponding positioning information) of each touch point ID number.
For example, assuming the user operates the screen with a two-finger slide gesture whose touch point ID numbers are ID1 and ID2 respectively, all collected operation event types and positioning information are classified to determine the operation event types and positioning information corresponding to ID1 and those corresponding to ID2.
Meanwhile, for each touch point ID number, its operation event types and positioning information are arranged in chronological order of their timestamps.
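Steps S301 and S302 can be sketched as follows, reusing the assumed `DataPoint` records from the earlier illustration:

```python
from collections import defaultdict

def group_by_touch_id(gesture_data):
    """Steps S301-S302: count the distinct touch point ID numbers and
    build, for each ID, a data point set sorted by timestamp."""
    point_sets = defaultdict(list)
    for p in gesture_data:
        point_sets[p.touch_id].append(p)
    for pts in point_sets.values():
        pts.sort(key=lambda p: p.timestamp)  # front-to-back time order
    id_count = len(point_sets)  # equals the number of fingers used
    return id_count, dict(point_sets)
```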
Step S303: for each touch point ID number of the gesture to be recognized, based on the timestamps in the data point set of that touch point ID number, it is judged whether the abscissa and the ordinate changed during the last preset time length in the data point set, and the Boolean value corresponding to the touch point ID number is determined according to the judgment result.
In the specific implementation of step S303, this judgment is made for each touch point ID number of the gesture to be recognized, and the corresponding Boolean value is set according to the result.
In a specific implementation, for each touch point ID number of the gesture to be recognized, if the abscissa and/or ordinate changed during the last preset time length in the data point set of that touch point ID number, the Boolean value corresponding to the touch point ID number is determined to be TRUE; if neither the abscissa nor the ordinate changed during the last preset time length, the Boolean value is determined to be FALSE.
In some embodiments, for each touch point ID number of the gesture to be recognized, the abscissa and ordinate corresponding to each timestamp separated from the last timestamp in the data point set by no more than the preset time length (referred to here as a timestamp to be compared) are acquired, and the coordinates of each timestamp to be compared are compared one by one with the coordinates of the last timestamp (abscissa with abscissa, ordinate with ordinate). If the coordinates of any timestamp to be compared differ from those of the last timestamp (in abscissa and/or ordinate), it is determined that the abscissa and/or ordinate changed during the last preset time length; if the coordinates of all timestamps to be compared are the same as those of the last timestamp, it is determined that neither the abscissa nor the ordinate changed.
For example, for a certain touch point ID number of the gesture to be recognized, the abscissa and ordinate corresponding to each timestamp separated from the last timestamp of its data point set by no more than 500 ms are acquired, and the coordinates of each such timestamp are compared one by one with those of the last timestamp.
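A sketch of the Boolean determination of step S303, assuming the 500 ms window from the example above:

```python
PRESET_DURATION_MS = 500  # example window from the description

def is_moving(point_set):
    """Step S303: TRUE if the coordinates changed within the last preset
    duration of this touch point's data point set; FALSE if the finger
    rested in place (e.g. during a long-press)."""
    last = point_set[-1]
    recent = [p for p in point_set
              if last.timestamp - p.timestamp <= PRESET_DURATION_MS]
    return any(p.x != last.x or p.y != last.y for p in recent)
```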
Step S304: for each touch point ID number of the gesture to be recognized, the data point set corresponding to that touch point ID number is converted into a vector to obtain the corresponding data point set vector.
In the specific implementation of step S304, for each touch point ID number of the gesture to be recognized, the data point set corresponding to that touch point ID number is normalized by the L2 norm and thereby converted into a unit vector of length 1, yielding the data point set vector corresponding to the touch point ID number.
It can be understood that performing L2-norm normalization on the data point set corresponding to a touch point ID number amounts to enlarging or reducing the gesture to be recognized by a certain proportion, unifying the sliding distance of the gesture to be recognized to a unit distance.
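Step S304 can be sketched as below; exactly which fields are flattened into the vector is an assumption here (the sketch uses the coordinate sequence), and the normalization makes the resulting vector's L2 norm equal to 1:

```python
import math

def to_unit_vector(point_set):
    """Step S304: flatten a data point set into a vector and apply
    L2-norm normalization, yielding a unit vector of length 1."""
    flat = []
    for p in point_set:
        flat.extend((p.x, p.y))  # assumption: coordinates form the vector
    norm = math.sqrt(sum(v * v for v in flat))
    return [v / norm for v in flat] if norm else flat
```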
It should be noted that steps S301 to S304 constitute the process of preprocessing the first gesture data; correspondingly, in the process of constructing the gesture library, the preprocessing of the second gesture data corresponding to the sample gestures can refer to steps S301 to S304 and is not repeated here.
In the embodiment of the invention, the first gesture data is preprocessed to obtain the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number. Using these data, a sample gesture matching the gesture to be recognized is determined from the gesture library as the final recognition result, so that diversified gestures are recognized accurately and the user experience is improved.
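Tying steps S301 to S304 together, the `preprocess` function assumed by the earlier sketches could look like this (illustrative only):

```python
def preprocess(gesture_data):
    """Combined steps S301-S304: return the number of touch point ID
    numbers, the Boolean value per ID and the unit vector per ID."""
    id_count, point_sets = group_by_touch_id(gesture_data)
    booleans = {i: is_moving(ps) for i, ps in point_sets.items()}
    vectors = {i: to_unit_vector(ps) for i, ps in point_sets.items()}
    return id_count, booleans, vectors
```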
Corresponding to the gesture recognition method provided by the embodiment of the present invention, referring to fig. 4, an embodiment of the present invention further provides a structural block diagram of a gesture recognition system, where the gesture recognition system includes: an acquisition unit 401, a preprocessing unit 402 and a matching unit 403;
an obtaining unit 401, configured to obtain first gesture data of a gesture to be recognized that a user performs on a screen, the first gesture data including: a touch point ID number, at least one operation event type and at least one group of positioning information collected for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information includes: the abscissa, the ordinate and the timestamp at which the operation event type was collected, and each operation event type corresponds to one group of positioning information.
The preprocessing unit 402 is configured to preprocess the first gesture data and obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number, wherein the data point set vector is composed of the operation event types and the positioning information.
The matching unit 403 is configured to determine, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and to take it as the final recognition result of the gesture to be recognized, wherein the gesture library includes a plurality of preset sample gestures and the preprocessed second gesture data corresponding to each sample gesture.
Preferably, for constructing the gesture library, the matching unit 403 is specifically configured to: obtain a plurality of sample gestures and the second gesture data corresponding to each sample gesture; for each sample gesture, preprocess its second gesture data to obtain the number of touch point ID numbers of the sample gesture, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number; and, for each sample gesture, store these into the gesture library.
In the embodiment of the invention, a gesture library containing a plurality of sample gestures and the second gesture data corresponding to each sample gesture is constructed in advance. A sample gesture matching the gesture to be recognized is determined from the gesture library as the final recognition result by using the touch point ID numbers of the gesture to be recognized and their related data, so that diversified gestures are recognized accurately and the user experience is improved.
Preferably, in conjunction with the content shown in fig. 4, the preprocessing unit 402 includes: the device comprises a first determining module, a second determining module, a processing module and a converting module;
the first determining module, configured to determine the number of touch point ID numbers of the gesture to be recognized, which is consistent with the number of fingers forming the gesture to be recognized.
The second determining module, configured to determine a data point set corresponding to each touch point ID number of the gesture to be recognized, the data point set including at least one operation event type and at least one group of positioning information, with the data in the data point set arranged in chronological order of their timestamps.
The processing module, configured to judge, for each touch point ID number of the gesture to be recognized and based on the timestamps in the data point set of that touch point ID number, whether the abscissa and the ordinate changed during the last preset time length in the data point set, and to determine the Boolean value corresponding to the touch point ID number according to the judgment result.
And the conversion module, configured to convert, for each touch point ID number of the gesture to be recognized, the data point set corresponding to that touch point ID number into a vector to obtain the corresponding data point set vector.
In a specific implementation, the conversion module is specifically configured to: for each touch point ID number of the gesture to be recognized, perform L2-norm normalization on the data point set corresponding to that touch point ID number to obtain the corresponding data point set vector.
In the embodiment of the invention, the first gesture data is preprocessed to obtain the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number. Using these data, a sample gesture matching the gesture to be recognized is determined from the gesture library as the final recognition result, so that diversified gestures are recognized accurately and the user experience is improved.
Preferably, in conjunction with the content shown in fig. 4, the matching unit 403 includes: the device comprises a first determining module, a calculating module and a second determining module, wherein the execution principle of each module is as follows:
the first determining module, configured to determine, from the preset gesture library, at least one first sample gesture whose number of touch point ID numbers is the same as that of the gesture to be recognized and whose Boolean value for each touch point ID number is the same as that of the gesture to be recognized.
The calculation module, configured to calculate, for each first sample gesture, the similarity between the data point set vectors of the gesture to be recognized and of the first sample gesture.
In a specific implementation, the calculation module is specifically configured to: for each first sample gesture, calculate the cosine distance or Euclidean distance between the data point set vectors of the gesture to be recognized and of the first sample gesture, and take that distance as their similarity.
And the second determining module, configured to determine the first sample gesture with the minimum similarity value within the threshold range as a second sample gesture and to take the second sample gesture as the final recognition result of the gesture to be recognized.
In summary, the embodiments of the present invention provide a gesture recognition method and system that pre-construct a gesture library containing a plurality of sample gestures and the second gesture data corresponding to each sample gesture. A sample gesture matching the gesture to be recognized is determined from the gesture library as the final recognition result by using the touch point ID numbers of the gesture to be recognized and their related data, so that diversified gestures are recognized accurately and the user experience is improved.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of gesture recognition, the method comprising:
acquiring first gesture data of a gesture to be recognized that a user performs on a screen, wherein the first gesture data comprises: a touch point ID number, at least one operation event type and at least one group of positioning information collected for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information comprises: an abscissa, an ordinate and a timestamp at which the operation event type was collected, and each operation event type corresponds to one group of positioning information;
preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number, wherein the data point set vector is composed of the operation event types and the positioning information;
and determining, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and taking the matched sample gesture as the final recognition result of the gesture to be recognized, wherein the gesture library comprises a plurality of preset sample gestures and preprocessed second gesture data corresponding to each sample gesture.
2. The method according to claim 1, wherein the preprocessing the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number comprises:
determining the number of touch point ID numbers of the gesture to be recognized, wherein the number is consistent with the number of fingers forming the gesture to be recognized;
determining a data point set corresponding to each touch point ID number of the gesture to be recognized, wherein the data point set comprises at least one operation event type and at least one group of positioning information, and the data in the data point set are arranged in chronological order of their timestamps;
for each touch point ID number of the gesture to be recognized, judging, based on the timestamps in the data point set of the touch point ID number, whether the abscissa and the ordinate changed during the last preset time length covered by the data point set, and determining the Boolean value corresponding to the touch point ID number according to the judgment result;
and, for each touch point ID number of the gesture to be recognized, converting the data point set corresponding to the touch point ID number into a vector to obtain the corresponding data point set vector.
3. The method according to claim 1, wherein the determining, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and taking the matched sample gesture as the final recognition result of the gesture to be recognized comprises:
determining, from the preset gesture library, at least one first sample gesture that has the same number of touch point ID numbers as the gesture to be recognized and the same Boolean value for each touch point ID number as the gesture to be recognized;
for each first sample gesture, calculating the similarity between the data point set vectors of the gesture to be recognized and the first sample gesture;
and determining the first sample gesture whose similarity is minimal and falls within a threshold range as a second sample gesture, and taking the second sample gesture as the final recognition result of the gesture to be recognized.
4. The method of claim 1, wherein the process of building the gesture library comprises:
obtaining a plurality of sample gestures, and obtaining second gesture data corresponding to each sample gesture;
for each sample gesture, preprocessing the second gesture data of the sample gesture to obtain the number of touch point ID numbers of the sample gesture, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number;
and, for each sample gesture, storing the number of touch point ID numbers of the sample gesture, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number in the gesture library.
5. The method according to claim 2, wherein the converting, for each touch point ID number of the gesture to be recognized, a data point set corresponding to the touch point ID number into a vector to obtain a corresponding data point set vector comprises:
and, for each touch point ID number of the gesture to be recognized, performing L2-norm normalization on the data point set corresponding to the touch point ID number to obtain the corresponding data point set vector.
6. The method of claim 3, wherein the calculating, for each of the first sample gestures, a similarity between the data point set vectors of the gesture to be recognized and the first sample gesture comprises:
and calculating, for each first sample gesture, the cosine distance or the Euclidean distance between the data point set vectors of the gesture to be recognized and the first sample gesture, and taking the cosine distance or the Euclidean distance as the similarity between the data point set vectors of the gesture to be recognized and the first sample gesture.
7. A gesture recognition system, the system comprising:
an acquisition unit configured to acquire first gesture data of a gesture to be recognized that a user performs to operate a screen, the first gesture data comprising: a touch point ID number, at least one operation event type and at least one group of positioning information acquired for each of the at least one finger forming the gesture to be recognized while it operates on the screen, wherein the positioning information comprises: an abscissa, an ordinate and a timestamp at which the operation event type was collected, each operation event type corresponding to one group of positioning information;
a preprocessing unit configured to preprocess the first gesture data to obtain at least the number of touch point ID numbers of the gesture to be recognized, a Boolean value corresponding to each touch point ID number and a data point set vector corresponding to each touch point ID number, wherein the data point set vector is composed of the operation event types and the positioning information;
and a matching unit configured to determine, from a preset gesture library, a sample gesture matching the gesture to be recognized by using the number of touch point ID numbers of the gesture to be recognized, the Boolean value corresponding to each touch point ID number and the data point set vector corresponding to each touch point ID number, and to take the matched sample gesture as the final recognition result of the gesture to be recognized, wherein the gesture library comprises a plurality of preset sample gestures and preprocessed second gesture data corresponding to each sample gesture.
8. The system of claim 7, wherein the preprocessing unit comprises:
a first determining module configured to determine the number of touch point ID numbers of the gesture to be recognized, wherein the number is consistent with the number of fingers forming the gesture to be recognized;
a second determining module configured to determine a data point set corresponding to each touch point ID number of the gesture to be recognized, wherein the data point set comprises at least one operation event type and at least one group of positioning information, and the data in the data point set are arranged in chronological order of their timestamps;
a processing module configured to judge, for each touch point ID number of the gesture to be recognized and based on the timestamps in the data point set of the touch point ID number, whether the abscissa and the ordinate changed during the last preset time length covered by the data point set, and to determine the Boolean value corresponding to the touch point ID number according to the judgment result;
and a conversion module configured to convert, for each touch point ID number of the gesture to be recognized, the data point set corresponding to the touch point ID number into a vector to obtain the corresponding data point set vector.
9. The system of claim 7, wherein the matching unit comprises:
a first determining module configured to determine, from the preset gesture library, at least one first sample gesture that has the same number of touch point ID numbers as the gesture to be recognized and the same Boolean value for each touch point ID number as the gesture to be recognized;
a calculation module configured to calculate, for each first sample gesture, the similarity between the data point set vectors of the gesture to be recognized and the first sample gesture;
and a second determining module configured to determine the first sample gesture whose similarity is minimal and falls within a threshold range as a second sample gesture, and to take the second sample gesture as the final recognition result of the gesture to be recognized.
10. The system of claim 8, wherein the conversion module is specifically configured to: for each touch point ID number of the gesture to be recognized, perform L2-norm normalization on the data point set corresponding to the touch point ID number to obtain the corresponding data point set vector.
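To make the claimed pipeline concrete, the following minimal Python sketches walk through claims 2 to 6. They are illustrative only: every name, the preset time length of 0.1 s, the flattening of a data point set to its coordinates and the threshold value are assumptions of ours, not details fixed by the claims. This first sketch covers the preprocessing of claims 2 and 5, building on the GestureData type sketched earlier:

```python
import math
from typing import Dict, List

PRESET_DURATION = 0.1  # hypothetical "preset time length", in seconds

def preprocess(gesture: GestureData):
    """Return (number of touch point ID numbers, Boolean per ID number,
    L2-normalized data point set vector per ID number)."""
    id_count = len(gesture.points)  # consistent with the finger count
    booleans: Dict[int, bool] = {}
    vectors: Dict[int, List[float]] = {}
    for tid, pts in gesture.points.items():
        pts = sorted(pts, key=lambda p: p.timestamp)  # chronological order
        # Boolean value: did the abscissa/ordinate change during the last
        # preset time length of this touch point's data point set?
        cutoff = pts[-1].timestamp - PRESET_DURATION
        tail = [p for p in pts if p.timestamp >= cutoff]
        booleans[tid] = any(p.x != tail[0].x or p.y != tail[0].y for p in tail)
        # Claim 5: flatten the data point set (here only the coordinates,
        # a simplification) and apply L2-norm normalization.
        flat = [c for p in pts for c in (p.x, p.y)]
        norm = math.sqrt(sum(c * c for c in flat)) or 1.0
        vectors[tid] = [c / norm for c in flat]
    return id_count, booleans, vectors
```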
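Under the same assumptions, claim 4's construction of the gesture library amounts to running this preprocessing once, offline, over each sample gesture's second gesture data:

```python
def build_gesture_library(samples: Dict[str, GestureData]):
    """Sketch of claim 4: store, per sample gesture, the number of touch
    point ID numbers, the Booleans and the data point set vectors."""
    return {name: preprocess(data) for name, data in samples.items()}
```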
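Finally, a sketch of the matching in claims 3 and 6. Pairing touch points by ascending ID number, zero-padding vectors of unequal length and comparing Boolean patterns as multisets are our simplifications, and the claims allow the cosine distance in place of the Euclidean distance used here:

```python
def euclidean(u: List[float], v: List[float]) -> float:
    # Zero-pad to equal length; a real implementation might resample the
    # trajectories instead (our simplification).
    n = max(len(u), len(v))
    u = u + [0.0] * (n - len(u))
    v = v + [0.0] * (n - len(v))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognize(gesture: GestureData, library, threshold: float = 0.5):
    """Sketch of claims 3 and 6: filter the library by touch point ID
    count and Boolean pattern, then return the first sample gesture at
    minimum distance, provided the distance falls within the threshold."""
    id_count, booleans, vectors = preprocess(gesture)
    best_name, best_dist = None, float("inf")
    for name, (s_count, s_bools, s_vecs) in library.items():
        # Claim 3: same number of touch point ID numbers and the same
        # Boolean values (compared here as multisets, a simplification).
        if s_count != id_count or sorted(s_bools.values()) != sorted(booleans.values()):
            continue
        # Claim 6: the distance between data point set vectors serves as
        # the similarity; the smaller the distance, the more alike.
        dist = sum(euclidean(vectors[a], s_vecs[b])
                   for a, b in zip(sorted(vectors), sorted(s_vecs)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

A gesture whose minimum distance exceeds the threshold is rejected (None is returned), mirroring the claim's requirement that the minimum similarity also lie within a threshold range.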
CN202110540651.XA 2021-05-18 2021-05-18 Gesture recognition method and system Active CN113064545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110540651.XA CN113064545B (en) 2021-05-18 2021-05-18 Gesture recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110540651.XA CN113064545B (en) 2021-05-18 2021-05-18 Gesture recognition method and system

Publications (2)

Publication Number Publication Date
CN113064545A CN113064545A (en) 2021-07-02
CN113064545B (en) 2022-04-29

Family

ID=76568449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110540651.XA Active CN113064545B (en) 2021-05-18 2021-05-18 Gesture recognition method and system

Country Status (1)

Country Link
CN (1) CN113064545B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168636A * 2017-05-18 2017-09-15 广州视源电子科技股份有限公司 Multi-touch gesture recognition method, device, touch screen terminal and storage medium
CN111459395A (en) * 2020-03-30 2020-07-28 北京集创北方科技股份有限公司 Gesture recognition method and system, storage medium and man-machine interaction device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9164674B2 (en) * 2013-03-28 2015-10-20 Stmicroelectronics Asia Pacific Pte Ltd Three-dimensional gesture recognition system, circuit, and method for a touch screen

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168636A * 2017-05-18 2017-09-15 广州视源电子科技股份有限公司 Multi-touch gesture recognition method, device, touch screen terminal and storage medium
CN111459395A (en) * 2020-03-30 2020-07-28 北京集创北方科技股份有限公司 Gesture recognition method and system, storage medium and man-machine interaction device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于空中手势的跨屏幕内容分享技术研究";黄培恺、喻纯、史元春;《清华大学计算机科学与技术系》;20190430;全文 *

Also Published As

Publication number Publication date
CN113064545A (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN108197532B (en) The method, apparatus and computer installation of recognition of face
CN105045454B (en) A kind of terminal false-touch prevention method and terminal
CN107633227A (en) A kind of fine granularity gesture identification method and system based on CSI
CN101968714B (en) Method and system for identifying operation locus input on mobile terminal interface
CN107194213B (en) Identity recognition method and device
CN107368820B (en) Refined gesture recognition method, device and equipment
CN112070416B (en) AI-based RPA flow generation method, apparatus, device and medium
CN110087021A (en) Online Video method, apparatus and video terminal
CN108256071B (en) Method and device for generating screen recording file, terminal and storage medium
US8868571B1 (en) Systems and methods for selecting interest point descriptors for object recognition
CN111476595A (en) Product pushing method and device, computer equipment and storage medium
CN113064545B (en) Gesture recognition method and system
CN110164417A (en) A kind of languages vector obtains, languages know method for distinguishing and relevant apparatus
CN107918635B (en) Bill inquiry method, operation device and computer readable storage medium
CN109190946A (en) Business revenue data determination method, device, electronic equipment and storage medium
CN112070487B (en) AI-based RPA flow generation method, apparatus, device and medium
CN109450963A (en) Information push method and terminal device
CN111982149B (en) Step counting identification method, step counting identification device, step counting identification equipment and readable storage medium
CN116560552A (en) Information processing method, device, electronic equipment and medium
CN112084780B (en) Coreference resolution method, device, equipment and medium in natural language processing
CN111339829B (en) User identity authentication method, device, computer equipment and storage medium
CN109993592A (en) Information-pushing method and device
CN103547982A (en) Identifying contacts and contact attributes in touch sensor data using spatial and temporal features
Singh et al. A Temporal Convolutional Network for modeling raw 3D sequences and air-writing recognition
CN110163083A (en) Matching process, device and the terminal device of user information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant