CN114529978A - Motion trend identification method and device

Motion trend identification method and device

Info

Publication number
CN114529978A
Authority
CN
China
Prior art keywords
dynamic
trend
gesture
target
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011210353.6A
Other languages
Chinese (zh)
Inventor
崔艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN202011210353.6A
Publication of CN114529978A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application is applicable to the field of human-computer interaction and provides a motion trend identification method and device. The identification method comprises: acquiring a plurality of recognition images, each containing at least one feature point related to a dynamic gesture; selecting a target feature point from the feature points contained in each recognition image, and determining the coordinate information of each target feature point; and outputting, according to the coordinate information, a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends. Based on the coordinate information of the target feature points, the method compares the deviation between the dynamic gesture and the preset candidate dynamic trends, determines the candidate trend closest to the dynamic gesture, and thereby identifies the motion trend while reducing the computation required for motion trend identification and improving the response rate.

Description

Motion trend identification method and device
Technical Field
The application belongs to the field of human-computer interaction, and particularly relates to a method and a device for identifying a motion trend.
Background
With technological progress, the field of human-computer interaction has developed greatly; in particular, gesture recognition increasingly replaces traditional human-computer interaction methods for controlling machines. In human-computer interaction, a terminal can generally generate a corresponding control instruction by recognizing a static or dynamic gesture of a user. For example, when photographing, the terminal generates a photographing instruction upon recognizing a static V-sign gesture; when displaying photos, the terminal generates a picture-switching instruction upon recognizing a dynamic gesture of the hand waving from right to left.
In the prior art, dynamic gesture recognition has developed over a long period, and the main implementation scheme is to train a dynamic gesture recognition model with a deep-learning neural network. This approach achieves a high recognition rate, but sample collection is difficult, the training period is long, running the model places high demands on hardware, and the large amount of computation increases computational pressure.
Disclosure of Invention
The embodiments of the application provide a method and a device for recognizing a motion trend. Based on the coordinate information of the target feature points of a dynamic gesture, the method compares the deviation between the dynamic gesture and a plurality of preset candidate dynamic trends, determines the target dynamic trend closest to the dynamic gesture, and thereby recognizes the motion trend, reducing the computation amount of motion trend recognition and improving the response rate. This solves the prior-art problems of difficult sample collection and long training periods in dynamic gesture recognition.
In a first aspect, an embodiment of the present application provides a method for identifying a motion trend, including: acquiring a plurality of identification images; each recognition image comprises at least one characteristic point related to the dynamic gesture; respectively selecting target feature points from the feature points contained in each identification image, and determining the coordinate information of each target feature point; and outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information.
In one possible implementation manner of the first aspect, a gesture image related to a gesture operation of a user is monitored in real time; if the gesture image is monitored to contain at least one feature point related to the dynamic gesture, identifying a static gesture corresponding to the gesture image, and determining the type of a target feature point based on the static gesture; acquiring a target image only containing target feature points corresponding to the target feature point types in a preset time period, and determining coordinate information of the target feature points in each target image; and determining a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to all the coordinate information.
It should be understood that, in the above possible implementation, the gesture operation of the user can be monitored in real time to suit specific application scenarios. Recognition of the static gesture is triggered when feature points related to a dynamic gesture are detected; the target feature point type is then determined from the static gesture, and the target images acquired within the preset time period contain only the target feature points corresponding to that type.
In a second aspect, an embodiment of the present application provides an apparatus for identifying a movement trend, including: the identification image acquisition module is used for acquiring a plurality of identification images; each recognition image comprises at least one characteristic point related to the dynamic gesture; the target characteristic point coordinate information determining module is used for respectively selecting target characteristic points from the characteristic points contained in each identification image and determining the coordinate information of each target characteristic point; and the target dynamic trend determining module is used for outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including: the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the first aspects described above.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the application have the following advantages: based on the coordinate information of the target feature points of a dynamic gesture, the deviation between the dynamic gesture and a plurality of preset candidate dynamic trends is compared, so that the target dynamic trend closest to the dynamic gesture is determined and the motion trend is identified. The method can replace prior-art deep-learning-based dynamic gesture recognition for realizing human-computer interaction based on dynamic gestures; it reduces the computation amount of motion trend recognition, improves the response rate, and solves the prior-art problems of difficult sample collection and long training periods when recognizing dynamic gestures with deep learning.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an implementation of the identification method provided in the first embodiment of the present application;
Fig. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
Fig. 3 is a flowchart of an implementation of the identification method provided in the second embodiment of the present application;
Fig. 4 is a flowchart of an implementation of the identification method provided in the third embodiment of the present application;
Fig. 5 is a flowchart of an implementation of the identification method provided in the fourth embodiment of the present application;
Fig. 6 is a flowchart of an implementation of the identification method provided in the fifth embodiment of the present application;
Fig. 7 is a flowchart of an implementation of the identification method provided in the sixth embodiment of the present application;
Fig. 8 is a block flow diagram of the identification method provided in the seventh embodiment of the present application;
Fig. 9 is a schematic structural diagram of an identification device provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the present application, the main execution body of the flow is a terminal device. Terminal devices include, but are not limited to, servers, computers, smartphones and tablet computers capable of executing the identification method provided by the application. Preferably, the terminal device is a device controllable by a user and can generate control instructions based on the user's dynamic gestures. Fig. 1 shows a flowchart of an implementation of the identification method provided in the first embodiment of the present application, detailed as follows:
in S101, a plurality of recognition images are acquired.
In this embodiment, each recognition image contains at least one feature point associated with the dynamic gesture. The feature point may be a feature point corresponding to any specific part on the hand of the user, such as an index finger feature point or a palm center feature point, and a finger-related feature point is preferred, because a general dynamic gesture is associated with the finger-related feature point.
In a possible implementation, acquiring the plurality of recognition images may specifically be: obtaining a plurality of recognition images related to the dynamic gesture through a feature point recognition device, which marks the feature points related to the dynamic gesture in gesture images of the user. The feature point recognition device can be any device capable of recognizing such feature points, and may be provided on the terminal device of the application or on separate, independent equipment. If it is provided on the terminal device, it can be an application program capable of recognizing gesture feature points; that is, the terminal device runs an application program that takes a plurality of gesture images as input and outputs a plurality of recognition images related to the dynamic gesture. If it is independent equipment, it can be connected to the terminal device through wired or wireless communication, and the terminal device receives, over the established connection, the plurality of recognition images related to the dynamic gesture fed back by the independent equipment based on the plurality of gesture images.
It should be understood that the size of the plurality of identification images obtained may be the same.
In S102, target feature points are selected from the feature points included in each of the recognition images, and coordinate information of each of the target feature points is determined.
In this embodiment, the type of the target feature point is a feature point type common to all recognition images, for example, if one recognition image includes an index finger feature point and a middle finger feature point, and the other recognition image includes an index finger feature point and a thumb feature point, the type of the target feature point should be the index finger feature point. The selecting of the target feature points from the feature points included in each recognition image may specifically be: and selecting target feature points corresponding to feature point types common to all the identification images from the feature points contained in each identification image. The target feature point may be a preset target feature point corresponding to a certain specific target feature point type, for example, a feature point at the center of a preset palm is used as the target feature point.
It should be understood that there is one and only one target feature point in each recognition image, and the coordinate information indicates the relative position of the target feature point in the recognition image. Determining the coordinate information of each target feature point may specifically be: establishing a coordinate system in the recognition image, where the origin of the coordinate system is the lower-left corner of the recognition image, the x-axis runs along the lower edge of the recognition image, and the y-axis runs along the left edge. It should be understood that this embodiment does not limit the manner in which the coordinate information is determined.
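For illustration, a minimal sketch of this coordinate convention in Python, assuming the recognition image is delivered with the usual top-left pixel origin (the function name and sample values are hypothetical):

```python
def to_bottom_left_coords(px: float, py: float, image_height: float) -> tuple:
    """Convert top-left-origin pixel coordinates (y grows downward) into the
    bottom-left-origin coordinate system described above (y grows upward)."""
    return (px, image_height - py)

# Hypothetical example: a feature point at pixel (120, 30) in a 480-pixel-high image.
x, y = to_bottom_left_coords(120, 30, 480)  # -> (120, 450)
```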
In S103, outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information.
In this embodiment, the candidate dynamic trends are preset, and may include two dynamic trends corresponding to horizontal and vertical directions, or four dynamic trends corresponding to horizontal, oblique upward (45 degrees), vertical, and oblique downward (45 degrees). It should be understood that the identification method provided by the embodiment of the present application is to reduce the calculation amount as much as possible when identifying the dynamic trend, and therefore the candidate dynamic trend is generally a straight line.
In a possible embodiment, the outputting the target dynamic trend corresponding to the dynamic gesture from the preset candidate dynamic trends according to the coordinate information may specifically be: calculating a linear equation of all candidate dynamic trends according to the coordinate information of any target feature point (preferably the first target feature point), wherein all the candidate dynamic trends pass through the target feature point; calculating the relative deviation value of all target characteristic points and any candidate dynamic trend; the calculation formula of the relative deviation value is as follows:
$$d_{ij} = \frac{a_j x_i + b_j y_i + c_j}{\sqrt{a_j^2 + b_j^2}}$$

$$L_j = \sum_{i=1}^{n} d_{ij}$$

where $L_j$ is the relative deviation value between all target feature points and the j-th candidate dynamic trend; $a_j$, $b_j$ and $c_j$ are the parameters of the line equation $a_j x + b_j y + c_j = 0$ of the j-th candidate dynamic trend; $x_i$ and $y_i$ are the abscissa and ordinate of the i-th target feature point; and $n$ is the number of target feature points;
and taking the candidate dynamic trend with the relative deviation value closest to zero as the target dynamic trend.
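As a non-authoritative sketch of this computation in Python, assuming each candidate trend is supplied as line parameters (a_j, b_j, c_j) of a line through the chosen target feature point; all function names are illustrative:

```python
import math
from typing import List, Tuple

Line = Tuple[float, float, float]  # (a, b, c) for the line a*x + b*y + c = 0

def relative_deviation(points: List[Tuple[float, float]], line: Line) -> float:
    """Sum of signed point-to-line distances; a value near zero means the
    target feature points scatter evenly around the candidate trend."""
    a, b, c = line
    norm = math.hypot(a, b)
    return sum((a * x + b * y + c) / norm for x, y in points)

def pick_target_trend(points: List[Tuple[float, float]],
                      candidates: List[Line]) -> int:
    """Index of the candidate trend whose relative deviation is closest to zero."""
    return min(range(len(candidates)),
               key=lambda j: abs(relative_deviation(points, candidates[j])))
```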
It should be understood that, in a possible application scenario, after the target dynamic trend corresponding to the dynamic gesture is determined, the start-point coordinate information and the end-point coordinate information in all the coordinate information are compared, so as to determine the direction and the displacement of the dynamic gesture, thereby completing the recognition of the dynamic gesture.
In this embodiment, based on the coordinate information of the target feature point of the dynamic gesture, the deviation between the dynamic gesture and a plurality of preset candidate dynamic trends is compared, so that the target motion trend closest to the dynamic gesture is determined, and the motion trend is identified. The recognition method provided by the embodiment can replace a method for recognizing dynamic gestures based on deep learning in the prior art to realize human-computer interaction based on the dynamic gestures, reduce the operation amount of motion trend recognition and improve the response rate.
Fig. 2 shows a schematic view of an application scenario provided in an embodiment of the present application. Referring to fig. 2, in the application scenario provided in this embodiment, the terminal device is a television, and the television is capable of generating a control instruction based on a dynamic gesture of a user. In the application scene, the television is internally provided with a camera and is used for acquiring a plurality of identification images related to the dynamic gesture, the acquired identification images contain characteristic points related to the dynamic gesture, index finger characteristic points are selected as target characteristic points, and a target dynamic trend corresponding to the dynamic gesture is determined from a plurality of preset candidate dynamic trends according to coordinate information of index finger characteristic points in each identification image. In the application scene, the dynamic gesture of the user is specifically that the index finger is stretched out, and other fingers are in a fist shape; illustratively, the preset candidate dynamic trends of the television set comprise a horizontal trend and a vertical trend, wherein the horizontal trend is used for adjusting the volume, and the vertical trend is used for adjusting the brightness. In a possible application scenario, when the user makes a dynamic gesture in which the index finger moves horizontally from left to right, the television recognizes a dynamic trend corresponding to the dynamic gesture as a horizontal trend based on the recognition method provided by the embodiment, and generates a control instruction for adjusting the volume. Subsequently, when the control instruction for adjusting the volume is responded, the television determines the volume change amount corresponding to the control instruction for adjusting the volume based on the distance between the start point coordinate information of the index finger characteristic point in the first identification image and the end point coordinate information of the index finger characteristic point in the last identification image, and adjusts the volume according to the volume change amount to finish the man-machine interaction.
Fig. 3 shows a flowchart of an implementation of the identification method according to the second embodiment of the present application. Referring to fig. 3, with respect to the embodiment shown in fig. 1, the identification method S101 provided in this embodiment includes S1011 to S1012, which are detailed as follows:
further, the acquiring a plurality of identification images includes:
in S1011, gesture images regarding the gesture operations of the user are monitored in real time.
In this embodiment, the monitoring of the gesture image related to the gesture operation of the user in real time may specifically be: and acquiring a gesture image related to the gesture operation of the user in real time through the camera. Generally, the shooting angle of the camera is fixed, the shooting picture of the camera corresponds to a preset gesture operation area, and a user makes a dynamic gesture in the gesture operation area to realize human-computer interaction.
In S1012, if it is monitored that the gesture image includes at least one feature point related to the dynamic gesture, a plurality of recognition images related to the dynamic gesture are obtained within a preset time period.
In this embodiment, the starting time of the preset time period is the acquisition time of the gesture image, the gesture image being the first one during monitoring that contains at least one feature point related to the dynamic gesture. The duration of the preset time period is preset and can be adjusted according to the required recognition accuracy. For example, if the monitoring frequency for gesture images of the user's gesture operation is ten frames per second and the duration of the preset time period is two seconds, then twenty frames of gesture images are acquired within the preset time period starting at the acquisition moment, yielding the plurality of recognition images related to the dynamic gesture.
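A minimal sketch of this acquisition window under the example figures above (ten frames per second for two seconds, i.e. twenty frames); `grab_frame` is a stand-in for whatever camera API the terminal device uses:

```python
import time
from typing import Callable, List

def collect_recognition_images(grab_frame: Callable[[], object],
                               fps: float = 10.0,
                               duration_s: float = 2.0) -> List[object]:
    """Collect frames for the preset time period, starting at the moment the
    first dynamic-gesture feature point was detected."""
    n_frames = int(fps * duration_s)  # 10 fps * 2 s = 20 frames
    frames = []
    for _ in range(n_frames):
        frames.append(grab_frame())
        time.sleep(1.0 / fps)  # crude pacing; a real capture loop would sync to the camera
    return frames
```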
In this embodiment, gesture images of the user's gesture operations are monitored in real time, so that the user can perform human-computer interaction through dynamic gestures, meeting the needs of specific application scenarios. The detection of feature points related to a dynamic gesture serves as the mark for starting dynamic-gesture-based human-computer interaction; a plurality of recognition images are then acquired within the following preset time period, so that the dynamic trend can subsequently be identified and a corresponding instruction generated based on it to complete the human-computer interaction.
Fig. 4 shows a flowchart of an implementation of the identification method S102 according to the third embodiment of the present application. Referring to fig. 4, with respect to the embodiment shown in fig. 1, the selecting of the target feature point from the feature points included in each of the recognition images in the recognition method S102 provided in this embodiment includes S401 to S404, which are detailed as follows:
in S401, a first frame identification image of the plurality of identification images is determined.
In this embodiment, when the plurality of recognition images are acquired, that is, when recognition starts for the dynamic gesture corresponding to the recognition images, the first frame contains the starting form of the dynamic gesture. This means the gesture in each subsequent frame of the plurality of recognition images evolves from the gesture in the first frame. It is therefore necessary to determine the first frame among the plurality of recognition images.
In S402, a static gesture corresponding to the first frame of recognition image is obtained.
In this embodiment, the first frame of recognition image includes at least one feature point associated with the dynamic gesture.
In a possible implementation, determining the static gesture corresponding to the first frame of the recognition images may specifically be: determining the static gesture according to the feature point type of the at least one feature point and the position information of each feature point in the first frame. Illustratively, a static gesture database is established that contains a plurality of candidate static gestures, each corresponding to a standard gesture image containing each feature point; the feature points of the first frame are compared with those of the standard gesture image of each candidate static gesture, and the candidate static gesture closest to the first frame is taken as its static gesture. Alternatively, the first frame is input into a static gesture recognition model for processing, and the corresponding static gesture is output; the static gesture recognition model can be any model, trained on a training set, that can determine the static gesture corresponding to the first frame from the information of its feature points.
It should be understood that the static gesture recognition model may be provided on other terminal devices on which the above-described operations of recognizing the static gesture may be performed. The terminal device, that is, the execution main body of this embodiment, may receive the static gesture corresponding to the first frame of recognition image fed back from another terminal device, without performing an operation of specifically recognizing the static gesture.
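A hedged sketch of the database-comparison variant described above; representing each gesture by its set of feature-point types and scoring candidates by Jaccard similarity is one plausible reading, not something the text mandates:

```python
from typing import Dict, Set

def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b)

def match_static_gesture(observed: Set[str],
                         gesture_db: Dict[str, Set[str]]) -> str:
    """Return the candidate static gesture whose standard feature-point set
    is most similar to the observed feature points."""
    return max(gesture_db, key=lambda g: jaccard(gesture_db[g], observed))

# Hypothetical database: each candidate static gesture maps to the
# feature-point types present in its standard gesture image.
db = {
    "index_finger_extended": {"index_tip"},
    "open_palm": {"thumb_tip", "index_tip", "middle_tip",
                  "ring_tip", "little_tip", "palm_center"},
}
print(match_static_gesture({"index_tip", "thumb_tip"}, db))  # index_finger_extended
```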
In S403, a target feature point type is determined based on the static gesture.
In this embodiment, the associations between the static gestures that may occur and each target feature point type are preset, and the target feature point type is determined based on the static gesture and these associations. For example, if the static gesture represents an extended index finger with the other fingers in a fist, the gesture indicates that the index finger will be the main carrier of the dynamic gesture in the subsequent frames; that is, the user will next make a "stroke" with the index finger to perform the intended dynamic gesture, so the target feature point type corresponding to that static gesture is the index finger feature point. Likewise, if the static gesture represents an open palm with all five fingers extended, the subsequent frames will use the palm as the main carrier of the dynamic gesture, so the target feature point type corresponding to that static gesture is the palm feature point; that is, the user's palm serves as the target feature point.
In S404, a feature point corresponding to the type of the target feature point is selected from feature points included in each of the recognition images as the target feature point.
By way of example and not limitation, the target feature point type is an index finger feature point. In this embodiment, taking an identification image as an example, the identification image includes at least one feature point related to a dynamic gesture, illustratively, an index finger feature point and a thumb feature point, and the index finger feature point of the identification image is selected as a target feature point of the identification image.
In this embodiment, the target feature point is determined according to the static gesture, making the selection of the target feature point more reasonable and better suited to actual application scenarios. When the coordinate information of each target feature point is subsequently determined to identify the target dynamic trend, only the target feature point is considered and other feature points are ignored, so the target dynamic trend can be determined while reducing the computation amount. It should be understood that the identification method provided by this embodiment is only suitable for application scenarios with simple dynamic trends; if it is necessary to identify a rotational dynamic trend of the user's hand, at least two target feature points are required, and this embodiment does not cover rotational dynamic trends.
Fig. 5 shows a flowchart of an implementation of the identification method according to the fourth embodiment of the present application. Referring to fig. 5, in the embodiment, with respect to fig. 1, before determining the target dynamic trend corresponding to the dynamic gesture from the preset multiple candidate dynamic trends according to all the coordinate information, the recognition method provided in this embodiment further includes S501 to S502, which are detailed as follows:
in S501, a dynamic trend center is determined based on the start point coordinate information and the end point coordinate information of all the coordinate information.
In this embodiment, a candidate dynamic trend characterizes the motion direction of the dynamic gesture and is generally a straight line. To facilitate the subsequent determination of the target dynamic trend corresponding to the dynamic gesture, the preset candidate dynamic trends are made to intersect at a single point, called the dynamic trend center. The start-point coordinate information is the coordinate information of the target feature point in the first frame of the plurality of recognition images, and the end-point coordinate information is that of the target feature point in the last frame.
Let the target feature point of the first frame of the recognition images be the start-point target feature point, and the target feature point of the last frame be the end-point target feature point. In a possible implementation, the midpoint of the line segment connecting the start-point and end-point target feature points is taken as the dynamic trend center; that is, the abscissa of the dynamic trend center is the average of the abscissas of the two points, and its ordinate is the average of their ordinates. Assuming the start-point target feature point has coordinates $(x_1, y_1)$ and the end-point target feature point has coordinates $(x_n, y_n)$, the coordinates of the dynamic trend center are

$$\left(\frac{x_1 + x_n}{2},\ \frac{y_1 + y_n}{2}\right)$$
In S502, a plurality of candidate dynamic trends are obtained based on the dynamic trend center.
In this embodiment, the candidate dynamic trends intersect at the dynamic trend center for easy differentiation, and the candidate dynamic trends are different in direction, which may be horizontal and vertical, or horizontal, obliquely upward (45 degrees), vertical, and obliquely downward (45 degrees), and the directions of the candidate dynamic trends are adjusted according to specific application scenarios.
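A sketch combining S501 and S502: the midpoint of the start-point and end-point target feature points becomes the dynamic trend center, and each candidate trend is built as a line through that center at a preset direction angle (the four angles used here are the horizontal, oblique and vertical examples from the text; 45° downward corresponds to a 135° line direction):

```python
import math
from typing import List, Tuple

Line = Tuple[float, float, float]  # (a, b, c) for the line a*x + b*y + c = 0

def trend_center(start: Tuple[float, float],
                 end: Tuple[float, float]) -> Tuple[float, float]:
    """S501: midpoint of the start-point and end-point target feature points."""
    return ((start[0] + end[0]) / 2.0, (start[1] + end[1]) / 2.0)

def candidate_trends(center: Tuple[float, float],
                     angles_deg: List[float]) -> List[Line]:
    """S502: one line per direction angle, all intersecting at the center.
    A line with direction angle t has normal vector (sin t, -cos t)."""
    cx, cy = center
    lines = []
    for deg in angles_deg:
        t = math.radians(deg)
        a, b = math.sin(t), -math.cos(t)
        lines.append((a, b, -(a * cx + b * cy)))
    return lines

# Horizontal, obliquely upward (45°), vertical, obliquely downward (135°).
center = trend_center((10.0, 12.0), (90.0, 14.0))
trends = candidate_trends(center, [0.0, 45.0, 90.0, 135.0])
```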
In the embodiment, a plurality of candidate dynamic trends are preset by determining a dynamic trend center so as to determine a target dynamic trend subsequently; particularly, the center of the connecting line between the starting point target feature point and the ending point target feature point is taken as the dynamic trend center, so that each target feature point can be dispersed around the dynamic trend center as much as possible, that is, each target feature point can be dispersed on two sides of any candidate dynamic trend as much as possible.
Fig. 6 shows a flowchart of an implementation of the identification method provided in the fifth embodiment of the present application. Referring to fig. 6, with respect to any embodiment described in fig. 1 to 4, the identification method S103 provided in this embodiment includes S601 to S603, which are detailed as follows:
further, the determining a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to all the coordinate information includes:
in S601, a distance value between any one of the candidate dynamic trends and each of the target feature points is determined.
In this embodiment, an arbitrary candidate dynamic trend is taken as an example for explanation, the distance value refers to a vertical distance between the target feature point and the candidate dynamic trend (straight line), and a calculation formula of the distance value is as follows:
$$d_{ij} = \frac{|a_j x_i + b_j y_i + c_j|}{\sqrt{a_j^2 + b_j^2}}$$

where $d_{ij}$ is the distance value from the i-th target feature point to the j-th candidate dynamic trend; $a_j$, $b_j$ and $c_j$ are the parameters of the line equation $a_j x + b_j y + c_j = 0$ of the j-th candidate dynamic trend; and $x_i$ and $y_i$ are the abscissa and ordinate of the i-th target feature point.
In S602, summing all distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend.
In the present embodiment, the calculation formula of the trend deviation amount is as follows:
$$D_j = \sum_{i=1}^{n} d_{ij}$$

where $D_j$ is the trend deviation amount between the dynamic gesture and the j-th candidate dynamic trend; $d_{ij}$ is the distance value from the i-th target feature point to the j-th candidate dynamic trend; and $n$ is the number of target feature points of the dynamic gesture.
In S603, a target dynamic trend is determined from the candidate dynamic trends according to the trend deviation amount.
In this embodiment, in order to obtain a target dynamic trend that can better represent a user operation, the determining a target dynamic trend from the candidate dynamic trends according to the trend deviation amount may specifically include: and comparing each candidate dynamic trend obtained in the above way with the trend deviation amount of the dynamic gesture, and taking the candidate dynamic trend with the minimum trend deviation amount as the target dynamic trend.
It should be understood that, to satisfy a certain accuracy requirement, the specific magnitude of the trend deviation amount should be considered when taking the candidate dynamic trend with the smallest trend deviation amount as the target dynamic trend. A deviation threshold can therefore be preset: if the minimum trend deviation amount is greater than the threshold, the dynamic gesture is recognized as an invalid gesture, that is, the dynamic trend corresponding to the dynamic gesture is null. This means the dynamic gesture plays no role in human-computer interaction: the terminal device either generates no control instruction upon receiving the dynamic gesture, or generates an invalid-input instruction to remind the user that the dynamic gesture input is invalid.
In this embodiment, the distance value between each target feature point and every candidate dynamic trend is calculated, which quantifies the difference between the dynamic gesture and each candidate dynamic trend; the candidate dynamic trend with the smallest total distance is then selected as the target dynamic trend closest to the dynamic gesture. Compared with the prior art, the target dynamic trend can be determined through simple calculation.
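Putting S601 to S603 together with the optional deviation threshold, a minimal sketch; the threshold value and the None-for-invalid convention are illustrative assumptions:

```python
import math
from typing import List, Optional, Tuple

Line = Tuple[float, float, float]  # (a, b, c) for the line a*x + b*y + c = 0

def point_line_distance(x: float, y: float, line: Line) -> float:
    """Perpendicular distance d_ij from a target feature point to a trend line."""
    a, b, c = line
    return abs(a * x + b * y + c) / math.hypot(a, b)

def select_trend(points: List[Tuple[float, float]],
                 candidates: List[Line],
                 deviation_threshold: float = 50.0) -> Optional[int]:
    """Index of the candidate with the smallest trend deviation amount D_j,
    or None when even the best candidate exceeds the preset threshold
    (the gesture is then treated as invalid)."""
    deviations = [sum(point_line_distance(x, y, line) for x, y in points)
                  for line in candidates]
    best = min(range(len(candidates)), key=deviations.__getitem__)
    if deviations[best] > deviation_threshold:
        return None  # invalid gesture: no control instruction is generated
    return best
```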
Fig. 7 shows a flowchart of an implementation of the identification method according to the sixth embodiment of the present application. Referring to fig. 7, with respect to any embodiment described in fig. 1 to 4, the identification method S103 provided in this embodiment includes S701 to S703, which are detailed as follows:
further, the determining a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to all the coordinate information includes:
in S701, a fitting straight line corresponding to the dynamic gesture is determined according to all the coordinate information.
In this embodiment, the fitted straight line represents the regression line of the target feature points; that is, compared with any other line, the fitted line has the smallest sum of squared errors with respect to the target feature points, and it can be expressed as a linear equation in two variables. Determining the fitted line corresponding to the dynamic gesture according to all the coordinate information, that is, computing the linear equation of the regression line from the coordinates of each target feature point, can be done with the least squares method of the prior art and is not repeated here.
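A least-squares sketch of S701, fitting y = kx + m and rewriting it in the general form kx - y + m = 0; note that a near-vertical stroke (all x nearly equal) makes the denominator vanish and would need special handling that this sketch omits:

```python
from typing import List, Tuple

def fit_line(points: List[Tuple[float, float]]) -> Tuple[float, float, float]:
    """Ordinary least squares fit of y = k*x + m, returned in general form
    (a, b, c) with a*x + b*y + c = 0, i.e. (k, -1, m)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    m = (sy - k * sx) / n                          # intercept
    return (k, -1.0, m)
```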
In S702, deviation angles of the fitted straight line and each of the candidate dynamic trends are calculated, respectively.
In this embodiment, the fitted straight line represents the direction of the dynamic gesture, and therefore, the difference between the fitted straight line and each candidate dynamic trend needs to be compared, and the minimum difference is selected as the target dynamic trend.
In a possible implementation, taking any one candidate dynamic trend as an example, the deviation angle between the fitted straight line and the candidate dynamic trend is calculated from the equation of the fitted line obtained in step S701 and the equation of the candidate dynamic trend. Illustratively, the equation of the fitted line is $a_0 x + b_0 y + c_0 = 0$, and the equation of the j-th candidate dynamic trend is $a_j x + b_j y + c_j = 0$. The deviation angle $\theta$ is calculated as:

$$\theta = \arctan\left|\frac{a_0 b_j - a_j b_0}{a_0 a_j + b_0 b_j}\right|$$

where $a_0$ and $b_0$ are the equation parameters of the fitted straight line, $a_j$ and $b_j$ are the equation parameters of the j-th candidate dynamic trend, and $\theta$ is the deviation angle.
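The deviation angle formula above in code form, a sketch matching the angle-between-lines formula as reconstructed here:

```python
import math
from typing import Tuple

Line = Tuple[float, float, float]  # (a, b, c) for the line a*x + b*y + c = 0

def deviation_angle(fitted: Line, trend: Line) -> float:
    """Acute angle (radians) between the fitted line and a candidate trend."""
    a0, b0, _ = fitted
    aj, bj, _ = trend
    num = abs(a0 * bj - aj * b0)
    den = a0 * aj + b0 * bj
    if den == 0:
        return math.pi / 2  # the two lines are perpendicular
    return math.atan(num / abs(den))
```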
In S703, a target dynamic trend is determined from the candidate dynamic trends according to the deviation angle.
In this embodiment, in order to obtain a target dynamic trend that can better represent a user operation, the determining a target dynamic trend from the candidate dynamic trends according to the deviation angle may specifically include: the deviation angle is used for representing the difference between the dynamic gesture and the candidate dynamic trend, and the candidate dynamic trend closest to the dynamic gesture is selected through the deviation angle, namely the candidate dynamic trend with the minimum deviation angle is selected as the target dynamic trend.
It should be understood that when the plurality of candidate dynamic trends is preset, a plurality of direction regions may be preset at the same time. Specifically, taking the positive direction of the x-axis as 0 and the negative direction as π, i.e., measuring the included angle with the positive x-axis, the interval from 0 to π can be divided evenly into direction regions according to the number of candidate dynamic trends. In this variant, the direction vector (or, equivalently, the slope) of the fitted line is determined, the corresponding target direction region is found from it, and the candidate dynamic trend corresponding to that region is taken as the target dynamic trend. In this case, only the direction vector (or slope) of the fitted line needs to be calculated and no other parameters, so the computation amount is reduced compared with the method of S701 to S703.
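A sketch of this cheaper variant, mapping the fitted line's slope to one of the evenly split direction regions; the half-region shift that centers region 0 on the horizontal trend is an assumption, and the invalid buffer regions mentioned in the next paragraph are omitted:

```python
import math

def direction_region(slope: float, n_trends: int) -> int:
    """Map the fitted line's slope to one of n_trends direction regions that
    evenly partition [0, pi), measured against the positive x-axis."""
    angle = math.atan(slope) % math.pi          # fold the line angle into [0, pi)
    width = math.pi / n_trends
    # Shift by half a region so region 0 is centered on the horizontal trend.
    return int(((angle + width / 2.0) % math.pi) // width)

# With four candidate trends (0°, 45°, 90°, 135°), a slope of 0.1
# falls in region 0, i.e. the horizontal trend.
print(direction_region(0.1, 4))  # 0
```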
It should be understood that, to meet a certain accuracy requirement, the specific magnitude of the deviation angle should be considered when taking the candidate dynamic trend with the smallest deviation angle as the target dynamic trend. A deviation angle threshold can therefore be preset: if the minimum deviation angle is greater than the threshold, the dynamic gesture is recognized as an invalid gesture, that is, the dynamic trend corresponding to the dynamic gesture is null. This means the dynamic gesture plays no role in human-computer interaction: the terminal device either generates no control instruction upon receiving the dynamic gesture, or generates an invalid-input instruction to remind the user that the dynamic gesture input is invalid. Similarly, when the interval from 0 to π is divided evenly into direction regions, the direction regions of the candidate dynamic trends should be set together with an invalid direction region between every two candidate dynamic trends, so as to satisfy a certain accuracy requirement.
Fig. 8 shows a flow chart of an identification method according to a seventh embodiment of the present application. Referring to fig. 8, with respect to any embodiment described in fig. 1 to 4, the identification method S103 provided in this embodiment includes S801 to S802, which are detailed as follows:
further, the outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information includes:
in S801, the number of the plurality of candidate dynamic trends is acquired.
In this embodiment, in order to reduce the calculation amount when the target dynamic trend is output, the number of the candidate dynamic trends is obtained, so that different calculation methods are subsequently selected based on the number for calculation, and the target dynamic trend is output according to the calculation result.
And in S802, outputting a target dynamic trend corresponding to the dynamic gesture according to the number and the coordinate information.
In this embodiment, calculation is needed when outputting the target dynamic trend, and as the number of candidate dynamic trends increases, the computation amounts of different calculation methods grow differently. Specifically, outputting the target dynamic trend corresponding to the dynamic gesture according to the number and the coordinate information includes S8021 to S8022:
In S8021, if the number is smaller than a preset threshold value: determining a distance value between any candidate dynamic trend and each target feature point; summing all the distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend; and selecting the candidate dynamic trend with the minimum trend deviation amount as the target dynamic trend.
In this embodiment, the specific implementation of S8021 may refer to the identification method provided in the fifth embodiment, and is not described herein again.
In S8022, if the number is greater than or equal to a preset threshold value: determining a fitting straight line corresponding to the dynamic gesture according to all coordinate information; respectively calculating deviation angles of the fitted straight line and each candidate dynamic trend; and selecting the candidate dynamic trend with the minimum deviation angle as a target dynamic trend.
In this embodiment, the specific implementation of S8022 may refer to the identification method provided in the sixth embodiment, which is not described herein again.
In this embodiment, S8021 or S8022 is selected as the means of identifying the target dynamic trend according to the number of candidate dynamic trends. When the number is large, S8022 only needs to determine one fitted line and then calculate the deviation angle between the fitted line and each candidate dynamic trend, whereas S8021 must calculate the distance value between every target feature point and every candidate dynamic trend, so the computation amount of S8021 exceeds that of S8022. When the number is small, S8021 only needs to calculate the distance values between the target feature points and the few candidate dynamic trends and need not determine a fitted line, so the computation amount of S8021 is smaller than that of S8022.
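A dispatch sketch for S801 to S802; the preset threshold value is illustrative, and the two callables stand for the fifth- and sixth-embodiment routines sketched earlier:

```python
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]
Line = Tuple[float, float, float]
Selector = Callable[[List[Point], List[Line]], Optional[int]]

def output_target_trend(points: List[Point], candidates: List[Line],
                        by_distance: Selector, by_angle: Selector,
                        threshold: int = 4) -> Optional[int]:
    """Dispatch between the fifth-embodiment routine (per-point distances,
    S8021) and the sixth-embodiment routine (fit once, compare angles, S8022)
    according to how many candidate trends are preset."""
    if len(candidates) < threshold:
        return by_distance(points, candidates)  # few trends: distances are cheaper
    return by_angle(points, candidates)         # many trends: one fit, one angle each
```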
In an identification method provided in another embodiment of the present application, relative to any embodiment described in fig. 1 to 4, the identification method S103 provided in this embodiment includes steps a to C, which are detailed as follows:
further, the determining a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to all the coordinate information includes:
step A: determining a distance value between any candidate dynamic trend and each target feature point; summing all the distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend; and selecting the candidate dynamic trend with the minimum trend deviation amount as the first target dynamic trend.
In this embodiment, the specific implementation of step A may refer to the identification method provided in the fifth embodiment and is not repeated here. The difference is that in step A the candidate dynamic trend with the smallest trend deviation amount is selected as the first target dynamic trend, rather than directly as the target dynamic trend.
And B: determining a fitting straight line corresponding to the dynamic gesture according to all coordinate information; respectively calculating deviation angles of the fitted straight line and each candidate dynamic trend; and selecting the candidate dynamic trend with the minimum deviation angle as a second target dynamic trend.
In this embodiment, the specific implementation of step B may refer to the identification method provided in the sixth embodiment and is not repeated here. The difference is that in step B the candidate dynamic trend with the smallest deviation angle is selected as the second target dynamic trend, rather than directly as the target dynamic trend.
And C: and if the first target dynamic trend is the same as the second target dynamic trend, taking the first target dynamic trend or the second target dynamic trend as a target dynamic trend.
It should be understood that, if the first target dynamic trend is different from the second target dynamic trend, the dynamic gesture is recognized as an invalid gesture, that is, the dynamic trend corresponding to the dynamic gesture is null, which means that the dynamic gesture will not play a role in human-computer interaction, that is, the terminal device will not generate a control instruction when receiving the dynamic gesture, or the terminal device generates an input invalid instruction when receiving the dynamic gesture, so as to remind the user that the dynamic gesture input is invalid.
In this embodiment, the target dynamic trend is validated by both the identification method of the fifth embodiment and that of the sixth embodiment, which improves the recognition precision of the target dynamic trend.
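A sketch of steps A to C, again with the two selection routines passed in as callables and None standing for an invalid gesture:

```python
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, float]
Line = Tuple[float, float, float]
Selector = Callable[[List[Point], List[Line]], Optional[int]]

def cross_checked_trend(points: List[Point], candidates: List[Line],
                        by_distance: Selector, by_angle: Selector) -> Optional[int]:
    """Step A: distance-based selection; step B: angle-based selection;
    step C: accept only when the two selections agree."""
    first = by_distance(points, candidates)
    second = by_angle(points, candidates)
    if first is not None and first == second:
        return first
    return None  # invalid gesture: the two methods disagree
```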
Fig. 9 shows a schematic structural diagram of an apparatus provided in an embodiment of the present application, corresponding to the method described in the above embodiment, and only shows a part related to the embodiment of the present application for convenience of description.
Referring to fig. 9, the motion tendency recognition apparatus includes: the identification image acquisition module is used for acquiring a plurality of identification images related to the dynamic gesture; each recognition image comprises at least one characteristic point related to the dynamic gesture; the target characteristic point coordinate information determining module is used for respectively selecting target characteristic points from the characteristic points contained in each identification image and determining the coordinate information of each target characteristic point; and the target dynamic trend determining module is used for determining a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to all the coordinate information.
Optionally, the identification image obtaining module includes: the gesture image monitoring module is used for monitoring a gesture image related to the gesture operation of the user in real time; and if the gesture image is monitored to contain at least one characteristic point related to the dynamic gesture, acquiring a plurality of identification images related to the dynamic gesture in a preset time period.
Optionally, the target feature point coordinate information determining module includes: a static gesture determination module, configured to determine a static gesture corresponding to a first frame of recognition image in the multiple recognition images; the target characteristic point type determining module is used for determining the type of the target characteristic point based on the static gesture; and the target characteristic point selection module is used for selecting the characteristic points corresponding to the types of the target characteristic points from the characteristic points contained in each identification image as the target characteristic points.
Optionally, the identification apparatus further includes: the dynamic trend center determining module is used for determining a dynamic trend center based on the starting point coordinate information and the end point coordinate information in all the coordinate information; the candidate dynamic trend setting module is used for obtaining a plurality of candidate dynamic trends based on the dynamic trend center; wherein the plurality of candidate dynamic trends intersect at the dynamic trend center.
Optionally, the target dynamic trend determining module includes: the distance value calculation module is used for determining the distance value between any candidate dynamic trend and each target feature point; the trend deviation amount calculation module is used for summing all distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend; and the trend deviation amount comparison module is used for selecting the candidate dynamic trend with the minimum trend deviation amount as the target dynamic trend.
Optionally, the target dynamic trend determining module includes: the fitted straight line determining module is used for determining a fitted straight line corresponding to the dynamic gesture according to all the coordinate information; the deviation angle calculation module is used for calculating the deviation angle between the fitted straight line and each candidate dynamic trend; and the deviation angle comparison module is used for selecting the candidate dynamic trend with the minimum deviation angle as the target dynamic trend.
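A corresponding sketch of the angle-based selection; the ordinary least-squares fit below is an assumed choice of fitting method, not one mandated by the application:

    import math

    def pick_trend_by_angle(points, trends):
        """Fit a straight line to all target feature points and select the
        candidate dynamic trend with the smallest deviation angle."""
        n = len(points)
        mx = sum(x for x, _ in points) / n
        my = sum(y for _, y in points) / n
        # Numerator and denominator of the least-squares slope.
        num = sum((x - mx) * (y - my) for x, y in points)
        den = sum((x - mx) ** 2 for x, _ in points) or 1e-9  # near-vertical guard
        fit_angle = math.atan2(num, den)  # direction angle of the fitted line
        def deviation(line):
            _, (dx, dy) = line
            d = abs(fit_angle - math.atan2(dy, dx)) % math.pi
            return min(d, math.pi - d)  # lines are undirected
        return min(trends, key=deviation)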
Optionally, the target dynamic trend determining module includes: the candidate dynamic trend number determining module is used for acquiring the number of candidate dynamic trends; the distance value calculation module is used for determining, if the number is smaller than a preset threshold, the distance value between any candidate dynamic trend and each target feature point; the trend deviation amount calculation module is used for summing all the distance values corresponding to that candidate dynamic trend to obtain the trend deviation amount between the dynamic gesture and the candidate dynamic trend; the trend deviation amount comparison module is used for selecting the candidate dynamic trend with the minimum trend deviation amount as the target dynamic trend; the fitted straight line determining module is used for determining, if the number is greater than or equal to the preset threshold, a fitted straight line corresponding to the dynamic gesture according to all the coordinate information; the deviation angle calculation module is used for calculating the deviation angle between the fitted straight line and each candidate dynamic trend; and the deviation angle comparison module is used for selecting the candidate dynamic trend with the minimum deviation angle as the target dynamic trend.
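Combining the two sketches above, a hypothetical dispatcher keyed on the number of candidate dynamic trends (the threshold value is an assumption) could read:

    def pick_target_trend(points, trends, threshold=4):
        """Use the per-point distance sum when there are few candidates;
        otherwise fit one line and compare angles."""
        if len(trends) < threshold:
            return pick_trend_by_distance(points, trends)
        return pick_trend_by_angle(points, trends)

The rationale is that the distance-based comparison sums over every target feature point for each candidate, while the angle-based comparison fits the line once and then performs only a single angle test per candidate, so its cost grows more slowly with the number of candidates.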
It should be noted that the information interaction and execution processes between the above modules, as well as their specific functions and technical effects, are based on the same concept as the method embodiments of the present application; for details, reference may be made to the method embodiment section, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 10 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 10, the terminal device 10 of this embodiment includes: at least one processor 100 (only one is shown in fig. 10), a memory 101, and a computer program 102 stored in the memory 101 and executable on the at least one processor 100. When executing the computer program 102, the processor 100 implements the steps in any of the method embodiments described above.
The terminal device 10 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device, preferably a device that can be controlled by a user and can generate control instructions based on the user's dynamic gestures. The terminal device may include, but is not limited to, the processor 100 and the memory 101. Those skilled in the art will appreciate that fig. 10 is merely an example of the terminal device 10 and does not constitute a limitation of the terminal device 10, which may include more or fewer components than those shown, a combination of some components, or different components, such as an input/output device or a network access device.
The processor 100 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 101 may, in some embodiments, be an internal storage unit of the terminal device 10, such as a hard disk or memory of the terminal device 10. In other embodiments, the memory 101 may be an external storage device of the terminal device 10, for example, a plug-in hard disk, a SmartMedia Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program 102. The memory 101 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A method for identifying a motion trend, characterized by comprising the following steps:
acquiring a plurality of identification images, wherein each identification image comprises at least one feature point related to a dynamic gesture;
respectively selecting target feature points from the feature points contained in each identification image, and determining the coordinate information of each target feature point;
and outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information.
2. The identification method according to claim 1, wherein the acquiring a plurality of identification images comprises:
monitoring, in real time, gesture images related to a gesture operation of a user;
and if a monitored gesture image contains at least one feature point related to the dynamic gesture, acquiring a plurality of identification images related to the dynamic gesture within a preset time period.
3. The identification method according to claim 1, wherein the respectively selecting target feature points from the feature points contained in each identification image comprises:
determining a first frame identification image among the plurality of identification images;
acquiring a static gesture corresponding to the first frame identification image;
determining a target feature point type based on the static gesture;
and respectively selecting, from the feature points contained in each identification image, the feature points corresponding to the target feature point type as the target feature points.
4. The identification method according to claim 1, wherein before the outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information, the method further comprises:
determining a dynamic trend center based on the start point coordinate information and the end point coordinate information among all the coordinate information;
and obtaining a plurality of candidate dynamic trends based on the dynamic trend center, wherein the plurality of candidate dynamic trends intersect at the dynamic trend center.
5. The identification method according to any one of claims 1 to 4, wherein the outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information comprises:
determining a distance value between any candidate dynamic trend and each target feature point;
summing all the distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend;
and determining the target dynamic trend from the candidate dynamic trends according to the trend deviation amounts.
6. The identification method according to any one of claims 1 to 4, wherein the outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information further comprises:
determining a fitted straight line corresponding to the dynamic gesture according to all the coordinate information;
respectively calculating a deviation angle between the fitted straight line and each candidate dynamic trend;
and determining the target dynamic trend from the candidate dynamic trends according to the deviation angles.
7. The identification method according to any one of claims 1 to 4, wherein the outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information further comprises:
acquiring the number of the candidate dynamic trends;
and outputting the target dynamic trend corresponding to the dynamic gesture according to the number and the coordinate information.
8. The identification method according to claim 7, wherein the outputting the target dynamic trend corresponding to the dynamic gesture according to the number and the coordinate information comprises:
if the number is smaller than a preset threshold: determining a distance value between any candidate dynamic trend and each target feature point; summing all the distance values corresponding to the candidate dynamic trend to obtain a trend deviation amount between the dynamic gesture and the candidate dynamic trend; and selecting the candidate dynamic trend with the minimum trend deviation amount as the target dynamic trend.
9. The identification method according to claim 7, wherein the outputting the target dynamic trend corresponding to the dynamic gesture according to the number and the coordinate information further comprises:
if the number is greater than or equal to the preset threshold: determining a fitted straight line corresponding to the dynamic gesture according to all the coordinate information; respectively calculating a deviation angle between the fitted straight line and each candidate dynamic trend; and selecting the candidate dynamic trend with the minimum deviation angle as the target dynamic trend.
10. An apparatus for identifying a motion trend, comprising:
an identification image acquisition module, used for acquiring a plurality of identification images, wherein each identification image comprises at least one feature point related to a dynamic gesture;
a target feature point coordinate information determining module, used for respectively selecting target feature points from the feature points contained in each identification image and determining the coordinate information of each target feature point;
and a target dynamic trend determining module, used for outputting a target dynamic trend corresponding to the dynamic gesture from a plurality of preset candidate dynamic trends according to the coordinate information.
11. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)

Application Number: CN202011210353.6A | Priority Date: 2020-11-03 | Filing Date: 2020-11-03 | Title: Motion trend identification method and device

Applications Claiming Priority (1)

Application Number: CN202011210353.6A | Priority Date: 2020-11-03 | Filing Date: 2020-11-03 | Title: Motion trend identification method and device

Publications (1)

Publication Number: CN114529978A | Publication Date: 2022-05-24

Family

ID=81619874

Family Applications (1)

Application Number: CN202011210353.6A | Title: Motion trend identification method and device | Priority Date: 2020-11-03 | Filing Date: 2020-11-03 | Status: Pending

Country Status (1)

Country: CN | Document: CN114529978A (en)

Similar Documents

Publication Publication Date Title
US11120254B2 (en) Methods and apparatuses for determining hand three-dimensional data
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN110532984B (en) Key point detection method, gesture recognition method, device and system
WO2022027912A1 (en) Face pose recognition method and apparatus, terminal device, and storage medium.
US20180088677A1 (en) Performing operations based on gestures
CN108960163B (en) Gesture recognition method, device, equipment and storage medium
US11062124B2 (en) Face pose detection method, device and storage medium
US9122353B2 (en) Kind of multi-touch input device
CN111815754A (en) Three-dimensional information determination method, three-dimensional information determination device and terminal equipment
US20220198836A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
WO2016165614A1 (en) Method for expression recognition in instant video and electronic equipment
CN107368181B (en) Gesture recognition method and device
CN111767965A (en) Image matching method and device, electronic equipment and storage medium
CN111199169A (en) Image processing method and device
CN111598149A (en) Loop detection method based on attention mechanism
CN111523387A (en) Method and device for detecting hand key points and computer device
CN109241942B (en) Image processing method and device, face recognition equipment and storage medium
CN112417985A (en) Face feature point tracking method, system, electronic equipment and storage medium
US20170085784A1 (en) Method for image capturing and an electronic device using the method
WO2023077665A1 (en) Palm position determination method and apparatus, and electronic device and storage medium
US20220050528A1 (en) Electronic device for simulating a mouse
CN114529978A (en) Motion trend identification method and device
KR20190132885A (en) Apparatus, method and computer program for detecting hand from video
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
CN114581535A (en) Method, device, storage medium and equipment for marking key points of user bones in image

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination