CN115686187A - Gesture recognition method and device, electronic equipment and storage medium



Publication number
CN115686187A
Authority
CN
China
Prior art keywords
gesture
information
motion
type
electromyographic
Prior art date
Legal status
Pending
Application number
CN202110872033.5A
Other languages
Chinese (zh)
Inventor
郝宁
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110872033.5A
Publication of CN115686187A


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the disclosure discloses a gesture recognition method, a gesture recognition device, an electronic device and a storage medium. The gesture recognition method comprises the following steps: acquiring image information based on a collected image of a gesture action; acquiring electromyographic information sent by a second device, wherein the electromyographic information is determined by the second device based on a collected electromyographic signal corresponding to the gesture action; and obtaining a gesture recognition result of the gesture action based on the image information and the electromyographic information. The gesture recognition disclosed by the embodiment of the disclosure can improve the accuracy of recognizing the gesture action.

Description

Gesture recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to but not limited to the field of gesture recognition or control technologies, and in particular, to a gesture recognition method and apparatus, an electronic device, and a storage medium.
Background
In the related art, human-computer interaction is mainly performed through contact interaction or non-contact interaction, wherein contact interaction can be performed by using interaction devices such as a keyboard, a mouse, a joystick, a touch screen or a microphone. However, existing contact interaction requires additional dedicated hardware devices, occupies both hands of the user, brings a poor experience to the user, and the interaction cannot be detected accurately.
Disclosure of Invention
The disclosure provides a gesture recognition method, a gesture recognition device, user equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a gesture recognition method, performed by a first device, the method comprising:
acquiring image information based on the acquired image of the gesture action;
acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and acquiring a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments, the obtaining a gesture recognition result of the gesture action based on the image information and the electromyographic information includes:
acquiring first gesture information of a first type of motion in the gesture motions based on the image information;
acquiring second gesture information of a second type of motion in the gesture motions based on the myoelectric information;
obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is larger than the motion amplitude of the second type of motion.
In some embodiments, the first type of action includes at least one of: arm movements, wrist movements and finger movements;
the second type of action includes at least one of: wrist movements, and finger movements.
In some embodiments, the obtaining second gesture information of a second type of motion in the gesture motions based on the electromyographic information includes:
determining a second type of motion in the gesture motions based on the electromyographic information;
determining the second gesture information based on the second type of action.
In some embodiments, the determining a second type of motion in the gesture motions based on the electromyographic information includes one of:
if the electromyographic information indicates that the electromyographic signal of at least one first detection area is larger than a preset threshold value, determining the finger action of a finger corresponding to the at least one first detection area;
if the electromyographic information indicates that the electromyographic signals of at least one first detection area and one second detection area are larger than the preset threshold value, determining that the wrist rotates;
the first detection area is an area corresponding to flexors of the fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
In some embodiments, the method further comprises:
acquiring motion information sent by the second device, wherein the motion information is determined by the second device collecting motion signals of the gesture actions;
the determining a second type of motion in the gesture motions based on the electromyographic information comprises:
and if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are smaller than or equal to the preset threshold value, determining the second type of action based on the motion information.
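A minimal sketch of this fallback, assuming the motion information arrives as a dictionary with an "action" field (an assumed shape; the disclosure does not fix one):

```python
def determine_second_type_action(emg_signals, motion_info, threshold=0.5):
    """emg_signals: amplitudes for the first and second detection areas.
    motion_info: motion information from the second device (assumed shape).
    The threshold value is an illustrative assumption."""
    if all(value <= threshold for value in emg_signals.values()):
        # Every detection area is at or below the threshold: the EMG signal
        # is too weak to discriminate, so fall back to the motion signal.
        return motion_info.get("action")
    # Otherwise the EMG-based rule of the previous embodiment applies.
    return "use_emg_rule"
```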
In some embodiments, the obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information includes:
acquiring first gesture information of a first type of action in the gesture actions based on the image information and the myoelectric information;
acquiring second gesture information of a second type of motion in the gesture motions based on the image information and the myoelectric information;
obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
In some embodiments, the obtaining first gesture information of a first type of motion in the gesture motions based on the image information and the electromyographic information includes:
acquiring first sub-gesture information of a first type of motion in the gesture motions based on the image information;
acquiring second sub-gesture information of the first type of motion in the gesture motions based on the electromyographic information;
obtaining the first gesture information based on the first sub-gesture information and a first weight, and the second sub-gesture information and a second weight; wherein the first weight is greater than the second weight;
and/or,
the acquiring of the second gesture information of the second type of motion in the gesture motions based on the image information and the myoelectric information includes:
acquiring third sub-gesture information of a second type of motion in the gesture motions based on the image information;
acquiring fourth sub-gesture information of a second type of motion in the gesture motions based on the electromyographic information;
obtaining the second gesture information based on the third sub-gesture information and a third weight, and the fourth sub-gesture information and a fourth weight; wherein the third weight is less than the fourth weight.
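The weighted combination above can be sketched numerically. Representing sub-gesture information as a single number (for example, a joint angle) is an assumption for illustration; the disclosure leaves the representation open.

```python
def fuse(sub_info_a, weight_a, sub_info_b, weight_b):
    # Normalized weighted average of two sub-gesture estimates.
    return (sub_info_a * weight_a + sub_info_b * weight_b) / (weight_a + weight_b)

# First type of action: the image-derived estimate dominates
# (first weight greater than second weight).
first_gesture = fuse(60.0, 0.8, 50.0, 0.2)   # closer to the image estimate

# Second type of action: the EMG-derived estimate dominates
# (third weight less than fourth weight).
second_gesture = fuse(30.0, 0.2, 40.0, 0.8)  # closer to the EMG estimate
```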
According to a second aspect of the embodiments of the present disclosure, there is provided a gesture recognition method, performed by a second device, including:
acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
collecting myoelectric signals corresponding to the gesture actions to obtain myoelectric information;
and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
According to a third aspect of the embodiments of the present disclosure, there is provided a gesture recognition method, performed by a third device, including:
acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a gesture recognition apparatus applied to a first device, including:
the first acquisition module is used for acquiring image information based on the acquired image of the gesture action;
the first acquisition module is used for acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and the first processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments, the first processing module is configured to obtain first gesture information of a first type of motion in the gesture motions based on the image information; acquiring second gesture information of a second type of motion in the gesture motions based on the myoelectric information;
the first processing module is used for obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is larger than the motion amplitude of the second type of motion.
In some embodiments, the first type of action includes at least one of: arm movements, wrist movements and finger movements;
the second type of action includes at least one of: wrist movements, and finger movements.
In some embodiments, the first obtaining module is configured to determine a second type of motion in the gesture motions based on the electromyographic information;
the first obtaining module is configured to determine the second gesture information based on the second type of action.
In some embodiments, the first obtaining module is configured to determine a finger movement of a finger corresponding to at least one first detection area if the electromyographic information indicates that an electromyographic signal of the at least one first detection area is greater than a predetermined threshold;
alternatively,
the first obtaining module is configured to determine wrist rotation if the electromyographic information indicates that the electromyographic signals of at least one first detection area and one second detection area are greater than the predetermined threshold;
the first detection area is an area corresponding to flexors of fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
In some embodiments, the apparatus further comprises:
the first obtaining module is configured to obtain motion information sent by the second device, where the motion information is determined by the second device collecting a motion signal of the gesture motion;
the first obtaining module is configured to determine the second type of motion based on the motion information if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are less than or equal to the predetermined threshold.
In some embodiments, the first processing module is configured to obtain first gesture information of a first type of motion in the gesture motions based on the image information and the electromyographic information; and acquire second gesture information of a second type of motion in the gesture motions based on the image information and the electromyographic information;
the first processing module is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
In some embodiments, the first processing module is configured to obtain, based on the image information, first sub-gesture information of a first type of motion in the gesture motions; and acquire second sub-gesture information of the first type of motion in the gesture motions based on the electromyographic information;
the first processing module is configured to obtain the first gesture information based on the first sub-gesture information and a first weight, and the second sub-gesture information and a second weight; wherein the first weight is greater than the second weight;
and/or,
the first processing module is used for acquiring third sub-gesture information of a second type of motion in the gesture motions based on the image information; acquiring fourth sub-gesture information of a second type of motion in the gesture motions based on the electromyographic information;
the first processing module is configured to obtain the second gesture information based on the third sub-gesture information and a third weight, and the fourth sub-gesture information and a fourth weight; wherein the third weight is less than the fourth weight.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a gesture recognition apparatus, performed by a second device, including:
the second acquisition module is used for acquiring image information sent by the first equipment, wherein the image information is determined by the first equipment based on the acquired image of the gesture action;
the second acquisition module is used for acquiring the electromyographic signals corresponding to the gesture actions to obtain electromyographic information;
and the second processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a gesture recognition apparatus, performed by a third device, including:
the third acquisition module is used for acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on the acquired image of the gesture action;
the third acquisition module is used for acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and the third processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
According to a seventh aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: when the executable instructions are executed, the gesture recognition method according to any embodiment of the disclosure is realized.
According to an eighth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the gesture recognition method according to any of the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, image information of a collected gesture action image can be obtained through first equipment, and electromyographic information obtained by collecting an electromyographic signal corresponding to a gesture action through second equipment is obtained through the first equipment; and acquiring a gesture recognition result of the gesture action based on the image information and the myoelectric information. Thus, in the embodiment of the present disclosure, the image information acquired by the first device and the myoelectric information acquired by the second device may be synchronized into the first device; the gesture motion is recognized by combining the image information and the myoelectric information. Therefore, on one hand, the cooperative processing of the gesture recognition actions of the first device and the second device can be realized, and on the other hand, the gesture recognition accuracy can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram illustrating a method of gesture recognition, according to an example embodiment.
FIG. 2 is a block diagram illustrating electromyography detection of a second device, according to an example embodiment.
FIG. 3 is a block diagram illustrating a first device image acquisition according to an example embodiment.
FIG. 4 is a schematic diagram illustrating a method of gesture recognition, according to an example embodiment.
FIG. 5 is a schematic diagram illustrating a method of gesture recognition, according to an example embodiment.
FIG. 6 is a schematic diagram illustrating a method of gesture recognition, according to an example embodiment.
FIG. 7 is a block diagram illustrating a gesture recognition apparatus according to an example embodiment.
FIG. 8 is a block diagram illustrating a gesture recognition apparatus according to an example embodiment.
FIG. 9 is a block diagram illustrating a gesture recognition apparatus according to an example embodiment.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
FIG. 1 is a diagram of a gesture recognition method in accordance with an exemplary embodiment; as shown in fig. 1, the gesture recognition method is performed by a first device, and includes the following steps:
step S11: acquiring image information based on the acquired image of the gesture action;
step S12: acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
step S13: and acquiring a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In one embodiment, the first device may be a mobile device or a stationary device; for example, the first device may be a cell phone, a computer, a server, a tablet, etc.
In one embodiment, the first device has an image acquisition module; the image acquisition module can be any device, assembly or equipment capable of acquiring images. For example, the image capture module of the first device may be a camera. The image acquisition module comprises an image recognition sensor.
In one embodiment, the second device may be, but is not limited to being, a wearable device; for example, the second device may be a watch or bracelet, etc.
In one embodiment, the acquiring of the electromyographic information sent by the second device in step S12 includes: acquiring electromyographic information sent by one or more second devices. For example, a second device is worn on the left hand and/or the right hand.
In one embodiment, the second device comprises: an electromyographic sensor. For example, as shown in fig. 2, the second device is worn on the arm of the left hand. The inner side of the second device is tightly attached to the skin of the arm, and the positions of main muscles and/or tendons of fingers, wrists and the like are tightly attached to the electromyographic sensor of the second device; such as the superficial flexor and tendon of the finger, the flexor longus and tendon of the thumb, and/or the flexor retinaculum of the finger, etc., are proximate to the second device electromyographic sensor.
The electromyographic sensor here may be a sensor comprising a single electrode, which may cover the major muscles and tendons of the fingers, the wrist, and the like. Alternatively, the electromyographic sensor may comprise a plurality of electrodes, wherein one electrode covers a major muscle and tendon of a finger, and another electrode covers a major muscle and tendon of the wrist. In this way, the electromyographic sensor can detect the electromyographic signal of each finger when the flexor digitorum superficialis contracts, as well as the electromyographic signal of the wrist's flexor retinaculum.
In one embodiment, the electrode area of the electromyographic sensor corresponding to the main muscles and/or tendons of the fingers is a first detection area, and the electrode area corresponding to the main muscles and/or tendons of the wrist is a second detection area. For example, after the bracelet is worn on the arm, the area of the flexor digitorum superficialis corresponding to an electrode of the electromyographic sensor of the second device is a first detection area; the area of the flexor pollicis longus of the thumb corresponding to an electrode of the electromyographic sensor of the second device is a first detection area; and the area of the wrist's flexor retinaculum corresponding to an electrode of the electromyographic sensor of the second device is a second detection area.
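The electrode-to-area correspondence can be sketched as a lookup table. The electrode indices and muscle names used as keys here are illustrative assumptions, not a layout specified by the disclosure:

```python
# Hypothetical mapping from electrode index to (detection-area type, muscle).
ELECTRODE_AREAS = {
    0: ("first", "flexor_digitorum_superficialis"),  # fingers
    1: ("first", "flexor_pollicis_longus"),          # thumb
    2: ("second", "flexor_retinaculum"),             # wrist
}

def areas_of_type(kind):
    """Return the electrode indices belonging to the given area type."""
    return [idx for idx, (area_type, _) in ELECTRODE_AREAS.items()
            if area_type == kind]
```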
In one embodiment, before the step S11, the method includes: and acquiring at least one frame of image of the gesture action. Illustratively, the camera of the first device captures at least one frame of image when the specific object performs the gesture motion. The particular object includes a person. For example, as shown in fig. 3, the first device is a mobile phone, and the mobile phone captures an image of a gesture motion by using a camera.
In one embodiment, the step S11 of acquiring image information based on the acquired image of the gesture action includes: acquiring an image containing a hand from the acquired image of the gesture action; and determining the image information based on the image of the hand. In this way, embodiments of the present disclosure may only need to perform image recognition on the image portion containing the hand, thereby saving overhead of the first device.
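The hand-cropping step can be sketched as a bounding-box crop. The image is modelled here as a plain 2-D list, and the hand box is assumed to come from an upstream hand detector, which is outside this sketch:

```python
def crop_to_hand(image, hand_box):
    """image: 2-D list of pixel rows; hand_box: (top, left, bottom, right)
    bounding box reported by a hand detector (assumed, not part of this sketch).
    Returns only the image portion containing the hand, so that later image
    recognition runs on a smaller input."""
    top, left, bottom, right = hand_box
    return [row[left:right] for row in image[top:bottom]]
```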
In one embodiment, the gesture actions include: a first type of action and/or a second type of action; wherein the magnitude of motion of the first type of motion is greater than the magnitude of motion of the second type of motion. For example, the first type of motion may be arm swing and the second type of motion may be wrist rotation or finger bending. As another example, a first type of motion may be a wrist rotation of 120 degrees; the second type of motion may be a 90 degree rotation of the wrist.
In other embodiments, the frequency of motion of the first type of motion is greater than the frequency of motion of the second type of motion.
In one embodiment, the gesture actions include, but are not limited to, at least one of: arm movements, wrist movements, and finger movements. Arm movements herein include, but are not limited to, arm swing and/or arm movement. Wrist movements herein include, but are not limited to, wrist rotation. Finger movements herein include, but are not limited to: bending of one finger, or bending of multiple fingers.
In one embodiment, the step S13 includes: and performing image recognition of the gesture action based on the image information, and performing electromyographic signal analysis of the gesture action based on the electromyographic information to obtain a gesture recognition result for recognizing the gesture action.
The image recognition of the gesture action on the image here may be: and comparing the gesture action contained in the image information with a preset gesture action, and determining the preset gesture action matched with the gesture action contained in the image information as a gesture recognition result.
Here, the electromyographic signal analysis for performing the gesture motion based on the electromyographic information may be: and determining a gesture recognition result of the gesture action according to the strength of the electromyographic signal and/or the detection area corresponding to the electromyographic signal.
The gesture recognition result of the gesture action herein may be, but is not limited to: a gesture action that controls the first device, a sign language action, or an augmented reality (AR) or virtual reality (VR) interaction. If the gesture action is an action controlling the first device, the gesture action may be, but is not limited to: starting or shutting down the first device, entering a predetermined application program (APP), or executing a predetermined operation in the predetermined APP. Here, entering the predetermined APP may be entering a shopping APP, entering a browser APP, or the like; the predetermined operation performed in the predetermined APP may be selecting an item in the shopping APP, reading an article in the browser APP, or the like.
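Mapping a recognition result to a first-device operation can be sketched with a lookup table. The gesture names and commands below are purely illustrative assumptions:

```python
# Hypothetical gesture-to-command table for controlling the first device.
GESTURE_COMMANDS = {
    "arm_swing_up": "start_app",     # e.g. enter a predetermined APP
    "wrist_rotation": "scroll",      # e.g. browse items in a shopping APP
    "index_finger_bend": "select",   # e.g. select an item
}

def dispatch(gesture_recognition_result):
    """Return the command for a recognized gesture, or ignore it."""
    return GESTURE_COMMANDS.get(gesture_recognition_result, "ignore")
```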
In the embodiment of the disclosure, image information of a collected gesture action image can be obtained through first equipment, and electromyographic information obtained by collecting an electromyographic signal corresponding to a gesture action through second equipment is obtained through the first equipment; and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information. Thus, in the embodiment of the disclosure, the image information acquired by the first device and the myoelectric information acquired by the second device may be synchronized into the first device; the gesture motion is recognized by combining the image information and the myoelectric information. Therefore, on one hand, the cooperative processing of the gesture recognition actions of the first device and the second device can be realized, and on the other hand, the gesture recognition accuracy can be improved.
As shown in fig. 4, in some embodiments, the step S13 includes:
step S131: acquiring first gesture information of a first type of motion in the gesture motions based on the image information;
step S132: acquiring second gesture information of a second type of motion in the gesture motions based on the electromyographic information;
step S133: obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is larger than the motion amplitude of the second type of motion.
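Steps S131 to S133 can be sketched as a small pipeline. The two extractor callables stand in for recognition models that the disclosure leaves unspecified:

```python
def recognize_gesture(image_info, emg_info,
                      first_type_from_image, second_type_from_emg):
    """Combine a coarse (first-type) action estimated from the image with a
    fine (second-type) action estimated from the EMG information."""
    first_gesture = first_type_from_image(image_info)    # step S131
    second_gesture = second_type_from_emg(emg_info)      # step S132
    # Step S133: the recognition result combines both pieces of information.
    return {"first_type": first_gesture, "second_type": second_gesture}
```

For instance, an arm swing recognized from the image can be combined with a finger bend recognized from the EMG signal.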
In one embodiment, the first type of action includes at least one of: arm movements, wrist movements and finger movements; the second type of action includes at least one of: wrist movements, and finger movements.
In one embodiment, the motion amplitude refers to the distance or angle through which a certain part of the body moves relative to a reference position. For example, the motion amplitude of the first type of motion being larger than that of the second type of motion may mean that: the distance the first type of motion moves relative to a reference position is greater than the distance the second type of motion moves relative to a reference position, and/or the angle through which the first type of motion rotates relative to a reference position is greater than the angle through which the second type of motion rotates. For example, take arm swing of the right hand as the first type of motion and wrist rotation of the right hand as the second type of motion: if the vertical direction of the arm is selected as the reference position, the first type of motion usually moves about 20 cm relative to the reference position, while the second type of motion moves about 5 cm; and/or, when the direction parallel to the vertical direction of the arm is selected as the rotation axis (the rotation axis then serving as the reference position), the first type of motion usually rotates by about 120 degrees relative to the rotation axis, while the second type of motion usually rotates by about 60 degrees. The motion amplitude of the first type of motion is therefore larger than that of the second type of motion.
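The amplitude comparison in the example above can be written out directly. The 20 cm / 5 cm and 120-degree / 60-degree figures come from the text; representing an amplitude as a (distance, angle) pair is an assumption for the sketch:

```python
def larger_amplitude(motion_a, motion_b):
    """Each motion is a (distance_cm, angle_deg) pair relative to the
    reference position; motion_a is larger if it moves farther and/or
    rotates through a greater angle."""
    return motion_a[0] > motion_b[0] or motion_a[1] > motion_b[1]

arm_swing = (20.0, 120.0)      # first type of motion (from the example)
wrist_rotation = (5.0, 60.0)   # second type of motion (from the example)
```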
Arm movements herein include, but are not limited to, at least one of: arm swing, or arm movement. Wrist movements herein include, but are not limited to: wrist rotation. Finger movements herein include, but are not limited to: finger bending.
The arm swing here may be a swing toward an arbitrary angle, such as an arm swing up, down, and so on. The wrist rotation may also be rotation at an arbitrary angle; for example, from top to bottom as shown in fig. 1, or a counterclockwise or clockwise rotation of the wrist. The finger here may also be bent at an arbitrary angle.
For example, the first device may acquire a first type of motion of arm swing through image information; the first device acquires the electromyographic information and determines a second type of movement of wrist rotation based on the electromyographic information. The first device may recognize the gesture motion in combination with the motions of arm swing and wrist rotation.
Illustratively, the first device can acquire a first type of motion of wrist rotation through image information; the first device acquires the electromyographic information and determines the second type of movement of finger bending based on the electromyographic information. The first device may recognize the gesture motion in combination with the motion of wrist rotation and finger bending.
The first gesture information here is information for describing the first type of action. For example, if the first type of motion is an arm swing motion, the first gesture information is related information describing the arm swing motion; for instance, the first gesture information describes that the arm swings from left to right through a swing angle of 60 degrees.
The second gesture information here is information for describing the second type of action. For example, if the second type of motion is a finger bending motion, the second gesture information is related information describing the finger bending motion; for instance, the second gesture information describes that the index finger of the right hand bends, and the degree to which the index finger bends.
In the embodiment of the disclosure, a first type of motion with a relatively large motion amplitude, such as an arm, wrist or finger motion, can be determined from the image information, and a second type of motion with a relatively small motion amplitude, such as a wrist or finger motion, can be determined from the electromyographic information; the embodiment of the disclosure can therefore recognize finer-grained gesture actions, further improving the accuracy of gesture recognition.
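The division of labor described above can be sketched as follows. This is a minimal illustration only, not the patented implementation; the function names, the amplitude estimate, and the 10 cm boundary are all assumptions introduced for the sketch:

```python
# Sketch: large-amplitude (first type) motions are recognized from image
# information, small-amplitude (second type) motions from EMG information.
# All names and the threshold below are illustrative assumptions.

AMPLITUDE_THRESHOLD_CM = 10.0  # assumed boundary between the two types

def classify_from_image(image_info):
    # Placeholder first-type classifier (e.g. detects an arm swing).
    return {"action": "arm_swing", "source": "image"}

def classify_from_emg(emg_info):
    # Placeholder second-type classifier (e.g. detects a finger bend).
    return {"action": "finger_bend", "source": "emg"}

def recognize(image_info, emg_info, amplitude_cm):
    """Route to the image-based or EMG-based classifier by motion amplitude."""
    if amplitude_cm > AMPLITUDE_THRESHOLD_CM:
        return classify_from_image(image_info)
    return classify_from_emg(emg_info)
```

A 20 cm arm swing would thus be routed to the image classifier, while a 5 cm wrist motion would be routed to the EMG classifier.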
In some embodiments, the step S131 includes:
determining a first type of motion in the gesture motions based on the image information;
determining the first gesture information based on the first type of action.
In some embodiments, the step S132 includes:
determining a second type of motion in the gesture motions based on the electromyographic information;
determining the second gesture information based on the second type of action.
In some embodiments, the determining a second type of motion in the gesture motions based on the electromyographic information includes one of:
if the electromyographic information indicates that the electromyographic signal of at least one first detection area is greater than a predetermined threshold, determining a finger action of the finger corresponding to the at least one first detection area;
if the electromyographic information indicates that the electromyographic signals of at least one first detection area and of the second detection area are both greater than the predetermined threshold, determining that the wrist rotates;
the first detection area is an area corresponding to flexors of fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
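The threshold rule above can be sketched as follows; this is a hedged illustration only, in which the channel naming (one normalized EMG value per detection area) and the 0.5 threshold value are assumptions:

```python
# Sketch of the decision rule: a first detection area (finger flexor)
# above the threshold indicates that finger bends; a first area and the
# second area (wrist flexor retinaculum) both above the threshold
# indicate wrist rotation. Names and threshold are assumptions.

PREDETERMINED_THRESHOLD = 0.5  # normalized EMG amplitude, assumed

def classify_emg(channels):
    """channels maps area name -> normalized EMG amplitude.
    Keys starting with 'finger_' are first detection areas;
    'wrist' is the second detection area."""
    bent_fingers = sorted(name for name, value in channels.items()
                          if name.startswith("finger_")
                          and value > PREDETERMINED_THRESHOLD)
    wrist_active = channels.get("wrist", 0.0) > PREDETERMINED_THRESHOLD
    if wrist_active and bent_fingers:
        return ("wrist_rotation", bent_fingers)
    if bent_fingers:
        return ("finger_bend", bent_fingers)
    return ("none", [])
```

With one finger channel above threshold the rule reports that finger bending; with a finger channel and the wrist channel both above threshold it reports wrist rotation instead.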
The gesture information here is information for describing a gesture motion. The first gesture information is used for describing information of a first type of action; the second gesture information is information for describing a second type of action.
The first detection area here may be a detection patch of one electrode of the myoelectric sensor; the second detection area may also be a detection patch area of one electrode of the myoelectric sensor.
The flexors of the fingers here include: the flexor digitorum superficialis and/or the flexor pollicis longus. For example, the flexor of a finger may be the flexor digitorum superficialis for the index finger, or the flexor pollicis longus for the thumb.
If the electromyographic signal of the first detection area corresponding to one finger is greater than the predetermined threshold, that finger is indicated to bend; if the electromyographic signals of the first detection areas corresponding to a plurality of fingers are greater than the predetermined threshold, those fingers are indicated to bend. If the electromyographic signals of the second detection area corresponding to the wrist and of the first detection area corresponding to at least one finger are both greater than the predetermined threshold, the wrist is indicated to rotate.
Here the magnitude of the electromyographic signal is proportional to the degree of bending of the finger, and likewise proportional to the amplitude of rotation of the wrist.
In the embodiment of the present disclosure, whether one or more fingers bend or the wrist rotates can be accurately determined based on the signal strength of the electromyographic signal and/or the position of the detection area corresponding to the electromyographic signal. Second-type actions with a smaller motion amplitude, such as finger and wrist actions, can therefore be accurately determined.
In some embodiments, the method further comprises:
acquiring motion information sent by the second device, wherein the motion information is determined by the second device collecting motion signals of the gesture actions;
the determining of the second type of motion in the gesture motions based on the electromyographic information comprises:
if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are less than or equal to the predetermined threshold, determining the second type of action based on the motion information.
In one embodiment, a motion sensor is also provided in the second device. The motion sensor includes, but is not limited to, at least one of: an acceleration sensor and an angular velocity sensor. The motion sensor here is used to detect the movement speed and/or movement direction of the arm, the wrist and the like; for example, to detect gesture motions of arm movement, gesture motions of wrist movement, and the like.
In the embodiment of the disclosure, if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are less than or equal to the predetermined threshold, the second type of action is determined based on the motion information. For example, when it is determined based on the electromyographic information that the wrist is not rotating, but the motion information indicates that the wrist is moving, the gesture motion of the wrist movement may be determined based on the motion information. In this way, the received motion information can assist the detection of the second type of action; thus, even when the electromyographic sensor fails to detect the electromyographic signal due to factors such as the external environment, the motion information can still assist the detection of the second type of action, reducing the probability of erroneous detection.
Of course, in other embodiments, the motion information may also be used to determine the first type of motion; for example, a first type of motion of arm movement may be determined based on the arm movement data in the motion information. The motion information can thus also assist the detection of the first type of motion, further improving the accuracy of gesture motion recognition.
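A hedged sketch of this fallback follows; the field names and the idea of reading a wrist speed from the motion (IMU) data are illustrative assumptions, not details from the embodiment:

```python
# Sketch: when no EMG channel crosses the threshold (e.g. the sensor is
# disturbed by the external environment), fall back to the motion
# (accelerometer / angular-velocity) information to detect the second
# type of action. Names and fields are assumptions for illustration.

PREDETERMINED_THRESHOLD = 0.5  # assumed normalized EMG threshold

def determine_second_type(emg_channels, motion_info):
    if any(v > PREDETERMINED_THRESHOLD for v in emg_channels.values()):
        # EMG is conclusive: derive the action from the EMG channels.
        return {"source": "emg"}
    # EMG inconclusive: use the motion information instead.
    if motion_info.get("wrist_speed_cm_s", 0.0) > 0.0:
        return {"source": "motion", "action": "wrist_movement"}
    return {"source": "motion", "action": "none"}
```

In the example from the text, an EMG reading below threshold combined with a nonzero wrist speed yields a wrist-movement gesture derived from the motion information.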
In some embodiments, the step S13 includes:
acquiring first gesture information of a first type of action in the gesture actions based on the image information and the myoelectric information;
acquiring second gesture information of a second type of motion in the gesture actions based on the image information and the myoelectric information;
obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
In the embodiment of the present disclosure, the image information and the myoelectric information each generally include first gesture information of the first type of motion and second gesture information of the second type of motion. The components of the first type of action and of the second type of action can therefore be determined from the image information and the myoelectric information respectively, and the recognition result of the gesture action can be accurately determined based on those components.
The first gesture information here includes: first sub-gesture information and/or second sub-gesture information; the second gesture information here includes: third sub-gesture information and/or fourth sub-gesture information. The first sub-gesture information is information describing the first type of action in the image information; the second sub-gesture information is information describing the first type of motion in the myoelectric information. The third sub-gesture information is information describing the second type of motion in the image information; the fourth sub-gesture information is information describing the second type of motion in the myoelectric information.
In some embodiments, the obtaining first gesture information of a first type of motion in the gesture motions based on the image information and the electromyographic information includes:
acquiring first sub-gesture information of a first type of motion in the gesture motions based on the image information;
acquiring second sub-gesture information of a first type of motion in the gesture actions based on the electromyographic information;
obtaining the first gesture information based on the first sub-gesture information with a first weight and the second sub-gesture information with a second weight; wherein the first weight is greater than the second weight;
the acquiring second gesture information of a second type of motion in the gesture actions based on the image information and the myoelectric information comprises:
acquiring third sub-gesture information of a second type of motion in the gesture motions based on the image information;
acquiring fourth sub-gesture information of a second type of motion in the gesture motions based on the electromyographic information;
obtaining the second gesture information based on the third sub-gesture information with a third weight and the fourth sub-gesture information with a fourth weight; wherein the third weight is less than the fourth weight.
Here, a weight is a ratio representing the proportion of the image information or the myoelectric information in the overall recognition; the weight may be used to indicate the degree of importance attached to each source for the first type of action or the second type of action. For example, in an application scenario, if a first type of motion with a relatively large motion amplitude needs to be determined, the first weight corresponding to the first sub-gesture information in the image information may be set greater than the second weight corresponding to the second sub-gesture information in the electromyographic information; for example, the first weight may be 0.8, 0.9 or 1, with the corresponding second weight being 0.2, 0.1 or 0. For another example, if a second type of motion with a relatively small motion amplitude needs to be determined, the third weight corresponding to the third sub-gesture information in the image information may be set smaller than the fourth weight corresponding to the fourth sub-gesture information in the electromyographic information; for example, if the third weight is 0.2, 0.1 or 0, the corresponding fourth weight is 0.8, 0.9 or 1.
The weight here may be determined based on historical information. For example, where historical records stored in the electronic device indicate that, when the first type of action was determined, the likelihood of its being determined from the image information was 80%, the first weight corresponding to the first sub-gesture information in the image information may be determined to be 0.8, and the second weight corresponding to the second sub-gesture information in the electromyographic information to be 0.2.
The weight here may also be set in advance. For example, for the first type of motion, the first weight corresponding to the first sub-gesture information in the image information is set to 0.9, and the second weight corresponding to the second sub-gesture information in the electromyographic information is set to 0.1.
In one embodiment, the obtaining the first gesture information based on the first sub-gesture information and a first weight and the second sub-gesture information and a second weight comprises:
determining a first numerical value based on the product of the first sub-gesture information and the first weight;
determining a second numerical value based on the product of the second sub-gesture information and the second weight;
determining the first gesture information based on a sum of the first numerical value and the second numerical value.
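The weighted sum just described can be sketched as follows. Representing each piece of sub-gesture information as a per-class score dictionary, and the concrete 0.8 / 0.2 split, are assumptions made purely for illustration:

```python
# Sketch of the weighted combination: the fused gesture information is
# (image score x its weight) + (EMG score x its weight), per class.
# Score dictionaries and the example weights are assumptions.

def fuse(scores_a, scores_b, weight_a, weight_b):
    """Per-class weighted sum of two score dictionaries."""
    classes = set(scores_a) | set(scores_b)
    return {c: weight_a * scores_a.get(c, 0.0) + weight_b * scores_b.get(c, 0.0)
            for c in classes}

# First-type fusion: the image weight exceeds the EMG weight, so the
# image-derived estimate dominates the fused first gesture information.
fused = fuse({"arm_swing": 0.9, "still": 0.1},   # first sub-gesture info (image)
             {"arm_swing": 0.4, "still": 0.6},   # second sub-gesture info (EMG)
             weight_a=0.8, weight_b=0.2)
best = max(fused, key=fused.get)  # "arm_swing"
```

For second-type fusion one would instead make the EMG weight the larger one (e.g. 0.1 / 0.9), so that the electromyographic estimate dominates.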
In one embodiment, the first weight may be 1 and the second weight 0. In this embodiment, the image information is used to determine the first type of motion and the electromyographic information is not.
Illustratively, if first sub-gesture information indicating wrist rotation is determined based on the image information, but second sub-gesture information indicating that the wrist does not rotate is determined based on the electromyographic information, then, since the first weight is greater than the second weight, a gesture action of wrist rotation is determined based on the first sub-gesture information with the first weight and the second sub-gesture information with the second weight.
Of course, in other embodiments, it is only necessary that the first weight be greater than the second weight. In this way, the image information, which can determine the first type of motion more accurately than the myoelectric information, is given the greater influence.
In one embodiment, the obtaining the second gesture information based on the third sub-gesture information and the third weight and the fourth sub-gesture information and the fourth weight comprises:
determining a third numerical value based on the product of the third sub-gesture information and the third weight;
determining a fourth numerical value based on the product of the fourth sub-gesture information and the fourth weight;
determining the second gesture information based on the sum of the third numerical value and the fourth numerical value.
In some embodiments, the third weight may be 0 and the fourth weight 1. In this embodiment, the image information is not used to determine the second type of motion and the electromyographic information alone is used to determine it.
For example, if third sub-gesture information indicating that the index finger bends is determined based on the image information, but fourth sub-gesture information indicating that the index finger does not bend and the middle finger bends is determined based on the electromyographic information, then a gesture action in which the index finger does not bend and the middle finger bends is determined based on the third sub-gesture information with the third weight and the fourth sub-gesture information with the fourth weight.
Of course, in other embodiments, it is only necessary that the third weight be less than the fourth weight. In this way, the myoelectric information, which can determine the second type of motion more accurately than the image information, is given the greater influence.
In the embodiment of the present disclosure, the components of the first type of motion and of the second type of motion in the image information may be determined, as may the components of the first and second types of motion in the electromyographic information; corresponding weights are then determined according to the degrees of importance of the first and second types of action in the image information, and likewise in the electromyographic information. In this way, the embodiment of the disclosure can extract the components of the two types of motion from the image information and the myoelectric information, match them with appropriate weights, and perform gesture recognition based on those components and their corresponding weights, so that the recognition result of the gesture action can be accurately determined.
In addition, considering that the first type of action can be recognized more accurately from the image information than from the myoelectric information, the first weight for the first type of action in the image information is set greater than the second weight for the first type of action in the myoelectric information; and, considering that the second type of action can be recognized more accurately from the electromyographic information than from the image information, the fourth weight for the second type of action in the electromyographic information is set greater than the third weight for the second type of action in the image information. Appropriate weights can thus be assigned to the various actions in the image information and the myoelectric information, making the gesture recognition result of the gesture action more accurate.
As shown in fig. 5, an embodiment of the present disclosure provides a gesture recognition method, which is performed by a second device, and includes:
step S21: acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
step S22: collecting myoelectric signals corresponding to the gesture actions to obtain myoelectric information;
step S23: and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments of the present disclosure, any of the embodiments included in step S13 performed by the first device may be performed by the second device.
In the embodiment of the disclosure, the second device can acquire the image information of the image of the gesture action captured by the first device, and collect the electromyographic signal corresponding to the gesture action to obtain the electromyographic information; the gesture recognition result of the gesture action is then obtained based on the image information and the myoelectric information. Thus, the image information acquired by the first device can be synchronized to the second device alongside the myoelectric information collected by the second device, and the gesture motion recognized by combining the two. On one hand, this realizes cooperative processing of gesture recognition between the first device and the second device; on the other hand, it can improve the accuracy of gesture recognition.
As shown in fig. 6, an embodiment of the present disclosure provides a gesture recognition method, performed by a third device, including:
step S31: acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
step S32: acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
step S33: and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments of the present disclosure, any of the embodiments included in step S13 performed by the first device may be performed by the third device.
In one embodiment, the third device may be any mobile device or fixed device other than the first device and the second device; for example, the third device may be a cell phone, a computer, a server, a tablet, etc.
In the embodiment of the disclosure, the third device can acquire the image information of the image of the gesture action captured by the first device, and acquire the electromyographic information obtained by the second device collecting the electromyographic signal corresponding to the gesture action; the gesture recognition result of the gesture action is then obtained based on the image information and the myoelectric information. Thus, the image information acquired by the first device and the myoelectric information collected by the second device can be synchronized to the third device, and the gesture motion recognized by combining the two. On one hand, this realizes cooperative processing of gesture recognition among the first device, the second device and the third device; on the other hand, it can improve the accuracy of gesture recognition.
Fig. 7 provides a gesture recognition apparatus according to an exemplary embodiment, which is applied to a first device, and includes:
a first acquisition module 41, configured to obtain image information based on an acquired image of the gesture motion;
the first obtaining module 42 is configured to obtain electromyographic information sent by a second device, where the electromyographic information is determined by the second device based on acquiring an electromyographic signal corresponding to the gesture motion;
the first processing module 43 is configured to obtain a gesture recognition result of the gesture motion based on the image information and the myoelectric information.
In some embodiments, the first processing module 43 is configured to obtain first gesture information of a first type of motion in the gesture motions based on the image information, and obtain second gesture information of a second type of motion in the gesture actions based on the electromyographic information;
the first processing module 43 is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
In some embodiments, the first type of action comprises at least one of: arm movements, wrist movements and finger movements;
the second type of action includes at least one of: wrist movements, and finger movements.
In some embodiments, the first obtaining module 42 is configured to determine a second type of motion in the gesture motions based on the electromyographic information;
the first obtaining module 42 is configured to determine the second gesture information based on the second type of motion.
In some embodiments, the first obtaining module 42 is configured to determine a finger movement of a finger corresponding to at least one first detection area if the electromyographic information indicates that an electromyographic signal of the at least one first detection area is greater than a predetermined threshold;
or,
the first obtaining module 42 is configured to determine that the wrist rotates if the electromyographic information indicates that the electromyographic signals of at least one first detection area and of the second detection area are both greater than the predetermined threshold;
the first detection area is an area corresponding to flexors of fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
In some embodiments, the apparatus further comprises:
the first obtaining module 42 is configured to obtain motion information sent by the second device, where the motion information is determined by the second device collecting a motion signal of the gesture motion;
the first obtaining module 42 is configured to determine the second type of action based on the motion information if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are less than or equal to the predetermined threshold.
In some embodiments, the first processing module 43 is configured to obtain first gesture information of a first type of motion in the gesture motions based on the image information and the electromyographic information; acquiring second gesture information of a second type of motion in the gesture motions based on the image information and the myoelectric information;
the first processing module 43 is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
In some embodiments, the first processing module 43 is configured to obtain first sub-gesture information of a first type of motion in the gesture motions based on the image information, and obtain second sub-gesture information of a first type of motion in the gesture actions based on the electromyographic information;
the first processing module 43 is configured to obtain the first gesture information based on the first sub-gesture information with a first weight and the second sub-gesture information with a second weight; wherein the first weight is greater than the second weight.
In some embodiments, the first processing module 43 is configured to obtain third sub-gesture information of a second type of motion in the gesture motions based on the image information, and obtain fourth sub-gesture information of a second type of motion in the gesture actions based on the electromyographic information;
the first processing module 43 is configured to obtain the second gesture information based on the third sub-gesture information with a third weight and the fourth sub-gesture information with a fourth weight; wherein the third weight is less than the fourth weight.
Fig. 8 provides a gesture recognition apparatus according to an exemplary embodiment, which is applied to a second device, and includes:
a second obtaining module 51, configured to obtain image information sent by a first device, where the image information is determined by the first device based on an acquired image of a gesture motion;
the second acquisition module 52 is configured to acquire an electromyographic signal corresponding to the gesture motion to obtain electromyographic information;
and the second processing module 53 is configured to obtain a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments of the disclosure, the second processing module is capable of performing all of the methods that the first processing module is capable of performing; for example, the second processing module 53 is configured to obtain, based on the image information, first gesture information of a first type of motion in the gesture motions; acquiring second gesture information of a second type of motion in the gesture motions based on the myoelectric information; the second processing module 53 is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information.
Fig. 9 provides a gesture recognition apparatus according to an exemplary embodiment, which is applied to a third device, and includes:
a third obtaining module 61, configured to obtain image information sent by a first device, where the image information is determined by the first device based on an acquired image of a gesture motion;
the third obtaining module 61 is configured to obtain electromyographic information sent by a second device, where the electromyographic information is determined by the second device based on acquiring an electromyographic signal corresponding to the gesture motion;
and a third processing module 62, configured to obtain a gesture recognition result of the gesture action based on the image information and the myoelectric information.
In some embodiments of the disclosure, the third processing module is capable of performing all of the methods that the first processing module is capable of performing; for example, the third processing module 62 is configured to obtain first gesture information of a first type of motion in the gesture motions based on the image information, and obtain second gesture information of a second type of motion in the gesture actions based on the electromyographic information; the third processing module 62 is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present disclosure further provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: when the executable instructions are executed, the gesture recognition method according to any embodiment of the disclosure is realized.
In one embodiment, the electronic device is the first device, the second device, or the third device of the above embodiments.
The memory may include various types of storage media, which are non-transitory computer storage media capable of retaining the information stored thereon after the communication device is powered down.
The processor may be connected to the memory via a bus or the like for reading the executable program stored on the memory, for example, for implementing at least one of the methods as shown in fig. 1, 4 to 6.
Embodiments of the present disclosure also provide a computer-readable storage medium, which stores an executable program, where the executable program, when executed by a processor, implements the gesture recognition method according to any embodiment of the present disclosure. For example, at least one of the methods shown in fig. 1, 4 to 6 is implemented.
Fig. 10 is a block diagram illustrating an electronic device 800 according to an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The power component 806 provides power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 can detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 can also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (22)

1. A gesture recognition method, performed by a first device, comprising:
acquiring image information based on the acquired images of the gesture actions;
acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and acquiring a gesture recognition result of the gesture action based on the image information and the myoelectric information.
2. The method according to claim 1, wherein the obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information comprises:
acquiring first gesture information of a first type of motion in the gesture motions based on the image information;
acquiring second gesture information of a second type of motion in the gesture actions based on the electromyographic information;
obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
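The split in claim 2 can be sketched in code: large-amplitude (first-type) motions are classified from the image information, small-amplitude (second-type) motions from the electromyographic information, and the two partial results are merged into one recognition result. All function names, field names, and thresholds below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of claim 2's amplitude-based split between modalities.
def classify_from_image(image_info: dict) -> str:
    # Placeholder: a real system would run a vision model on hand/arm key points.
    return "arm_raise" if image_info.get("arm_displacement", 0) > 0.2 else "arm_still"

def classify_from_emg(emg_info: dict) -> str:
    # Placeholder: a real system would decode per-channel EMG features.
    return "finger_pinch" if emg_info.get("channel_peak", 0) > 0.5 else "finger_rest"

def recognize_gesture(image_info: dict, emg_info: dict) -> dict:
    first_gesture = classify_from_image(image_info)   # first type: large-amplitude, e.g. arm
    second_gesture = classify_from_emg(emg_info)      # second type: small-amplitude, e.g. finger
    # The combined result keeps the coarse pose from vision and the fine pose
    # from EMG, mirroring the motion-amplitude split in claim 2.
    return {"coarse": first_gesture, "fine": second_gesture}
```

The design choice the claim encodes is that a camera resolves gross limb motion well but struggles with subtle finger flexion, while surface EMG has the opposite strength.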
3. The method of claim 2,
the first type of action comprises at least one of: arm movements, wrist movements and finger movements;
the second type of action includes at least one of: wrist movements, and finger movements.
4. The method according to claim 2 or 3, wherein the obtaining of the second gesture information of the second type of motion in the gesture actions based on the electromyographic information comprises:
determining a second type of motion in the gesture motions based on the electromyographic information;
determining the second gesture information based on the second type of action.
5. The method according to claim 4, wherein the determining a second type of motion in the gesture actions based on the electromyographic information comprises one of:
if the electromyographic information indicates that the electromyographic signal of at least one first detection area is larger than a preset threshold value, determining the finger action of a finger corresponding to the at least one first detection area;
if the electromyographic information indicates that the electromyographic signals of at least one first detection area and one second detection area are larger than the preset threshold value, determining wrist rotation;
the first detection area is an area corresponding to flexors of the fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
6. The method of claim 5, further comprising:
acquiring motion information sent by the second device, wherein the motion information is determined by the second device collecting motion signals of the gesture actions;
the determining a second type of motion in the gesture motions based on the electromyographic information comprises:
and if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are smaller than or equal to the preset threshold value, determining the second type of action based on the motion information.
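The decision logic of claims 5 and 6 can be illustrated as follows. The names and the concrete threshold are assumptions; the patent only specifies the structure: EMG amplitudes from the finger-flexor ("first") detection areas and the wrist flexor-retinaculum ("second") detection area are compared against a predetermined threshold, and if neither exceeds it, the method falls back to the motion (e.g. inertial) information supplied by the second device.

```python
# Illustrative decision tree for claims 5-6 (threshold and names are hypothetical).
THRESHOLD = 0.5  # stand-in for the patent's "predetermined threshold"

def second_type_action(first_areas: dict, second_area: float, motion_info: str) -> str:
    # first_areas: EMG amplitude per finger-flexor detection area
    # second_area: EMG amplitude of the wrist flexor-retinaculum area
    active_fingers = [f for f, v in first_areas.items() if v > THRESHOLD]
    if active_fingers and second_area > THRESHOLD:
        return "wrist_rotation"                              # claim 5, second branch
    if active_fingers:
        return "finger_action:" + ",".join(active_fingers)   # claim 5, first branch
    return "from_motion:" + motion_info                      # claim 6 fallback
```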
7. The method according to claim 1, wherein the obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information comprises:
acquiring first gesture information of a first type of action in the gesture actions based on the image information and the myoelectric information;
acquiring second gesture information of a second type of motion in the gesture motions based on the image information and the myoelectric information;
obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is larger than the motion amplitude of the second type of motion.
8. The method according to claim 7, wherein the obtaining first gesture information of a first type of motion in the gesture motions based on the image information and the electromyographic information comprises:
acquiring first sub-gesture information of a first type of motion in the gesture motions based on the image information;
acquiring second sub-gesture information of a first type of motion in the gesture actions based on the electromyographic information;
obtaining the first gesture information based on the first sub-gesture information with a first weight and the second sub-gesture information with a second weight; wherein the first weight is greater than the second weight;
and/or,
the acquiring of the second gesture information of the second type of motion in the gesture motions based on the image information and the myoelectric information includes:
acquiring third sub-gesture information of a second type of motion in the gesture motions based on the image information;
acquiring fourth sub-gesture information of a second type of motion in the gesture motions based on the electromyographic information;
obtaining the second gesture information based on the third sub-gesture information with a third weight and the fourth sub-gesture information with a fourth weight; wherein the third weight is less than the fourth weight.
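A minimal numeric sketch of the weighted fusion in claim 8: the patent gives no concrete weights, so the 0.7/0.3 split below is an assumption used only to show that the image estimate dominates for large-amplitude (first-type) motions while the EMG estimate dominates for small-amplitude (second-type) motions. Gesture "information" is reduced here to a scalar score for simplicity.

```python
# Hypothetical weighted fusion per claim 8; weights 0.7/0.3 are assumed, not claimed.
def fuse(image_score: float, emg_score: float, image_weight: float) -> float:
    emg_weight = 1.0 - image_weight
    return image_weight * image_score + emg_weight * emg_score

def first_gesture_info(image_score: float, emg_score: float) -> float:
    # First weight (image) > second weight (EMG) for large-amplitude motions.
    return fuse(image_score, emg_score, image_weight=0.7)

def second_gesture_info(image_score: float, emg_score: float) -> float:
    # Third weight (image) < fourth weight (EMG) for small-amplitude motions.
    return fuse(image_score, emg_score, image_weight=0.3)
```

Compared with the hard split of claim 2, this variant keeps both modalities in every estimate and only biases the mixture, which is more robust when one sensor is briefly occluded or noisy.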
9. A gesture recognition method, performed by a second device, comprising:
acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
collecting myoelectric signals corresponding to the gesture actions to obtain myoelectric information;
and obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
10. A gesture recognition method, performed by a third device, comprising:
acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on an acquired image of a gesture action;
acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and acquiring a gesture recognition result of the gesture action based on the image information and the myoelectric information.
11. A gesture recognition device applied to a first device comprises:
the first acquisition module is used for acquiring image information based on the acquired image of the gesture action;
the first acquisition module is used for acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and the first processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
12. The apparatus of claim 11,
the first processing module is used for acquiring first gesture information of a first type of motion in the gesture motions based on the image information; and acquiring second gesture information of a second type of motion in the gesture actions based on the electromyographic information;
the first processing module is configured to obtain a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is greater than the motion amplitude of the second type of motion.
13. The apparatus of claim 12,
the first type of action comprises at least one of: arm movements, wrist movements and finger movements;
the second type of action includes at least one of: wrist movements, and finger movements.
14. The apparatus of claim 12 or 13,
the first acquisition module is used for determining a second type of motion in the gesture motions based on the electromyographic information;
the first obtaining module is configured to determine the second gesture information based on the second type of action.
15. The apparatus of claim 14,
the first obtaining module is configured to determine a finger action of a finger corresponding to at least one first detection area if the electromyographic information indicates that an electromyographic signal of the at least one first detection area is greater than a predetermined threshold;
alternatively,
the first obtaining module is configured to determine wrist rotation if the electromyographic information indicates that the electromyographic signals of at least one first detection area and one second detection area are greater than the predetermined threshold;
the first detection area is an area corresponding to flexors of the fingers; the second detection area is an area corresponding to a flexor retinaculum of the wrist.
16. The apparatus of claim 15, further comprising:
the first obtaining module is configured to obtain motion information sent by the second device, where the motion information is determined by the second device collecting a motion signal of the gesture motion;
the first obtaining module is configured to determine the second type of action based on the motion information if the electromyographic signals of the first detection area and the second detection area in the electromyographic information are less than or equal to the predetermined threshold.
17. The apparatus of claim 11,
the first processing module is used for acquiring first gesture information of a first type of action in the gesture actions based on the image information and the myoelectric information; acquiring second gesture information of a second type of motion in the gesture motions based on the image information and the myoelectric information;
the first processing module is used for obtaining a gesture recognition result of the gesture action based on the first gesture information and the second gesture information;
wherein the motion amplitude of the first type of motion is larger than the motion amplitude of the second type of motion.
18. The apparatus of claim 17,
the first processing module is used for acquiring first sub-gesture information of a first type of motion in the gesture motions based on the image information; and acquiring second sub-gesture information of a first type of motion in the gesture actions based on the electromyographic information;
the first processing module is used for obtaining the first gesture information based on the first sub-gesture information with a first weight and the second sub-gesture information with a second weight; wherein the first weight is greater than the second weight;
and/or,
the first processing module is used for acquiring third sub-gesture information of a second type of motion in the gesture motions based on the image information; acquiring fourth sub-gesture information of a second type of motion in the gesture motions based on the electromyographic information;
the first processing module is configured to obtain the second gesture information based on the third sub-gesture information with a third weight and the fourth sub-gesture information with a fourth weight; wherein the third weight is less than the fourth weight.
19. A gesture recognition device applied to a second device comprises:
the second acquisition module is used for acquiring image information sent by first equipment, wherein the image information is determined by the first equipment based on the acquired image of the gesture action;
the second acquisition module is used for acquiring the electromyographic signals corresponding to the gesture actions to obtain electromyographic information;
and the second processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
20. A gesture recognition device applied to a third device comprises:
the third acquisition module is used for acquiring image information sent by the first equipment, wherein the image information is determined by the first equipment based on the acquired image of the gesture action;
the third acquisition module is used for acquiring electromyographic information sent by second equipment, wherein the electromyographic information is determined by the second equipment based on acquiring an electromyographic signal corresponding to the gesture action;
and the third processing module is used for obtaining a gesture recognition result of the gesture action based on the image information and the myoelectric information.
21. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the gesture recognition method of any one of claims 1 to 10 when executing the executable instructions.
22. A computer-readable storage medium, wherein the readable storage medium stores an executable program, wherein the executable program, when executed by a processor, implements the gesture recognition method of any one of claims 1-10.
CN202110872033.5A 2021-07-30 2021-07-30 Gesture recognition method and device, electronic equipment and storage medium Pending CN115686187A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110872033.5A CN115686187A (en) 2021-07-30 2021-07-30 Gesture recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110872033.5A CN115686187A (en) 2021-07-30 2021-07-30 Gesture recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115686187A true CN115686187A (en) 2023-02-03

Family

ID=85058347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110872033.5A Pending CN115686187A (en) 2021-07-30 2021-07-30 Gesture recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115686187A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449967A (en) * 2023-06-20 2023-07-18 浙江强脑科技有限公司 Bionic hand teaching aid, control method thereof and main control equipment

Similar Documents

Publication Publication Date Title
US10191564B2 (en) Screen control method and device
EP3933552B1 (en) Method and device for determining gaze position of user, storage medium, and electronic apparatus
US9189072B2 (en) Display device and control method thereof
CN110908513B (en) Data processing method and electronic equipment
CN112817443A (en) Display interface control method, device and equipment based on gestures and storage medium
CN112364799A (en) Gesture recognition method and device
CN111178298A (en) Human body key point detection method and device, electronic equipment and storage medium
CN112486394A (en) Information processing method and device, electronic equipment and readable storage medium
CN113253908A (en) Key function execution method, device, equipment and storage medium
CN109938722B (en) Data acquisition method and device, intelligent wearable device and storage medium
US9958946B2 (en) Switching input rails without a release command in a natural user interface
CN113918258B (en) Page scrolling processing method, device, terminal and storage medium
CN115686187A (en) Gesture recognition method and device, electronic equipment and storage medium
KR102163996B1 (en) Apparatus and Method for improving performance of non-contact type recognition function in a user device
CN112346597A (en) Touch processing method and device and electronic equipment
CN113642551A (en) Nail key point detection method and device, electronic equipment and storage medium
EP3200127B1 (en) Method and device for fingerprint recognition
CN113657173B (en) Data processing method and device for data processing
CN114063876A (en) Virtual keyboard setting method, device and storage medium
CN107392161B (en) Biological characteristic information identification method and device and computer readable storage medium
CN117555413A (en) Interaction method, interaction device, electronic equipment and storage medium
CN117555414A (en) Interaction method, interaction device, electronic equipment and storage medium
CN114764293A (en) Control method and device of wearable equipment, wearable equipment and storage medium
CN117795939A (en) Method, device, equipment, storage medium and chip for recognizing holding gesture
CN117008730A (en) Control method, electronic device, intelligent finger ring, control system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination