CN109409236A - Three-dimensional static gesture identification method and device - Google Patents

Three-dimensional static gesture identification method and device

Info

Publication number
CN109409236A
CN109409236A
Authority
CN
China
Prior art keywords
gesture
dimension
model
finger joint
image
Prior art date
Legal status
Granted
Application number
CN201811135368.3A
Other languages
Chinese (zh)
Other versions
CN109409236B (en)
Inventor
贲唯一
罗印升
宋伟
孙奔奔
Current Assignee
Jiangsu University of Technology
Original Assignee
Jiangsu University of Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu University of Technology filed Critical Jiangsu University of Technology
Priority to CN201811135368.3A priority Critical patent/CN109409236B/en
Publication of CN109409236A publication Critical patent/CN109409236A/en
Application granted granted Critical
Publication of CN109409236B publication Critical patent/CN109409236B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a three-dimensional static gesture recognition method and device. The method includes: acquiring a frontal gesture image; acquiring side gesture images; generating a three-dimensional gesture model from the frontal gesture image and the side gesture images; performing dimensionality reduction on the three-dimensional gesture model; and performing gesture recognition based on the dimensionality-reduced three-dimensional gesture model. The invention not only improves the accuracy and success rate of gesture recognition, but also reduces the computation time and power consumption of gesture recognition.

Description

Three-dimensional static gesture identification method and device
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a three-dimensional static gesture recognition method and a three-dimensional static gesture recognition device.
Background art
With the widespread use of computers, human-computer interaction has become an important part of daily life. The ultimate goal of human-computer interaction is natural communication between people and machines. Body language such as gestures, body posture and facial expressions is a common means of human communication, and gestures are convenient, vivid, expressive and intuitive in human-computer interaction, so gesture recognition research meets the needs of technological development. At present, gesture recognition research mainly follows two directions: two-dimensional gesture recognition and three-dimensional gesture recognition. As gesture recognition technology has matured, it has gradually been adopted across many industries, greatly enhancing the convenience and practicality of human-computer interaction.
However, the human hand is diverse, ambiguous, variable and differs across time and space, the background environment is complex, changeable and unpredictable, and the hand has many degrees of freedom and is highly flexible, which makes vision-based gesture interaction a challenging, multidisciplinary research topic. Research on vision-based gesture interaction techniques and methods not only has important theoretical significance in subjects such as artificial intelligence, pattern recognition and machine learning, but also has very wide application in intelligent study, work and daily life. Vision-based gesture interaction is an indispensable key technology for realizing a new generation of human-computer interaction. How to improve the accuracy and success rate of gesture recognition while reducing its computation time and power consumption has therefore become a problem to be solved.
Summary of the invention
To solve the problem of how to improve the accuracy and success rate of gesture recognition while reducing its computation time and power consumption, the present invention provides a three-dimensional static gesture recognition method and device.
The technical solution adopted by the invention is as follows:
A three-dimensional static gesture recognition method, comprising: acquiring a frontal gesture image; acquiring side gesture images; generating a three-dimensional gesture model from the frontal gesture image and the side gesture images; performing dimensionality reduction on the three-dimensional gesture model; and performing gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
The frontal gesture image is acquired by a first image acquisition unit arranged facing the front of the target hand, and multiple side gesture images are acquired from multiple different angles by a second image acquisition unit that moves along a guide rail arranged at the side of the target hand.
The guide rail is an arc-shaped guide rail, and the plane of the arc-shaped guide rail is perpendicular to the plane of the palm of the target hand.
The first image acquisition unit acquires the frontal gesture image once, while the second image acquisition unit moves along the arc-shaped guide rail from its first end to its second end, or from the second end to the first end, acquiring the multiple side gesture images, thereby completing a synchronized acquisition of the frontal gesture image and the side gesture images.
Generating the three-dimensional gesture model from the frontal gesture image and the side gesture images specifically includes: obtaining the finger joint parameters of the three-dimensional gesture model from the frontal gesture image with reference to the constraint relationships of a three-dimensional hand skeleton model; calculating the hand area in each side gesture image and selecting the side image with the smallest hand area as the selected side gesture image; obtaining, in combination with the finger joint parameters, the angle parameters between the end finger joints of the three-dimensional gesture model and the image plane from the selected side gesture image; and generating the three-dimensional gesture model by combining the finger joint parameters with the angle parameters of the end finger joints relative to the image plane.
The finger joint parameters of the three-dimensional gesture model include the length of each finger joint and the angles between the root, middle and end joints of each finger and the image plane.
Performing dimensionality reduction on the three-dimensional gesture model specifically includes: reducing the three-dimensional gesture model parameter vector from 27 dimensions to 12 dimensions.
Performing gesture recognition based on the dimensionality-reduced three-dimensional gesture model specifically includes: taking the dimensionality-reduced three-dimensional gesture model as input and obtaining a three-dimensional gesture model feature vector with a pre-established gesture recognition deep learning model; quantizing the three-dimensional gesture model feature vector to obtain a single-gesture discrete feature vector, and training on the single-gesture discrete feature vectors to obtain static gesture classifier parameters; and using the trained static gesture classifier to classify the single-gesture discrete feature vector and output the gesture recognition result.
A three-dimensional static gesture recognition device, comprising: a first image acquisition unit for acquiring a frontal gesture image; a second image acquisition unit for acquiring side gesture images; and a terminal connected to the first image acquisition unit and the second image acquisition unit through communication interfaces, the terminal being configured to generate a three-dimensional gesture model from the frontal gesture image and the side gesture images, perform dimensionality reduction on the three-dimensional gesture model, and perform gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
The communication interface is a USB (Universal Serial Bus) data transmission interface or an API (Application Programming Interface) interface.
Beneficial effects of the present invention:
By acquiring a frontal gesture image and side gesture images, generating a three-dimensional gesture model, performing dimensionality reduction on the model, and performing gesture recognition based on the dimensionality-reduced model, the present invention not only improves the accuracy and success rate of gesture recognition but also reduces the computation time and power consumption of gesture recognition.
Brief description of the drawings
Fig. 1 is a flowchart of the three-dimensional static gesture recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic view of the placement of the image acquisition units according to an embodiment of the present invention;
Fig. 3 is a schematic view of acquiring side gesture images at interval angles according to an embodiment of the present invention;
Fig. 4 is a schematic view of the movement of the image acquisition unit along the guide rail according to an embodiment of the present invention;
Fig. 5 is a flowchart of the three-dimensional static gesture recognition method according to a specific embodiment of the present invention;
Fig. 6 is a block diagram of the three-dimensional static gesture recognition device according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the three-dimensional static gesture recognition method of the embodiment of the present invention includes the following steps:
S1: acquire a frontal gesture image.
In one embodiment of the present invention, as shown in Fig. 2, the frontal gesture image can be acquired by a first image acquisition unit 1 arranged facing the front of the target hand.
S2: acquire side gesture images.
In one embodiment of the present invention, as shown in Fig. 2, multiple side gesture images can be acquired from multiple different angles by a second image acquisition unit 2 that moves along a guide rail 3 arranged at the side of the target hand. The guide rail 3 is an arc-shaped guide rail, and the plane of the arc-shaped guide rail is perpendicular to the plane of the palm of the target hand. It should be noted that the positional relationship among the target hand, the first image acquisition unit 1 and the guide rail 3 in Fig. 2 is only illustrative; in a specific embodiment, the first image acquisition unit 1 faces the plane of the palm of the target hand, and the guide rail 3 is perpendicular to the plane of the palm of the target hand.
As shown in Fig. 3, when the second image acquisition unit 2 moves along the guide rail 3, side gesture images can be acquired at preset interval angles. N is the number of acquisition angles, where N is an integer and N > 1. For a given arc span of the guide rail, a larger N means smaller interval angles and more side gesture images, while a smaller N means larger interval angles and fewer side gesture images. M denotes an arbitrary acquisition angle index, where M is an integer and 1 < M < N.
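The patent does not fix the arc span of the guide rail or a particular value of N. Purely as an illustration (the function name and the arc_span_deg parameter below are assumptions, not part of the patent), the following Python sketch shows how evenly spaced acquisition angles along the rail could be computed for a chosen N.

    import numpy as np

    def acquisition_angles(arc_span_deg: float, n_angles: int) -> np.ndarray:
        """Evenly spaced acquisition angles along an arc-shaped guide rail.

        arc_span_deg: assumed total angular span of the rail, in degrees.
        n_angles:     the number N of acquisition positions (integer, N > 1).
        Returns one angle per acquisition position, measured from the first end
        of the rail; the interval angle is arc_span_deg / (n_angles - 1).
        """
        if n_angles <= 1:
            raise ValueError("N must be an integer greater than 1")
        return np.linspace(0.0, arc_span_deg, n_angles)

    # Example: a 180-degree rail sampled at N = 7 positions gives 30-degree intervals.
    print(acquisition_angles(180.0, 7))   # [  0.  30.  60.  90. 120. 150. 180.]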
In one embodiment of the present invention, a frontal gesture image and the side gesture images can be acquired synchronously each time gesture recognition is performed. Specifically, the first image acquisition unit acquires one frontal gesture image while the second image acquisition unit moves along the arc-shaped guide rail from its first end to its second end, or from the second end to the first end, acquiring multiple side gesture images, thereby completing the synchronized acquisition of the frontal and side gesture images. After completing the multi-angle acquisition of side gesture images, the second image acquisition unit stops; at the next image acquisition it moves along the guide rail in the opposite direction. As shown in Fig. 4, during the first image acquisition the second image acquisition unit 2 moves clockwise along the guide rail 3 from the first end to the second end; during the second acquisition it moves counterclockwise along the guide rail 3 from the second end back to the first end; during the third acquisition it again moves clockwise along the guide rail 3, and so on.
S3: generate a three-dimensional gesture model from the frontal gesture image and the side gesture images.
S4: perform dimensionality reduction on the three-dimensional gesture model.
Specifically, step S3 includes: obtaining the finger joint parameters of the three-dimensional gesture model from the frontal gesture image with reference to the constraint relationships of a three-dimensional hand skeleton model; calculating the hand area in each side gesture image and selecting the side image with the smallest hand area as the selected side gesture image; obtaining, in combination with the finger joint parameters, the angle parameters between the end finger joints of the three-dimensional gesture model and the image plane from the selected side gesture image; and generating the three-dimensional gesture model by combining the finger joint parameters with the angle parameters of the end finger joints relative to the image plane.
The side gesture image with the smallest hand area is the most accurate side view, since a true edge-on view of the palm yields the smallest projected hand area. Acquiring multiple side gesture images and selecting the most accurate one in this way improves the accuracy of the side-view acquisition and thus the accuracy of subsequent gesture recognition.
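As an illustration only (the patent does not specify the segmentation method, and the helper names below are hypothetical), the following Python sketch assumes the side gesture images have already been segmented into binary hand masks and simply selects the one with the smallest hand area.

    import numpy as np

    def select_side_image(hand_masks: list) -> int:
        """Return the index of the side image whose segmented hand area is smallest.

        hand_masks: binary masks (1 = hand pixel, 0 = background), one per side image.
        """
        areas = [int(mask.sum()) for mask in hand_masks]   # hand area = number of hand pixels
        return int(np.argmin(areas))

    def disk_mask(radius: int, size: int = 64) -> np.ndarray:
        """Toy stand-in for a segmented hand silhouette: a filled disk."""
        yy, xx = np.mgrid[:size, :size]
        return ((yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2).astype(np.uint8)

    masks = [disk_mask(r) for r in (20, 12, 16)]   # three dummy "hand" silhouettes
    print(select_side_image(masks))                # -> 1, the smallest silhouette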
In one embodiment of the present invention, the finger joint parameters of the three-dimensional gesture model include the length of each finger joint and the angles between the root, middle and end joints of each finger and the image plane.
Further, hand segmentation can be performed on the frontal gesture image and the side gesture images to obtain normalized hand images.
The length and width of each finger joint can be obtained from the frontal gesture image acquired by the first image acquisition unit. Each finger is then represented by two angles: the angle between the root joint and the image plane and the angle between the middle joint and the image plane; the angle between the end joint and the image plane is derived from the linear constraint relationship between the root/middle joint angles and the middle/end joint angles. The human hand skeleton model has 27 degrees of freedom. By further applying the static constraints on the motion range of each finger joint angle and the dynamic constraints between the joints of a moving finger, the three-dimensional gesture model parameter vector is reduced from 27 dimensions to 12 dimensions: the degrees of freedom of the thumb are reduced from 5 to 2, those of the middle finger from 4 to 1, and those of each of the other three fingers from 4 to 2, while the rotation of the hand accounts for 3 degrees of freedom, so the final three-dimensional gesture model parameter vector has 12 dimensions.
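The patent states only that the end-joint angle follows from a linear constraint between the root/middle and middle/end joint angles, and that the retained degrees of freedom are 2 for the thumb, 1 for the middle finger, 2 for each of the other three fingers and 3 for the hand rotation. The Python sketch below is illustrative only: the 2/3 coefficient is the commonly cited hand-model approximation used here as a stand-in for the unspecified linear constraint, and all function names are assumptions.

    import numpy as np

    def end_joint_angle(middle_joint_angle: float) -> float:
        # Assumed stand-in for the patent's linear constraint: the end (distal) joint
        # angle is taken as roughly 2/3 of the middle joint angle.
        return (2.0 / 3.0) * middle_joint_angle

    def build_parameter_vector(thumb: np.ndarray,          # 2 retained thumb angles
                               middle_finger: float,       # 1 retained middle-finger angle
                               other_fingers: np.ndarray,  # 3 fingers x 2 retained angles
                               hand_rotation: np.ndarray   # 3 rotation angles of the hand
                               ) -> np.ndarray:
        """Assemble the 12-dimensional parameter vector: 2 + 1 + 3*2 + 3 = 12."""
        vec = np.concatenate([thumb, [middle_finger], other_fingers.ravel(), hand_rotation])
        assert vec.shape == (12,)
        return vec

    params = build_parameter_vector(
        thumb=np.array([30.0, 15.0]),
        middle_finger=40.0,
        other_fingers=np.array([[35.0, 20.0], [25.0, 10.0], [45.0, 30.0]]),
        hand_rotation=np.array([0.0, 10.0, -5.0]),
    )
    print(params.shape)              # (12,)
    print(end_joint_angle(40.0))     # about 26.7 degrees for a 40-degree middle joint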
S5: perform gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
Specifically, the dimensionality-reduced three-dimensional gesture model can be taken as input, and a three-dimensional gesture model feature vector is obtained with a pre-established gesture recognition deep learning model, where the input of the deep learning model is the 12-dimensional three-dimensional gesture model. The feature vector is then quantized to obtain a single-gesture discrete feature vector, and the discrete feature vectors are used for training to obtain the static gesture classifier parameters. Finally, the trained static gesture classifier is used to classify the single-gesture discrete feature vector and output the gesture recognition result.
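The patent does not disclose the architecture of the gesture recognition deep learning model, the quantization scheme, or the type of static gesture classifier. The following Python sketch is therefore only a minimal stand-in for that pipeline: a fixed random layer plays the role of the pre-established feature extractor, quantization is simple binning, and a nearest-centroid rule plays the role of the classifier; every name here is an assumption rather than the patent's implementation.

    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for the pre-established deep learning model: one fixed random hidden
    # layer mapping the 12-D parameter vector to a 16-D feature vector.
    W1, b1 = rng.normal(size=(12, 32)), np.zeros(32)
    W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)

    def extract_features(params_12d: np.ndarray) -> np.ndarray:
        h = np.tanh(params_12d @ W1 + b1)
        return h @ W2 + b2

    def quantize(features: np.ndarray, n_levels: int = 8) -> np.ndarray:
        """Quantize each feature into one of n_levels discrete bins (assumed scheme)."""
        lo, hi = features.min(), features.max()
        return np.clip(((features - lo) / (hi - lo + 1e-9) * n_levels).astype(int), 0, n_levels - 1)

    def train_classifier(discrete_vectors: np.ndarray, labels: np.ndarray) -> dict:
        """Nearest-centroid classifier over discrete feature vectors (assumed classifier)."""
        return {c: discrete_vectors[labels == c].mean(axis=0) for c in np.unique(labels)}

    def classify(centroids: dict, discrete_vector: np.ndarray) -> int:
        return int(min(centroids, key=lambda c: np.linalg.norm(discrete_vector - centroids[c])))

    # Toy usage: two gesture classes, ten 12-D parameter vectors each.
    X = rng.normal(size=(20, 12)) + np.repeat([[0.0], [3.0]], 10, axis=0)
    y = np.repeat([0, 1], 10)
    D = np.array([quantize(extract_features(x)) for x in X])
    centroids = train_classifier(D, y)
    print("predicted class:", classify(centroids, quantize(extract_features(X[0]))))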
In one embodiment of the present invention, the above steps S3 to S5 can be executed by a terminal.
In a specific embodiment of the present invention, as shown in Fig. 5, the three-dimensional static gesture recognition method includes the following steps:
S101: the first image acquisition unit acquires a frontal gesture image.
S102: the second image acquisition unit moves in one direction along the guide rail and acquires side gesture images from multiple different angles at the set interval angle.
S103: the terminal receives the images acquired by the first image acquisition unit and the second image acquisition unit.
S104: obtain the finger joint parameters of the three-dimensional gesture model from the frontal gesture image with reference to the constraint relationships of the three-dimensional hand skeleton model.
S105: select the side gesture image with the smallest hand area as the selected side gesture image.
S106: in combination with the finger joint parameters, obtain the angle parameters between the end finger joints of the three-dimensional gesture model and the image plane from the selected side gesture image.
S107: generate the three-dimensional gesture model by combining the finger joint parameters with the angle parameters of the end finger joints relative to the image plane.
S108: reduce the three-dimensional gesture model parameter vector from 27 dimensions to 12 dimensions.
S109: perform gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
In conclusion three-dimensional static gesture identification method according to an embodiment of the present invention, by obtaining gesture direct picture With gesture side image, and generate three-dimension gesture model, dimension-reduction treatment then carried out to three-dimension gesture model, and according to dimensionality reduction after Three-dimension gesture model carry out gesture identification, the precision and success rate of gesture identification can not only be improved, additionally it is possible to reduce gesture The operation time of identification and power consumption.
The three-dimensional static gesture identification method of corresponding above-described embodiment, the present invention also propose a kind of three-dimensional static gesture identification Device.
As shown in fig. 6, the three-dimensional static gesture identifying device of the embodiment of the present invention, including the first image acquisition units 1, Two image acquisition units 2 and terminal 4.Wherein, the first image acquisition units 1 are for acquiring gesture direct picture;Second Image acquisition units 2 are for acquiring gesture side image;Terminal 4 respectively with the first image acquisition units 1 and the second figure Picture acquisition unit 2 is connected by communication interface, and terminal 4 is used to be generated according to gesture direct picture and gesture side image Three-dimension gesture model, and dimension-reduction treatment is carried out to three-dimension gesture model, and hand is carried out according to the three-dimension gesture model after dimensionality reduction Gesture identification.
In one embodiment of the invention, communication interface is USB data transfer interface or api interface.
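Structurally, the device is two image acquisition units feeding a terminal over a USB or API connection. The Python sketch below models only that composition; the class and field names are hypothetical, and the actual image transfer and recognition logic are left abstract.

    from dataclasses import dataclass
    from typing import Any, Callable, List

    Image = Any  # placeholder for whatever image type the acquisition units deliver

    @dataclass
    class GestureRecognitionDevice:
        capture_frontal: Callable[[], Image]             # first image acquisition unit
        capture_side_views: Callable[[], List[Image]]    # second image acquisition unit on the rail
        recognize: Callable[[Image, List[Image]], str]   # terminal: model building + recognition

        def run_once(self) -> str:
            frontal = self.capture_frontal()     # images reach the terminal over USB or an API
            sides = self.capture_side_views()
            return self.recognize(frontal, sides)

    # Toy wiring with stub capture functions and a trivial recognizer.
    device = GestureRecognitionDevice(
        capture_frontal=lambda: "frontal_frame",
        capture_side_views=lambda: ["side_1", "side_2", "side_3"],
        recognize=lambda f, s: f"gesture recognized from {f} and {len(s)} side views",
    )
    print(device.run_once())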
In one embodiment of the present invention, the terminal 4 includes a main controller and a human-computer interaction module connected to the main controller. The human-computer interaction module can be used to set the value of N described above, view the gesture recognition results, and so on.
For more specific embodiments of the three-dimensional static gesture recognition device, reference can be made to the above embodiments of the three-dimensional static gesture recognition method, which are not repeated here.
The three-dimensional static gesture recognition device according to the embodiments of the present invention acquires a frontal gesture image with the first image acquisition unit and side gesture images with the second image acquisition unit, generates a three-dimensional gesture model from the frontal and side gesture images at the terminal, performs dimensionality reduction on the model, and performs gesture recognition based on the dimensionality-reduced model; it not only improves the accuracy and success rate of gesture recognition but also reduces the computation time and power consumption of gesture recognition.
In the description of the present invention, "plurality" means two or more, unless otherwise specifically defined.
In the present invention, unless otherwise expressly specified and limited, terms such as "mounted", "connected", "coupled" and "fixed" shall be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the present invention, unless otherwise expressly specified and limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium. Moreover, a first feature being "on", "above" or "over" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples, and the features of different embodiments or examples, described in this specification, provided that they do not contradict each other.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principles and spirit of the present invention, the scope of which is defined by the appended claims and their equivalents.

Claims (10)

1. A three-dimensional static gesture recognition method, characterized by comprising:
acquiring a frontal gesture image;
acquiring side gesture images;
generating a three-dimensional gesture model from the frontal gesture image and the side gesture images;
performing dimensionality reduction on the three-dimensional gesture model; and
performing gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
2. The three-dimensional static gesture recognition method according to claim 1, characterized in that the frontal gesture image is acquired by a first image acquisition unit arranged facing the front of the target hand, and multiple side gesture images are acquired from multiple different angles by a second image acquisition unit that moves along a guide rail arranged at the side of the target hand.
3. The three-dimensional static gesture recognition method according to claim 2, characterized in that the guide rail is an arc-shaped guide rail, and the plane of the arc-shaped guide rail is perpendicular to the plane of the palm of the target hand.
4. The three-dimensional static gesture recognition method according to claim 3, characterized in that the first image acquisition unit acquires the frontal gesture image once, and the second image acquisition unit moves along the arc-shaped guide rail from a first end to a second end, or from the second end to the first end, while acquiring the multiple side gesture images, thereby completing a synchronized acquisition of the frontal gesture image and the side gesture images.
5. The three-dimensional static gesture recognition method according to any one of claims 1-3, characterized in that generating the three-dimensional gesture model from the frontal gesture image and the side gesture images specifically comprises:
obtaining finger joint parameters of the three-dimensional gesture model from the frontal gesture image with reference to the constraint relationships of a three-dimensional hand skeleton model;
calculating the hand area in each side gesture image, and selecting the side gesture image with the smallest hand area as the selected side gesture image;
obtaining, in combination with the finger joint parameters of the three-dimensional gesture model, angle parameters between the end finger joints of the three-dimensional gesture model and the image plane from the selected side gesture image; and
generating the three-dimensional gesture model by combining the finger joint parameters of the three-dimensional gesture model with the angle parameters of the end finger joints relative to the image plane.
6. The three-dimensional static gesture recognition method according to claim 5, characterized in that the finger joint parameters of the three-dimensional gesture model include the length of each finger joint and the angles between the root, middle and end joints of each finger and the image plane.
7. The three-dimensional static gesture recognition method according to claim 6, characterized in that performing dimensionality reduction on the three-dimensional gesture model specifically comprises:
reducing the three-dimensional gesture model parameter vector from 27 dimensions to 12 dimensions.
8. The three-dimensional static gesture recognition method according to claim 7, characterized in that performing gesture recognition based on the dimensionality-reduced three-dimensional gesture model specifically comprises:
taking the dimensionality-reduced three-dimensional gesture model as input and obtaining a three-dimensional gesture model feature vector with a pre-established gesture recognition deep learning model;
quantizing the three-dimensional gesture model feature vector to obtain a single-gesture discrete feature vector, and training on the single-gesture discrete feature vector to obtain static gesture classifier parameters; and
using the trained static gesture classifier to classify the single-gesture discrete feature vector and output the gesture recognition result.
9. A three-dimensional static gesture recognition device, characterized by comprising:
a first image acquisition unit for acquiring a frontal gesture image;
a second image acquisition unit for acquiring side gesture images; and
a terminal connected to the first image acquisition unit and the second image acquisition unit through communication interfaces, the terminal being configured to generate a three-dimensional gesture model from the frontal gesture image and the side gesture images, perform dimensionality reduction on the three-dimensional gesture model, and perform gesture recognition based on the dimensionality-reduced three-dimensional gesture model.
10. The three-dimensional static gesture recognition device according to claim 9, characterized in that the communication interface is a USB data transmission interface or an API interface.
CN201811135368.3A 2018-09-28 2018-09-28 Three-dimensional static gesture recognition method and device Active CN109409236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811135368.3A CN109409236B (en) 2018-09-28 2018-09-28 Three-dimensional static gesture recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811135368.3A CN109409236B (en) 2018-09-28 2018-09-28 Three-dimensional static gesture recognition method and device

Publications (2)

Publication Number Publication Date
CN109409236A (en) 2019-03-01
CN109409236B CN109409236B (en) 2020-12-08

Family

ID=65465408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811135368.3A Active CN109409236B (en) 2018-09-28 2018-09-28 Three-dimensional static gesture recognition method and device

Country Status (1)

Country Link
CN (1) CN109409236B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101383055A (en) * 2008-09-18 2009-03-11 北京中星微电子有限公司 Three-dimensional human face constructing method and system
US20130088426A1 (en) * 2010-06-15 2013-04-11 Osamu Shigeta Gesture recognition device, gesture recognition method, and program
CN102034097A (en) * 2010-12-21 2011-04-27 中国科学院半导体研究所 Method for recognizing human face by comprehensively utilizing front and lateral images
CN107688391A (en) * 2017-09-01 2018-02-13 广州大学 A kind of gesture identification method and device based on monocular vision

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021000327A1 (en) * 2019-07-04 2021-01-07 深圳市瑞立视多媒体科技有限公司 Hand model generation method, apparatus, terminal device, and hand motion capture method
CN111123986A (en) * 2019-12-25 2020-05-08 四川云盾光电科技有限公司 Control device for controlling two-degree-of-freedom turntable based on gestures

Also Published As

Publication number Publication date
CN109409236B (en) 2020-12-08

Similar Documents

Publication Publication Date Title
Kuch et al. Vision based hand modeling and tracking for virtual teleconferencing and telecollaboration
Erol et al. Vision-based hand pose estimation: A review
Xu et al. A review: Point cloud-based 3d human joints estimation
CN113362452B (en) Hand posture three-dimensional reconstruction method and device and storage medium
CN104035557B (en) Kinect action identification method based on joint activeness
CN101833788A (en) A 3D human body modeling method using hand-drawn sketches
CN105045496B (en) A kind of gesture interaction method based on joint point transformation
WO2021000327A1 (en) Hand model generation method, apparatus, terminal device, and hand motion capture method
CN110490959A (en) Three dimensional image processing method and device, virtual image generation method and electronic equipment
CN109960403A (en) Visual presentation and interaction methods for medical images in an immersive environment
CN109409236A (en) Three-dimensional static gesture identification method and device
CN107633551A (en) A display method and device for a virtual keyboard
CN110866468A (en) A gesture recognition system and method based on passive RFID
He et al. A New Kinect‐Based Posture Recognition Method in Physical Sports Training Based on Urban Data
CN112183316B (en) Athlete human body posture measuring method
CN208569551U (en) A learnable data acquisition system based on gesture recognition gloves
Jabalameli et al. From single 2D depth image to gripper 6D pose estimation: A fast and robust algorithm for grabbing objects in cluttered scenes
Prasad et al. A wireless dynamic gesture user interface for HCI using hand data glove
Wang et al. Design of a four-axis robot arm system based on machine vision
CN113569775B (en) Mobile terminal real-time 3D human motion capturing method and system based on monocular RGB input, electronic equipment and storage medium
CN109032355A (en) Various gestures correspond to the flexible mapping interactive algorithm of same interactive command
Yan et al. AGRMTS: A virtual aircraft maintenance training system using gesture recognition based on PSO‐BPNN model
CN110517338A (en) A method of reusable maneuver library is constructed based on two sufficient role&#39;s substance features
Chien et al. Robotic calligraphy system using delta-like robot manipulator and virtual brush model
Ugolotti et al. Differential evolution based human body pose estimation from point clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant