CN104298358B - Dynamic 3D gesture recognition method based on joint spatial position data - Google Patents

Dynamic 3D gesture recognition method based on joint spatial position data

Info

Publication number
CN104298358B
CN104298358B
Authority
CN
China
Prior art keywords
gesture
point
position data
human body
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410589578.5A
Other languages
Chinese (zh)
Other versions
CN104298358A (en)
Inventor
曾子辕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Conductor (Xiamen) Technology Co Ltd
Original Assignee
Conductor (Xiamen) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Conductor (Xiamen) Technology Co Ltd
Priority to CN201410589578.5A
Publication of CN104298358A
Application granted
Publication of CN104298358B
Active legal-status Current
Anticipated expiration legal-status


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a dynamic 3D gesture recognition method based on joint spatial position data, characterized by comprising the following steps: human skeleton position information is captured and identified by a motion-sensing camera connected to a computer; using the spatial position data of each joint reference point of the human body and the camera's own attitude information, the skeleton position information is rectified (rotated to an upright, front-facing orientation) and normalized; different joint reference points are selected according to the gesture motion concerned; for the motion-pattern characteristics of each gesture, a judgment is made according to an ordered sequence of criteria and the timing requirements between criteria; when the most recent segment of continuous, reliable spatial position data of each joint reference point in the buffer satisfies the gesture's criterion order and the timing requirements between criteria, the human body is judged to have made that gesture. The present invention performs dynamic 3D gesture recognition based on a motion-sensing camera; the method is simple, practical, and accurate.

Description

Dynamic 3D gesture recognition method based on joint spatial position data
Technical field
The present invention relates to the field of 3D gesture recognition, and in particular to a dynamic 3D gesture recognition method based on joint spatial position data.
Background technology
With the continued spread of touch-screen technology, users have become accustomed to and familiar with interacting with machines. Human-computer interaction has now stepped up to a higher level and entered the era of gesture recognition, although this transition is not yet complete. Gesture recognition first appeared in the entertainment and game markets, but what influence will this technology have on our daily lives? Imagine lying on a sofa and controlling the nearby lights and television with a mere wave of the hand, or a car automatically detecting whether a pedestrian is present. As gesture recognition deepens its support for human-computer interaction, these and other functions will soon be realized. For a long time, gesture recognition was studied using 2D vision, but with the appearance of 3D sensor technology, 3D gesture recognition will see further development.
Summary of the invention
It is an object of the present invention to provide a dynamic 3D gesture recognition method based on joint spatial position data.
The technical scheme of the present invention is as follows: a dynamic 3D gesture recognition method based on joint spatial position data, characterized by comprising the following steps:
S01: Human skeleton position information is captured and identified by a motion-sensing camera connected to a computer; the captured and identified skeleton position information is stored in a buffer together with its acquisition time; the skeleton position information comprises the spatial position data of each joint reference point of the human body;
S02: The newly stored skeleton position information is preprocessed on the basis of the skeleton position information already in the buffer;
S03: Using the spatial position data of each joint reference point and the camera's own attitude information, the skeleton position information is rectified to an upright, front-facing frame and normalized;
S04: Different joint reference points are selected according to the gesture motion concerned;
S05: The most recent segment of continuous, reliable spatial position data of each joint reference point is selected from the buffer; for each target gesture, the following criteria are set: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, and target-point movement direction; for the motion-pattern characteristics of each gesture, a judgment is then made according to the criterion order and the timing requirements between criteria;
S06: When the most recent segment of continuous, reliable spatial position data of each joint reference point in the buffer satisfies the gesture's criterion order and the timing requirements between criteria, the human body is judged to have made that gesture (see the criterion-matching sketch after this list).
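To make steps S05 and S06 concrete, the following is a minimal sketch of matching a buffered window of timestamped joint positions against an ordered list of criteria with timing requirements between consecutive criteria. The Criterion structure, the frame layout, and all names here are illustrative assumptions, not data structures defined by the patent.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    # One buffered frame: acquisition time plus joint name -> (x, y, z).
    Frame = Tuple[float, Dict[str, Tuple[float, float, float]]]

    @dataclass
    class Criterion:
        """One step of a gesture's criterion order (illustrative structure)."""
        predicate: Callable[[dict], bool]  # tests one frame's joint positions
        max_gap: float                     # max seconds allowed since the previous step

    def matches_gesture(buffer: List[Frame], order: List[Criterion]) -> bool:
        """Scan the newest continuous reliable segment for the criterion order,
        enforcing the timing requirement between consecutive criteria."""
        step, last_t = 0, 0.0
        for t, joints in buffer:
            if step > 0 and t - last_t > order[step].max_gap:
                step = 0                    # timing requirement violated: restart
            if order[step].predicate(joints):
                step, last_t = step + 1, t
                if step == len(order):
                    return True             # whole criterion order satisfied in time
        return False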
In an embodiment of the present invention, a step S07 follows step S06: from the skeleton position information stored in this segment of the buffer, the spatial position data of each joint reference point is used to compute the action origin point of each frame; the motion path of the target joint point is obtained for each gesture motion; and, based on the characteristics of the gesture, the intended direction of the action and the confidence range of that direction are derived from the motion path and the action origin point.
In an embodiment of the present invention, in step S07, for a right-hand wave: the leftmost and rightmost positions reached by the right hand in the left and right zones and the mean spatial position of the right elbow during that period are selected; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the mean spatial position of the shoulder midpoint is taken as the action origin point; and the confidence range of the intent is the sector swept out between the action origin point and the leftmost and rightmost positions of the right hand.
In an embodiment of the present invention, the motion-sensing camera is a Microsoft Kinect or an ASUS Xtion.
In an embodiment of the present invention, the preprocessing specifically comprises: judging whether the newly stored skeleton position information is reliable from the variance of the spatial distribution of adjacent frames of skeleton position information, and applying a moving average to the spatial position data of each joint reference point of the reliable data.
In an embodiment of the present invention, each joint reference point of the human body includes: the shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, and head.
In an embodiment of the present invention, in step S04, for a right-hand wave, the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
The present invention performs dynamic 3D gesture recognition based on a motion-sensing camera; the method is simple, practical, and accurate.
Brief description of the drawings
Fig. 1 is a flow chart of the method according to one embodiment of the invention.
Fig. 2 is a flow chart of the method according to another embodiment of the invention.
Detailed description of the embodiments
To make the above features and advantages of the present invention more readily apparent, specific embodiments are described in detail below with reference to the accompanying drawings; the present invention, however, is not limited thereto.
As shown in Fig. 1, the present invention provides a dynamic 3D gesture recognition method based on joint spatial position data, comprising the following steps:
S01: Human skeleton position information is captured and identified by a motion-sensing camera connected to a computer; the captured and identified skeleton position information is stored in the computer's buffer together with its acquisition time; the skeleton position information comprises the spatial position data of each joint reference point of the human body;
S02: The newly stored skeleton position information is preprocessed on the basis of the skeleton position information already in the computer's buffer;
S03: Using the spatial position data of each joint reference point and the camera's own attitude information (the camera's attitude, such as "upright", "horizontal", or "sideways", as measured by its gravity sensor), the skeleton position information is rectified to an upright, front-facing frame and normalized; a sketch of this step follows the list below;
S04: Different joint reference points are selected according to the gesture motion concerned;
S05: The most recent segment of continuous, reliable spatial position data of each joint reference point is selected from the buffer; for each target gesture, the following criteria and permissible errors are set: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, target-point movement direction, and the like; for the motion-pattern characteristics of each gesture, a judgment is then made according to the criterion order and the timing requirements between criteria;
S06: When the most recent segment of continuous, reliable spatial position data of each joint reference point in the buffer satisfies the gesture's criterion order and the timing requirements between criteria, the human body is judged to have made that gesture.
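Returning to step S03, the following is a minimal sketch of the rectification and normalization, under the assumption that the gravity sensor's attitude reading is available as roll and pitch angles, that joints are keyed by illustrative names such as "shoulder_mid", and that the rotation-axis convention shown is the one in use; none of these specifics are fixed by the patent.

    import numpy as np

    def rectify_and_normalize(joints, camera_roll_deg, camera_pitch_deg):
        """Sketch of step S03: rotate every joint into an upright frame using
        the camera's attitude, then translate and scale so that skeletons of
        different users and distances become comparable (conventions assumed)."""
        roll, pitch = np.radians([camera_roll_deg, camera_pitch_deg])
        # Rotation about x undoing the camera's pitch (tilt up/down).
        rx = np.array([[1, 0, 0],
                       [0, np.cos(-pitch), -np.sin(-pitch)],
                       [0, np.sin(-pitch),  np.cos(-pitch)]])
        # Rotation about z undoing the camera's roll (sideways lean).
        rz = np.array([[np.cos(-roll), -np.sin(-roll), 0],
                       [np.sin(-roll),  np.cos(-roll), 0],
                       [0, 0, 1]])
        upright = {name: rx @ rz @ np.asarray(p) for name, p in joints.items()}
        # Normalize: origin at the shoulder midpoint, unit = shoulder width.
        origin = upright["shoulder_mid"]
        scale = np.linalg.norm(upright["right_shoulder"] - upright["left_shoulder"])
        return {name: (p - origin) / scale for name, p in upright.items()}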
As shown in Fig. 2, in another embodiment of the present invention, a step S07 follows step S06: from the skeleton position information stored in this segment of the buffer, the spatial position data of each joint reference point is used to compute the action origin point of each frame; the motion path of the target joint point is obtained for each gesture motion; and, based on the characteristics of the gesture, the intended direction of the action and the confidence range of that direction are derived from the motion path and the action origin point.
In step S07, for a right-hand wave: the leftmost and rightmost positions reached by the right hand in the left and right zones and the mean spatial position of the right elbow during that period are selected; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the mean spatial position of the shoulder midpoint is taken as the action origin point; and the confidence range of the intent is the sector swept out between the action origin point and the leftmost and rightmost positions of the right hand. A sketch of this computation follows.
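A minimal sketch of the intended-direction computation just described, assuming the three chosen points are given as 3-vectors: the normal of the plane they span is obtained with a cross product. The function name and argument names are illustrative.

    import numpy as np

    def wave_intent_direction(leftmost, rightmost, elbow_mean):
        """Unit normal of the plane through the right hand's leftmost and
        rightmost extreme positions and the right elbow's mean position,
        used as the intended direction of the wave action."""
        a, b, c = (np.asarray(p, dtype=float) for p in (leftmost, rightmost, elbow_mean))
        normal = np.cross(b - a, c - a)
        return normal / np.linalg.norm(normal)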
The motion-sensing camera is a Microsoft Kinect or an ASUS Xtion motion-sensing camera.
The preprocessing specifically comprises: judging whether the newly stored skeleton position information is reliable from the variance of the spatial distribution of adjacent frames of skeleton position information, and applying a moving average to the spatial position data of each joint reference point of the reliable data. A sketch of this step follows.
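As a sketch of this preprocessing, assuming the last few frames for one skeleton are stacked in a (frames, joints, 3) array; the variance threshold and smoothing window are illustrative values, not ones specified by the patent.

    import numpy as np

    def preprocess(window, var_threshold=0.01, smooth_n=3):
        """window: (frames, joints, 3) array of recent joint positions.
        Reject the segment when any joint's positional variance across the
        frames is implausibly large, otherwise return the moving average of
        the newest smooth_n frames (threshold and window size are assumed)."""
        if np.var(window, axis=0).max() > var_threshold:
            return None                          # unreliable data: discard
        return window[-smooth_n:].mean(axis=0)   # smoothed joint positions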
Each joint reference point of the human body includes: the shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, head, and the like.
In step S04, for a right-hand wave, the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
In steps S05 and S06, for a right-hand wave: the gesture motion region requires the right wrist to be more than 1/3 of the arm's length above the right elbow; taking the vertically-upward direction as the axis of symmetry, the left zone lies 8 centimeters to the left of the right elbow's abscissa and the right zone 8 centimeters to its right; when the right hand's abscissa appears in the left zone, then the right zone, then the left zone (or right, left, right), with each switch occurring within 1 second, the right hand is judged to have made a wave gesture. At the same time, to account for ergonomics, several additional axes of symmetry deflected to the right are added besides the vertical one, to accommodate the user's natural motion; if the criterion holds under any one of these axes, the right hand is judged to have made the wave gesture. A sketch of this zone-alternation check follows.
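A minimal sketch of the zone-alternation check, assuming y is the upward axis and x the left-right axis after rectification, coordinates in metres, and a single vertical axis of symmetry; the additional deflected axes mentioned above would be handled by re-running the check in each rotated frame. The sample layout and names are illustrative.

    def right_hand_wave(samples, arm_len, max_switch_gap=1.0, zone_halfwidth=0.08):
        """samples: list of (t, wrist_xyz, elbow_xyz) tuples, rectified, in
        metres (zone_halfwidth=0.08 is the 8 cm zone offset). Detects a
        left-right-left or right-left-right alternation of the raised wrist
        with at most max_switch_gap seconds between zone switches."""
        seq, last_t = [], None
        for t, wrist, elbow in samples:
            if wrist[1] - elbow[1] < arm_len / 3.0:
                continue                       # wrist not high enough above the elbow
            if wrist[0] < elbow[0] - zone_halfwidth:
                zone = "L"
            elif wrist[0] > elbow[0] + zone_halfwidth:
                zone = "R"
            else:
                continue                       # between the two zones
            if not seq or seq[-1] != zone:     # the hand switched zones
                if last_t is not None and t - last_t > max_switch_gap:
                    seq = []                   # switch took too long: restart
                seq.append(zone)
            last_t = t                         # time the hand was last seen in a zone
            if len(seq) >= 3:                  # L-R-L or R-L-R achieved
                return True
        return False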
The foregoing are only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the present patent shall fall within the coverage of the present invention.

Claims (6)

1. A dynamic 3D gesture recognition method based on joint spatial position data, characterized by comprising the following steps:
S01: capturing and identifying human skeleton position information by a motion-sensing camera connected to a computer, and storing the captured and identified skeleton position information in a buffer together with its acquisition time, the skeleton position information comprising the spatial position data of each joint reference point of the human body;
S02: preprocessing the newly stored skeleton position information on the basis of the skeleton position information already in the buffer;
S03: rectifying the skeleton position information to an upright, front-facing frame and normalizing it, using the spatial position data of each joint reference point and the camera's own attitude information;
S04: selecting different joint reference points according to the gesture motion concerned;
S05: selecting from the buffer the most recent segment of continuous, reliable spatial position data of each joint reference point, and setting, for each target gesture, the following criteria: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, and target-point movement direction; and, for the motion-pattern characteristics of each gesture, making a judgment according to the criterion order and the timing requirements between criteria;
S06: when the most recent segment of continuous, reliable spatial position data of each joint reference point in the buffer satisfies the gesture's criterion order and the timing requirements between criteria, judging that the human body has made the gesture;
S07: from the skeleton position information stored in this segment of the buffer, computing the action origin point of each frame from the spatial position data of each joint reference point; obtaining the motion path of the target joint point for each gesture motion; and, based on the characteristics of the gesture, deriving from the motion path and the action origin point the intended direction of the action and the confidence range of that direction.
2. The dynamic 3D gesture recognition method based on joint spatial position data according to claim 1, characterized in that:
in step S07, for a right-hand wave, the leftmost and rightmost positions reached by the right hand in the left and right zones and the mean spatial position of the right elbow during that period are selected; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the mean spatial position of the shoulder midpoint is taken as the action origin point; and the confidence range of the intent is the sector swept out between the action origin point and the leftmost and rightmost positions of the right hand.
3. The dynamic 3D gesture recognition method based on joint spatial position data according to claim 1, characterized in that: the motion-sensing camera is a Microsoft Kinect or an ASUS Xtion.
4. The dynamic 3D gesture recognition method based on joint spatial position data according to claim 1, characterized in that the preprocessing specifically comprises: judging whether the newly stored skeleton position information is reliable from the variance of the spatial distribution of adjacent frames of skeleton position information, and applying a moving average to the spatial position data of each joint reference point of the reliable data.
5. The dynamic 3D gesture recognition method based on joint spatial position data according to claim 1, characterized in that each joint reference point of the human body comprises: the shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, and head.
6. The dynamic 3D gesture recognition method based on joint spatial position data according to claim 1, characterized in that: in step S04, for a right-hand wave, the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
CN201410589578.5A 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint spatial position data Active CN104298358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410589578.5A CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint spatial position data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410589578.5A CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint spatial position data

Publications (2)

Publication Number Publication Date
CN104298358A CN104298358A (en) 2015-01-21
CN104298358B true CN104298358B (en) 2017-11-21

Family

ID=52318120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410589578.5A Active CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint spatial position data

Country Status (1)

Country Link
CN (1) CN104298358B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302310B (en) * 2015-11-12 2018-08-31 姚焕根 A kind of gesture identifying device, system and method
CN106914016B (en) * 2015-12-25 2020-12-04 北京奇虎科技有限公司 Game player determination method and device
CN108108709B (en) * 2017-12-29 2020-10-16 纳恩博(北京)科技有限公司 Identification method and device and computer storage medium
CN108399367B (en) * 2018-01-31 2020-06-23 深圳市阿西莫夫科技有限公司 Hand motion recognition method and device, computer equipment and readable storage medium
CN109213189A (en) * 2018-07-19 2019-01-15 安徽共生物流科技有限公司 A kind of unmanned plane inspection system and movement determination method
CN112257639A (en) * 2020-10-30 2021-01-22 福州大学 Student learning behavior identification method based on human skeleton
CN115775347A (en) * 2021-11-04 2023-03-10 中国科学院深圳先进技术研究院 Taijiquan identification method based on fusion information, terminal device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509092A (en) * 2011-12-12 2012-06-20 北京华达诺科技有限公司 Spatial gesture identification method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101483713B1 (en) * 2008-06-30 2015-01-16 Samsung Electronics Co., Ltd. Apparatus and method for capturing human motion

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509092A (en) * 2011-12-12 2012-06-20 北京华达诺科技有限公司 Spatial gesture identification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cao Chuqing, "Research on Body Motion Recognition for Multi-mode Human Interaction" (面向多方式人际交互的肢体动作识别研究), China Doctoral Dissertations Full-text Database, Information Science and Technology, No. 3, 2014-03-15, pp. 29-34 *

Also Published As

Publication number Publication date
CN104298358A (en) 2015-01-21

Similar Documents

Publication Publication Date Title
CN104298358B (en) Dynamic 3D gesture recognition method based on joint spatial position data
EP2674913B1 (en) Three-dimensional object modelling fitting & tracking.
US9354709B1 (en) Tilt gesture detection
CN107688342B (en) Obstacle-avoidance control system and method for a robot
CN108475439B (en) Three-dimensional model generation system, three-dimensional model generation method, and recording medium
CA2804902C (en) A method circuit and system for human to machine interfacing by hand gestures
KR101738569B1 (en) Method and system for gesture recognition
US9135502B2 (en) Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
CN105362048B (en) Obstacle information reminding method, device and mobile device based on mobile device
CN103999126B (en) Method and apparatus for estimating pose
CA2784554C (en) Head recognition method
CN111095164B (en) Method and apparatus for detecting user input in accordance with gestures
CN104515992B (en) Method and device for spatial scanning and positioning using ultrasonic waves
CN108931983A (en) Map construction method and robot thereof
US20120163723A1 (en) Classification of posture states
CN106846403A (en) Method and device for hand positioning in three-dimensional space, and smart device
CN105849673A (en) Human-to-computer natural three-dimensional hand gesture based navigation method
US20100328319A1 (en) Information processor and information processing method for performing process adapted to user motion
CN109760627A (en) Hands-free entry method
CN101996311A (en) Yoga stance recognition method and system
JP6320016B2 (en) Object detection apparatus, object detection method and program
CN116416518A (en) Intelligent obstacle avoidance method and device
KR102346294B1 (en) Method, system and non-transitory computer-readable recording medium for estimating user's gesture from 2d images
CN108108709A (en) Recognition method and device, and computer storage medium
KR101394279B1 (en) Method for recognition of user's motion by analysis of depth image and apparatus for analyzing user's motion using the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Unit 205, No. 31 Wanghai Road, Software Park Phase II, Xiamen, Fujian 361000

Applicant after: Conductor (Xiamen) Technology Co., Ltd.

Address before: Room 102, Unit I19, No. 22 Guanri Road, Software Park Phase II, Xiamen, Fujian 361009

Applicant before: Conductor (Xiamen) Intelligent Technology Co., Ltd.

COR Change of bibliographic data
GR01 Patent grant