CN104298358A - Dynamic 3D gesture recognition method based on joint space position data - Google Patents

Dynamic 3D gesture recognition method based on joint space position data

Info

Publication number
CN104298358A
Authority
CN
China
Prior art keywords
point
position data
gesture
joint
positional information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410589578.5A
Other languages
Chinese (zh)
Other versions
CN104298358B (en)
Inventor
曾子辕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CONDUCTOR (XIAMEN) INTELLIGENT TECHNOLOGY Co Ltd
Original Assignee
CONDUCTOR (XIAMEN) INTELLIGENT TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CONDUCTOR (XIAMEN) INTELLIGENT TECHNOLOGY Co Ltd filed Critical CONDUCTOR (XIAMEN) INTELLIGENT TECHNOLOGY Co Ltd
Priority to CN201410589578.5A priority Critical patent/CN104298358B/en
Publication of CN104298358A publication Critical patent/CN104298358A/en
Application granted granted Critical
Publication of CN104298358B publication Critical patent/CN104298358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The invention discloses a dynamic 3D gesture recognition method based on joint space position data. The method comprises the following steps: human skeleton position information is collected and recognized by a motion-sensing camera connected to a computer, and forward correction and normalization are applied to the skeleton position information using the spatial position data of the human joint reference points and the camera's own attitude information; different joint reference points are selected for different gesture actions; judgment is made against the criterion sequence and its time requirements according to the motion-pattern characteristics of each gesture; when a buffered segment of the latest continuous, reliable spatial position data of the joint reference points satisfies the criterion sequence of a gesture and the time requirements within it, the human body is judged to have made that gesture. Dynamic 3D gesture recognition is performed with a motion-sensing camera, and the method is simple, practical, and accurate.

Description

Dynamic 3D gesture recognition method based on joint space position data
Technical field
The present invention relates to the field of 3D gesture recognition, and more particularly to a dynamic 3D gesture recognition method based on joint space position data.
Background art
With the continued spread of touch-screen technology, users have gradually adapted to and become familiar with interacting with machines. Human-computer interaction has since climbed a step higher and entered the era of gesture recognition, although the technology is not yet fully mature. Gesture recognition today appears mainly in the entertainment and gaming markets, but what impact will it have on daily life? Imagine sitting on a sofa and controlling the lights and the television with a wave of the hand, or a car automatically detecting whether pedestrians are nearby. As gesture recognition continues to deepen its support for human-computer interaction, these and other capabilities will soon be realized. Gesture recognition has long been studied with 2D vision, but with the emergence of 3D sensor technology, 3D gesture recognition will see further development.
Summary of the invention
The object of the present invention is to provide a dynamic 3D gesture recognition method based on joint space position data.
The technical solution of the present invention is as follows: a dynamic 3D gesture recognition method based on joint space position data, characterized by comprising the following steps:
S01: human skeleton position information is collected and recognized by a motion-sensing camera connected to a computer, and the collected and recognized skeleton position information is stored in a buffer together with its acquisition time; the skeleton position information comprises the spatial position data of each human joint reference point;
S02: the newly stored skeleton position information is pre-processed on the basis of the skeleton position information already in the buffer;
S03: forward correction and normalization are applied to the skeleton position information using the spatial position data of each joint reference point and the camera's own attitude information;
S04: different joint reference points are selected according to the gesture action;
S05: the latest segment of continuous, reliable spatial position data of the joint reference points is taken from the buffer, and for each target gesture the following criteria are set: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, and target-point movement direction; judgment is then made against the criterion sequence and the time requirements between criteria according to the motion-pattern characteristics of each gesture;
S06: when the latest segment of continuous, reliable spatial position data of the joint reference points in the buffer satisfies the criterion sequence of a gesture and the time requirements between its criteria, the human body is judged to have made that gesture.
In an embodiment of the present invention, step S07 follows step S06: from the skeleton position information stored in this segment of the buffer, the spatial position data of the joint reference points are used to compute the action origin point of each frame; the motion path of the target joint point is obtained for the gesture action, and, based on the characteristics of the gesture, the intended direction of the action and the confidence band of that direction are derived from the motion path and the action origin point.
In an embodiment of the present invention, in step S07, taking a right-hand wave as an example: the leftmost and rightmost positions at which the right hand appears in the left and right regions are selected, together with the average spatial position of the right elbow over that period; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the average spatial position of the shoulder midpoint is taken as the action origin point; and the confidence band of the intended direction is the sector outlined by the action origin point and the leftmost and rightmost positions of the right hand.
In an embodiment of the present invention, the motion-sensing camera is a Kinect or an ASUS Xtion.
In an embodiment of the present invention, the pre-processing is specifically: the reliability of the newly stored skeleton position information is judged from the variance of the spatial distribution over the adjacent frames of skeleton position information, and running-mean smoothing is applied to the joint reference point spatial position data of the reliable frames.
In an embodiment of the present invention, the joint reference points comprise: shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, and head.
In an embodiment of the present invention, in step S04, for a right-hand wave the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
The present invention performs dynamic 3D gesture recognition with a motion-sensing camera; the method is simple, practical, and accurate.
Brief description of the drawings
Fig. 1 is a flow chart of the method according to one embodiment of the invention.
Fig. 2 is a flow chart of the method according to another embodiment of the invention.
Detailed description of the embodiments
To make the above features and advantages of the present invention more apparent, specific embodiments are described in detail below with reference to the accompanying drawings; the invention is not, however, limited to these embodiments.
As shown in Fig. 1, the invention provides a dynamic 3D gesture recognition method based on joint space position data, comprising the following steps:
S01: human skeleton position information is collected and recognized by a motion-sensing camera connected to a computer, and the collected and recognized skeleton position information is stored in the computer's buffer together with its acquisition time; the skeleton position information comprises the spatial position data of each human joint reference point;
S02: the newly stored skeleton position information is pre-processed on the basis of the skeleton position information already in the computer's buffer;
S03: forward correction and normalization are applied to the skeleton position information using the spatial position data of each joint reference point and the camera's own attitude information (the attitude information, such as "upright", "level", or "landscape", is measured by the camera's gravity sensor); a coordinate-correction sketch is given after these steps;
S04: different joint reference points are selected according to the gesture action;
S05: the latest segment of continuous, reliable spatial position data of the joint reference points is taken from the buffer, and for each target gesture the following criteria and their allowable errors are set: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, target-point movement direction, and so on; judgment is then made against the criterion sequence and the time requirements between criteria according to the motion-pattern characteristics of each gesture (see the matching sketch after these steps);
S06: when the latest segment of continuous, reliable spatial position data of the joint reference points in the buffer satisfies the criterion sequence of a gesture and the time requirements between its criteria, the human body is judged to have made that gesture.
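As an illustration of the forward correction and normalization in step S03, the following is a minimal Python sketch under assumptions the patent does not spell out: the camera attitude is taken as pitch and roll angles derived from the gravity sensor, the skeleton is rotated so that gravity points along the negative y axis, the shoulder midpoint becomes the origin, and positions are scaled by shoulder width. The joint names and the shoulder-width normalization are the editor's assumptions, not the patent's.

    import numpy as np

    def normalize_skeleton(joints, pitch, roll):
        """Forward-correct and normalize one frame of joints (step S03 sketch).

        joints: dict of joint name -> (x, y, z) in camera coordinates (meters)
        pitch, roll: camera attitude in radians, assumed derived from the
                     gravity sensor's "upright"/"level"/"landscape" readings
        """
        cp, sp = np.cos(-pitch), np.sin(-pitch)
        cr, sr = np.cos(-roll), np.sin(-roll)
        rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # undo camera pitch
        rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])  # undo camera roll
        pts = {k: rz @ rx @ np.asarray(v, dtype=float) for k, v in joints.items()}
        origin = pts["shoulder_mid"]                           # assumed joint name
        scale = np.linalg.norm(pts["left_shoulder"] - pts["right_shoulder"])
        return {k: (v - origin) / scale for k, v in pts.items()}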
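To make the criterion-sequence judgment of steps S05 and S06 concrete, here is a minimal sketch assuming time-sorted frames; the names Frame, Criterion, and matches_sequence are illustrative and do not come from the patent. It walks the buffered frames, advances through the criteria in order, and accepts the gesture only when every criterion is met and each one is reached within its allowed time gap.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Frame:
        t: float                                       # acquisition time in seconds
        joints: Dict[str, Tuple[float, float, float]]  # normalized joint positions

    @dataclass
    class Criterion:
        test: Callable[[Frame], bool]   # e.g. "right wrist in left zone"
        max_gap: float                  # max seconds since the previous criterion

    def matches_sequence(frames: List[Frame], criteria: List[Criterion]) -> bool:
        """True if the time-sorted frames satisfy the criteria in order, each
        within its allowed gap (steps S05/S06); on False the caller simply
        retries with a newer buffer segment."""
        idx = 0          # index of the next criterion to satisfy
        last_t = None    # time at which the previous criterion was met
        for f in frames:
            if idx == len(criteria):
                break
            c = criteria[idx]
            if c.test(f):
                if last_t is not None and f.t - last_t > c.max_gap:
                    return False       # criterion reached too late: reject
                last_t = f.t
                idx += 1
        return idx == len(criteria)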
As shown in Fig. 2, in another embodiment of the present invention, step S07 follows step S06: from the skeleton position information stored in this segment of the buffer, the spatial position data of the joint reference points are used to compute the action origin point of each frame; the motion path of the target joint point is obtained for the gesture action, and, based on the characteristics of the gesture, the intended direction of the action and the confidence band of that direction are derived from the motion path and the action origin point.
In step S07, taking a right-hand wave as an example: the leftmost and rightmost positions at which the right hand appears in the left and right regions are selected, together with the average spatial position of the right elbow over that period; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the average spatial position of the shoulder midpoint is taken as the action origin point; and the confidence band of the intended direction is the sector outlined by the action origin point and the leftmost and rightmost positions of the right hand.
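As a hedged illustration of the direction computation just described, the normal of the plane through the three points (leftmost hand position, rightmost hand position, mean elbow position) can be obtained from a cross product. The function name and NumPy usage are the editor's; the sign of the normal depends on the point order, so in practice one would pick the sign pointing away from the body.

    import numpy as np

    def action_direction(leftmost, rightmost, elbow_mean):
        """Unit normal of the plane through three 3D points, taken as the
        intended direction of a right-hand wave (step S07 example)."""
        p0, p1, p2 = (np.asarray(p, dtype=float)
                      for p in (leftmost, rightmost, elbow_mean))
        n = np.cross(p1 - p0, p2 - p0)   # perpendicular to the hand-sweep plane
        return n / np.linalg.norm(n)     # sign depends on the point order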
The motion-sensing camera is a Kinect motion-sensing camera or an ASUS Xtion motion-sensing camera.
The pre-processing is specifically: the reliability of the newly stored skeleton position information is judged from the variance of the spatial distribution over the adjacent frames of skeleton position information, and running-mean smoothing is applied to the joint reference point spatial position data of the reliable frames.
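A minimal sketch of this pre-processing follows, under the assumption that "variance of the spatial distribution" means the per-joint positional variance over a short window of adjacent frames; the window size and threshold are illustrative and would be tuned per camera.

    import numpy as np

    WINDOW = 5        # number of adjacent frames examined (assumed)
    VAR_MAX = 0.01    # per-joint variance threshold in m^2 (assumed)

    def is_reliable(recent, new):
        """Judge a new frame reliable when every joint's positional variance
        over the adjacent frames (recent window plus the new frame) stays
        below the threshold. Each frame is an (n_joints, 3) array."""
        stack = np.stack(recent[-(WINDOW - 1):] + [new])   # (frames, joints, 3)
        return bool((stack.var(axis=0).sum(axis=-1) < VAR_MAX).all())

    def running_mean(recent, new):
        """Running-mean smoothing of a reliable frame over the same window."""
        stack = np.stack(recent[-(WINDOW - 1):] + [new])
        return stack.mean(axis=0)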
The joint reference points comprise: shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, head, and so on.
In step S04, for a right-hand wave the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
In steps S05 and S06, taking a right-hand wave as an example: the gesture motion region criterion is that the right wrist lies at least one third of an arm length above the right elbow; with the vertical through the right elbow as the axis of symmetry, the left zone lies more than 8 cm to the left of the right elbow's horizontal coordinate and the right zone more than 8 cm to its right; when the right hand's horizontal coordinate appears in the order left zone - right zone - left zone or right zone - left zone - right zone, with each switch occurring within one second, the right hand is judged to have made a wave gesture. At the same time, for ergonomic reasons, several axes of symmetry tilted to the right are added alongside the vertical one to accommodate natural user motion; if the judgment holds under any one of these axes, the right hand is judged to have made a wave gesture.
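A sketch of this zone-switching check, handling only the vertical axis of symmetry (the tilted axes would rotate the coordinates before running the same test): the 8 cm zone width and one-second switch limit follow the text above, while the joint names, the Frame layout from the earlier sketch, positions in meters with y up, and the arm-length estimate are the editor's assumptions.

    import math

    def detect_wave(frames, max_switch_interval=1.0, zone_halfwidth=0.08):
        """Right-hand wave (steps S05/S06 example): the wrist must stay at
        least one third of an arm length above the elbow, and its x coordinate
        must visit the left/right zones (more than 8 cm from the elbow's x)
        in the order L-R-L or R-L-R, with each switch within one second."""
        zones = []    # (zone, time) at each zone entry, duplicates collapsed
        for f in frames:                       # assumed sorted by time
            wrist = f.joints["right_wrist"]
            elbow = f.joints["right_elbow"]
            shoulder = f.joints["right_shoulder"]
            arm = math.dist(wrist, elbow) + math.dist(elbow, shoulder)
            if wrist[1] - elbow[1] < arm / 3:  # motion-region criterion not met
                continue
            dx = wrist[0] - elbow[0]
            zone = ("L" if dx < -zone_halfwidth
                    else "R" if dx > zone_halfwidth else None)
            if zone and (not zones or zones[-1][0] != zone):
                if zones and f.t - zones[-1][1] > max_switch_interval:
                    zones = []                 # switch too slow: restart pattern
                zones.append((zone, f.t))
        seq = "".join(z for z, _ in zones)
        return "LRL" in seq or "RLR" in seq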
The above describes only preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the claims of the present application shall fall within the scope of the present invention.

Claims (7)

1. A dynamic 3D gesture recognition method based on joint space position data, characterized by comprising the following steps:
S01: collecting and recognizing human skeleton position information by a motion-sensing camera connected to a computer, and storing the collected and recognized skeleton position information in a buffer together with its acquisition time, the skeleton position information comprising the spatial position data of each human joint reference point;
S02: pre-processing the newly stored skeleton position information on the basis of the skeleton position information already in the buffer;
S03: applying forward correction and normalization to the skeleton position information using the spatial position data of each joint reference point and the camera's own attitude information;
S04: selecting different joint reference points according to the gesture action;
S05: taking the latest segment of continuous, reliable spatial position data of the joint reference points from the buffer and setting, for each target gesture, the following criteria: gesture motion region, gesture motion time, target-point dwell time, target-point movement distance, and target-point movement direction; and judging against the criterion sequence and the time requirements between criteria according to the motion-pattern characteristics of each gesture;
S06: when the latest segment of continuous, reliable spatial position data of the joint reference points in the buffer satisfies the criterion sequence of a gesture and the time requirements between its criteria, judging that the human body has made that gesture.
2. The dynamic 3D gesture recognition method based on joint space position data according to claim 1, characterized in that step S07 follows step S06: from the skeleton position information stored in this segment of the buffer, the spatial position data of the joint reference points are used to compute the action origin point of each frame; the motion path of the target joint point is obtained for the gesture action, and, based on the characteristics of the gesture, the intended direction of the action and the confidence band of that direction are derived from the motion path and the action origin point.
3. The dynamic 3D gesture recognition method based on joint space position data according to claim 2, characterized in that, in step S07, taking a right-hand wave as an example: the leftmost and rightmost positions at which the right hand appears in the left and right regions are selected, together with the average spatial position of the right elbow over that period; the normal vector of the plane formed by these three points is computed as the intended direction of the action; the average spatial position of the shoulder midpoint is taken as the action origin point; and the confidence band of the intended direction is the sector outlined by the action origin point and the leftmost and rightmost positions of the right hand.
4. The dynamic 3D gesture recognition method based on joint space position data according to claim 1, characterized in that the motion-sensing camera is a Kinect or an ASUS Xtion.
5. The dynamic 3D gesture recognition method based on joint space position data according to claim 1, characterized in that the pre-processing is specifically: judging the reliability of the newly stored skeleton position information from the variance of the spatial distribution over the adjacent frames of skeleton position information, and applying running-mean smoothing to the joint reference point spatial position data of the reliable frames.
6. The dynamic 3D gesture recognition method based on joint space position data according to claim 1, characterized in that the joint reference points comprise: shoulder midpoint, left shoulder point, right shoulder point, belly midpoint, left wrist, left elbow, right wrist, right elbow, and head.
7. The dynamic 3D gesture recognition method based on joint space position data according to claim 1, characterized in that, in step S04, for a right-hand wave the right wrist, right elbow, right shoulder point, shoulder midpoint, head, and belly midpoint are selected as reference points.
CN201410589578.5A 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint space position data Active CN104298358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410589578.5A CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint space position data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410589578.5A CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint space position data

Publications (2)

Publication Number Publication Date
CN104298358A true CN104298358A (en) 2015-01-21
CN104298358B CN104298358B (en) 2017-11-21

Family

ID=52318120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410589578.5A Active CN104298358B (en) 2014-10-29 2014-10-29 Dynamic 3D gesture recognition method based on joint space position data

Country Status (1)

Country Link
CN (1) CN104298358B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302310A (en) * 2015-11-12 2016-02-03 姚焕根 Gesture recognition device, system and method
CN106914016A (en) * 2015-12-25 2017-07-04 北京奇虎科技有限公司 Performer determines method and device
CN108108709A (en) * 2017-12-29 2018-06-01 纳恩博(北京)科技有限公司 A kind of recognition methods and device, computer storage media
CN108399367A (en) * 2018-01-31 2018-08-14 深圳市阿西莫夫科技有限公司 Hand motion recognition method, apparatus, computer equipment and readable storage medium storing program for executing
CN109213189A (en) * 2018-07-19 2019-01-15 安徽共生物流科技有限公司 A kind of unmanned plane inspection system and movement determination method
CN112257639A (en) * 2020-10-30 2021-01-22 福州大学 Student learning behavior identification method based on human skeleton
WO2023077659A1 (en) * 2021-11-04 2023-05-11 中国科学院深圳先进技术研究院 Fusion information-based tai chi recognition method, terminal device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090322763A1 (en) * 2008-06-30 2009-12-31 Samsung Electronics Co., Ltd. Motion Capture Apparatus and Method
CN102509092A (en) * 2011-12-12 2012-06-20 北京华达诺科技有限公司 Spatial gesture identification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090322763A1 (en) * 2008-06-30 2009-12-31 Samsung Electronics Co., Ltd. Motion Capture Apparatus and Method
CN102509092A (en) * 2011-12-12 2012-06-20 北京华达诺科技有限公司 Spatial gesture identification method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹雏清: "Research on Limb Movement Recognition for Multi-mode Human Interaction" (《面向多方式人际交互的肢体动作识别研究》), China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302310A (en) * 2015-11-12 2016-02-03 姚焕根 Gesture recognition device, system and method
CN105302310B (en) * 2015-11-12 2018-08-31 姚焕根 A kind of gesture identifying device, system and method
CN106914016A (en) * 2015-12-25 2017-07-04 北京奇虎科技有限公司 Performer determines method and device
CN108108709A (en) * 2017-12-29 2018-06-01 纳恩博(北京)科技有限公司 A kind of recognition methods and device, computer storage media
CN108399367A (en) * 2018-01-31 2018-08-14 深圳市阿西莫夫科技有限公司 Hand motion recognition method, apparatus, computer equipment and readable storage medium storing program for executing
CN108399367B (en) * 2018-01-31 2020-06-23 深圳市阿西莫夫科技有限公司 Hand motion recognition method and device, computer equipment and readable storage medium
CN109213189A (en) * 2018-07-19 2019-01-15 安徽共生物流科技有限公司 A kind of unmanned plane inspection system and movement determination method
CN112257639A (en) * 2020-10-30 2021-01-22 福州大学 Student learning behavior identification method based on human skeleton
WO2023077659A1 (en) * 2021-11-04 2023-05-11 中国科学院深圳先进技术研究院 Fusion information-based tai chi recognition method, terminal device, and storage medium

Also Published As

Publication number Publication date
CN104298358B (en) 2017-11-21

Similar Documents

Publication Publication Date Title
CN104298358A (en) Dynamic 3D gesture recognition method based on joint space position data
US11030237B2 (en) Method and apparatus for identifying input features for later recognition
CN106846403B (en) Method and device for positioning hand in three-dimensional space and intelligent equipment
KR102437456B1 (en) Event camera-based deformable object tracking
EP2904472B1 (en) Wearable sensor for tracking articulated body-parts
CN107003721B (en) Improved calibration for eye tracking systems
CN108475439B (en) Three-dimensional model generation system, three-dimensional model generation method, and recording medium
JP5855751B2 (en) Modeling, fitting, and tracking of 3D objects
US8933882B2 (en) User centric interface for interaction with visual display that recognizes user intentions
US9652043B2 (en) Recognizing commands with a depth sensor
CN107787497B (en) Method and apparatus for detecting gestures in a user-based spatial coordinate system
WO2023000119A1 (en) Gesture recognition method and apparatus, system, and vehicle
CN112926423B (en) Pinch gesture detection and recognition method, device and system
TW201120681A (en) Method and system for operating electric apparatus
WO2013008236A1 (en) System and method for computer vision based hand gesture identification
CN102830797A (en) Man-machine interaction method and system based on sight judgment
CN108108709A (en) A kind of recognition methods and device, computer storage media
KR102227494B1 (en) Apparatus and method for processing an user input using movement of an object
KR101394279B1 (en) Method for recognition of user's motion by analysis of depth image and apparatus for analyzing user's motion using the same
Vieriu et al. Background invariant static hand gesture recognition based on Hidden Markov Models
CN105204630A (en) Method and system for garment design through motion sensing
US11887257B2 (en) Method and apparatus for virtual training based on tangible interaction
KR101860138B1 (en) Apparatus for sharing data and providing reward in accordance with shared data
KR101909326B1 (en) User interface control method and system using triangular mesh model according to the change in facial motion
JP2017227687A (en) Camera assembly, finger shape detection system using camera assembly, finger shape detection method using camera assembly, program implementing detection method, and recording medium of program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Unit 205, No. 31 Wanghai Road, Phase II, Xiamen Software Park, Fujian 361000

Applicant after: Conductor (Xiamen) Technology Co., Ltd.

Address before: Room 102, Unit I19, No. 22 Guanri Road, Phase II, Xiamen Software Park, Fujian 361009

Applicant before: CONDUCTOR (XIAMEN) INTELLIGENT TECHNOLOGY CO., LTD.

COR Change of bibliographic data
GR01 Patent grant