CN108089715A - Presentation assistance system based on a depth camera - Google Patents

Presentation assistance system based on a depth camera

Info

Publication number
CN108089715A
CN108089715A (application CN201810054910.6A)
Authority
CN
China
Prior art keywords
gesture
dimensional hand
hand model
state
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810054910.6A
Other languages
Chinese (zh)
Inventor
赵然
赵一然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810054910.6A
Publication of CN108089715A
Pending legal status

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The present invention provides a presentation assistance system based on a depth camera, comprising: a data acquisition module, a position control module, a gesture recognition module, and a presentation function control module, wherein: the data acquisition module includes a depth camera, which is mounted on a pan-tilt head whose rotation angle is controlled by a motor, and is configured to collect depth images of the user's gestures; the position control module identifies the position of the user's gesture from the collected gesture depth images and steers the pan-tilt head accordingly, achieving real-time tracking; the gesture recognition module recognizes the user's gestures from the collected gesture depth images and converts them into corresponding operation instructions; and the presentation function control module is configured to execute presentation assistance functions according to the operation instructions. By collecting images of the presenter's gestures, the present invention recognizes gestures accurately and converts different gestures into different execution commands to operate the presentation system, which is simple and convenient.

Description

Presentation assistance system based on a depth camera
Technical field
The present invention relates to the field of electronic technology, and in particular to a presentation assistance system based on a depth camera.
Background technology
At present, in presentation tasks such as conferences and teaching, the file to be presented on a computer is usually projected onto a large screen through a projector, and the presenter generally stands beside the large screen while presenting, which makes it inconvenient to operate the computer at the same time. In this situation, the presenter usually needs another person to help operate the computer in order to control the content shown on the large screen.
In the prior art, a presenter can use a laser pointer to highlight the key points of the speech for the audience and to perform simple functions such as page turning, but the functions such a pointer can realize are simple and limited, and cannot meet the demands placed on present-day presentation assistance equipment.
Summary of the invention
In view of the above problems, the present invention aims to provide a presentation assistance system based on a depth camera.
The object of the present invention is achieved by the following technical solution:
A presentation assistance system based on a depth camera, comprising: a data acquisition module, a position control module, a gesture recognition module, and a presentation function control module, wherein:
the data acquisition module includes a depth camera, which is mounted on a pan-tilt head whose rotation angle is controlled by a motor, and is configured to collect depth images of the user's gestures;
the position control module identifies the position of the user's gesture from the collected gesture depth images and steers the pan-tilt head accordingly, achieving real-time tracking;
the gesture recognition module recognizes the user's gestures from the collected gesture depth images and converts them into corresponding operation instructions;
the presentation function control module is configured to execute presentation assistance functions according to the operation instructions.
The beneficial effects of the present invention are: the system recognizes gestures accurately by collecting images of the presenter's gestures, and converts different gestures into different execution commands to operate the presentation system, which is simple and convenient; different presentation assistance functions can be set and executed according to actual needs, and the rich functionality meets the operation requirements of the presenter during an actual presentation; through position control, the depth camera is controlled to follow the presenter's movement, freeing the presenter from the limitation of the camera's field of view, with strong flexibility; and the system is highly compatible and can be applied to existing presentation systems.
Description of the drawings
The invention will be further described with the aid of the accompanying drawings, but the embodiments in the drawings do not constitute any limitation of the present invention. For those of ordinary skill in the art, other drawings can be obtained from the following drawings without creative effort.
Fig. 1 is a block diagram of the present invention;
Fig. 2 is a block diagram of the gesture recognition module of the present invention;
Fig. 3 shows the key monitoring region of the position control module of the present invention.
Reference numerals:
data acquisition module 1, position control module 2, gesture recognition module 3, presentation function control module 4, model establishment unit 30, gesture tracking unit 31, matching unit 32, gesture recognition unit 33, instruction conversion unit 34.
Specific embodiments
The invention will be further described with reference to the following application scenario.
Referring to Fig. 1, a presentation assistance system based on a depth camera comprises: a data acquisition module 1, a position control module 2, a gesture recognition module 3, and a presentation function control module 4, wherein:
the data acquisition module 1 includes a depth camera, which is mounted on a pan-tilt head whose rotation angle is controlled by a motor, and is configured to collect depth images of the user's gestures;
the position control module 2 identifies the position of the user's gesture from the collected gesture depth images and steers the pan-tilt head accordingly, achieving real-time tracking and ensuring that the presenter's hand always stays within the image acquisition range of the camera;
the gesture recognition module 3 recognizes the user's gestures from the collected gesture depth images and converts them into corresponding operation instructions;
the presentation function control module 4 is configured to execute presentation assistance functions according to the operation instructions.
The position control module specifically operates as follows: a key monitoring region is set at the center of the gesture image collected by the depth camera (see Fig. 3); the key monitoring region is a rectangular area at the center of the picture. When the gesture is detected to have left the key monitoring region, the pan-tilt head is rotated so that the presenter's gesture is repositioned at the center of the key monitoring region; as long as the presenter's gesture moves only within the key monitoring region, the position control module does not rotate the pan-tilt head.
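The key-monitoring-region logic described above amounts to a simple centering check. The following Python sketch is illustrative only; the function name, the region fraction, and the form of the correction values are assumptions, not part of the patent:

```python
# Hypothetical sketch of the position-control logic: keep the hand inside a
# central rectangular "key monitoring region" of the depth image, and rotate
# the pan-tilt head only when the hand leaves that region.

def region_offset(hand_xy, frame_size, region_frac=0.5):
    """Return a (dx, dy) pan/tilt correction toward frame center,
    or (0, 0) if the hand is inside the central monitoring region."""
    w, h = frame_size
    x, y = hand_xy
    # Central rectangle covering region_frac of each dimension.
    left, right = w * (1 - region_frac) / 2, w * (1 + region_frac) / 2
    top, bottom = h * (1 - region_frac) / 2, h * (1 + region_frac) / 2
    dx = 0 if left <= x <= right else x - w / 2
    dy = 0 if top <= y <= bottom else y - h / 2
    return dx, dy
```

A controller would translate a nonzero (dx, dy) into motor commands for the pan-tilt head and do nothing while the hand stays inside the region.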
By collecting images of the presenter's gestures, the above embodiment of the present invention recognizes gestures accurately and converts different gestures into different execution commands to operate the presentation system, which is simple and convenient. Different presentation assistance functions can be set and executed according to actual needs, and the rich functionality meets the operation requirements of the presenter during an actual presentation. Through position control, the depth camera follows the presenter's movement, freeing the presenter from the limitation of the camera's field of view, with strong flexibility. The system is highly compatible and can be applied to existing presentation systems.
Preferably, the presentation assistance functions include, but are not limited to, brush annotation, page turning, picture display, and video playback.
In this preferred embodiment, the system can set different presentation assistance functions according to the practical application requirements; different functions are triggered by the presenter using different gestures, and the rich functionality meets the operation requirements of the presenter during an actual presentation.
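The conversion of recognized gestures into operation instructions can be sketched as a lookup table; the gesture names and commands below are illustrative assumptions, not names used by the patent:

```python
# Illustrative mapping from recognized gestures to presentation commands.
GESTURE_COMMANDS = {
    "swipe_left": "next_page",
    "swipe_right": "previous_page",
    "pinch": "draw_annotation",
    "open_palm": "play_video",
}

def to_command(gesture):
    # Unrecognized gestures produce no operation instruction.
    return GESTURE_COMMANDS.get(gesture)
```

The presentation function control module would then dispatch on the returned command string.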
Preferably, referring to Fig. 2, the gesture recognition module includes a model establishment unit 30, a gesture tracking unit 31, a matching unit 32, a gesture recognition unit 33, and an instruction conversion unit 34, wherein:
the model establishment unit 30 is configured to establish a three-dimensional hand model and a state characteristic model in virtual space;
the matching unit 32 is configured to match the gesture in the acquired gesture depth image against the three-dimensional hand model in real time, projecting changes of the user's gesture onto the three-dimensional hand model;
the gesture tracking unit 31 is configured to track the three-dimensional hand model and obtain the state changes of the three-dimensional hand model;
the gesture recognition unit 33 is configured to recognize the gesture expressed by the three-dimensional hand model according to its state changes and output the user gesture recognition result;
the instruction conversion unit 34 is configured to output the corresponding operation instruction according to the user gesture recognition result.
In this preferred embodiment, a three-dimensional hand model is established and the gesture depth image is matched against the three-dimensional hand model in virtual space; when the presenter's gesture changes, the three-dimensional hand model changes synchronously. By tracking the three-dimensional hand model in virtual space, the gesture changes of the model are obtained and recognized to yield the final recognition result. Recognizing the presenter's gestures through the three-dimensional hand model gives strong robustness and high accuracy.
Preferably, the state characteristic model includes: a three-dimensional hand model state feature x_h comprising 26 degrees of freedom, namely 6 global degrees of freedom and 20 local degrees of freedom. The 6 global degrees of freedom comprise 3 translational and 3 rotational degrees of freedom, represented by a fixed point at the center of the palm of the three-dimensional hand model. The movements of the 5 fingers correspond to the 20 local degrees of freedom: except for the thumb, the MCP joint of each finger has 1 flexion-extension and 1 abduction-adduction degree of freedom, while the MCP joint of the thumb has only 1 flexion-extension degree of freedom; the IP joint of the thumb and the PIP and DIP joints of the remaining 4 fingers each have 1 flexion-extension degree of freedom; and the TM joint of the thumb has 2 degrees of freedom.
Here MCP (metacarpophalangeal) denotes the metacarpophalangeal joints, PIP (proximal interphalangeal) the proximal interphalangeal joints, DIP (distal interphalangeal) the distal interphalangeal joints, IP (interphalangeal) the interphalangeal joint of the thumb, and TM (trapeziometacarpal) the joint between the trapezium and the first metacarpal.
In this preferred embodiment, the three-dimensional hand state characteristic model is established in the above manner. The specially designed 26-degree-of-freedom model balances model precision against computational complexity well, improving system performance.
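The 26-degree-of-freedom accounting above can be written down and checked by summation; the joint breakdown below follows the description, with illustrative names:

```python
# DoF bookkeeping for the hand state model: 6 global DoF (3 translation +
# 3 rotation at a fixed palm point) plus 20 local finger DoF.
GLOBAL_DOF = 3 + 3  # translation + rotation

FINGER_DOF = {
    "thumb":  {"TM": 2, "MCP": 1, "IP": 1},    # thumb MCP: flexion only
    "index":  {"MCP": 2, "PIP": 1, "DIP": 1},  # MCP: flexion + abduction
    "middle": {"MCP": 2, "PIP": 1, "DIP": 1},
    "ring":   {"MCP": 2, "PIP": 1, "DIP": 1},
    "little": {"MCP": 2, "PIP": 1, "DIP": 1},
}

def total_dof():
    return GLOBAL_DOF + sum(sum(j.values()) for j in FINGER_DOF.values())
```

Summing the table gives 6 + 4 + 4 x 4 = 26, matching the state feature dimension stated above.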
Preferably, the matching unit 32 specifically operates as follows: when the user's gesture in the gesture depth image changes, corresponding state-change hypotheses are made for the three-dimensional hand model; the state matching error between each hypothesized state parameter of the three-dimensional hand model and the gesture in the image is computed; the state parameter with the minimum matching error is chosen as the optimal solution for the three-dimensional hand model; and the state of the three-dimensional hand model is updated according to this optimal solution, keeping the three-dimensional hand model synchronously matched to the gesture in the image.
The state matching error function used is of the form

    ε(γ, x_h) = ω1·E_D(γ, x_h) + ω2·E_S(γ, x_h) + ω3·E_T(x_h)

where ε(γ, x_h) denotes the matching error between the hand image γ and the three-dimensional hand model state feature hypothesis x_h, and:
E_S(γ, x_h) is the silhouette feature term, expressing the silhouette matching degree: with γ2(γ) the silhouette map of the hand image and r_S(x_h) the rendered silhouette map of the three-dimensional hand model, it sums the pixel area belonging to γ2(γ) but not to r_S(x_h) and the pixel area belonging to r_S(x_h) but not to γ2(γ);
E_D(γ, x_h) = Σ min(|γ1(γ) − r_D(x_h)|, T1) is the depth feature term: the per-pixel deviation between the hand depth image γ1(γ) and the depth map r_D(x_h) rendered from the state hypothesis x_h, clamped by the set maximum depth deviation constant T1;
E_T(x_h) = ||x_h^t − x_h^(t−1)||² is the state smoothness term, where x_h^t denotes the current frame's three-dimensional hand model state feature hypothesis and x_h^(t−1) the state feature hypothesis selected in the previous frame;
ω1, ω2 and ω3 denote the weight factors of the depth feature term, the silhouette feature term, and the state smoothness term, respectively.
Preferably, the matching unit 32 predicts the state parameters of the next frame (next state) from the optimal solution obtained for the previous frame (previous state) of the three-dimensional hand model, which effectively reduces the search space of the state parameters.
In this preferred embodiment, the gesture depth image and the three-dimensional hand model are matched in the manner described above. By combining the silhouette feature, the depth feature, and the state smoothness of the three-dimensional hand model, the state with the highest matching degree is searched for on the basis of the previous state and taken as the feature parameter of the current state, and the state feature of the three-dimensional hand model is updated accordingly. This achieves synchronous matching between the three-dimensional hand model and the gesture depth image, with strong adaptability, high accuracy, and good real-time performance, and lays a good foundation for the system's subsequent tracking and recognition of the gestures of the three-dimensional hand model.
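A minimal numeric sketch of the three-term matching error described above, assuming the silhouettes are boolean masks and the depth maps are arrays; the weights, shapes, and clamp value below are illustrative assumptions:

```python
import numpy as np

def matching_error(depth_img, depth_render, sil_img, sil_render,
                   state, prev_state, w=(1.0, 1.0, 0.1), t1=50.0):
    """Weighted sum of depth, silhouette, and state-smoothness terms."""
    w1, w2, w3 = w
    # Depth term: per-pixel deviation clamped at t1 so outliers saturate.
    e_depth = np.minimum(np.abs(depth_img - depth_render), t1).sum()
    # Silhouette term: pixel area in one silhouette but not the other.
    e_sil = np.logical_xor(sil_img, sil_render).sum()
    # Smoothness term: penalize jumps from the previous frame's state.
    e_smooth = np.sum((np.asarray(state) - np.asarray(prev_state)) ** 2)
    return w1 * e_depth + w2 * e_sil + w3 * e_smooth
```

A search over state hypotheses would evaluate this error for each rendered hypothesis and keep the minimizer as the current state.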
Preferably, the gesture tracking unit 31 tracks the three-dimensional hand model and obtains its state changes as follows:
The gesture tracking unit 31 is initialized: I particles are sampled from the prior distribution p(x_0), with weights w_0^i = 1/I, denoted {x_0^i, w_0^i}, where x_0^i denotes the i-th particle of the three-dimensional hand model state feature at t = 0;
Particle initial positions are obtained by introducing the newest observation into the objective function to be optimized; the particle initialization function used is

    x_t^(i,0) = p_(t−1)^(i,k) + r,  r ~ N(0, δ)

where x_t^(i,0) denotes the initial position of the i-th particle at time t; p_(t−1)^(i,k) denotes the best position experienced by the i-th particle at time t−1 up to iteration k, i.e. its individual historical optimum; r ~ N(0, δ) is zero-mean multivariate Gaussian noise with covariance matrix δ, whose diagonal entries are determined by the maximum inter-frame rotation angle or displacement difference of the sequence to be tracked;
The particles are then iteratively evolved, driving them toward regions of high likelihood probability; the iteration functions used take the particle-swarm form

    v_i^(n+1) = |R|·(v_i^n + (p_i^n − x_i^n) + (g^n − x_i^n))
    x_i^(n+1) = x_i^n + v_i^(n+1)

where p_i^n denotes the best position experienced by the i-th particle up to the current iteration n, i.e. its individual historical optimum; g^n denotes the best solution obtained by the entire particle swarm up to iteration n, i.e. the global optimum; v_i^(n+1) and x_i^(n+1) denote the velocity and position of the i-th particle at iteration n+1; and |R| denotes a random number drawn from the positive half of a Gaussian distribution;
The particle weights are updated using the observation likelihood, w_t^i ∝ w_(t−1)^i · p(z_t | x_t^i), and the weights are normalized; the system state estimate is output under the maximum a posteriori criterion. Here the observation likelihood function is p(z_t | x_t^i) = λ·exp(−ε(γ_t, x_t^i)), where λ is a constant normalizing factor and ε(γ_t, x_t^i) denotes the matching error value;
The sample set {x_t^i, w_t^i} is resampled according to the weight magnitudes, yielding a new equally weighted sample set {x_t^(i*), 1/I};
The state change of the three-dimensional hand model is output according to the changes of the sample set, and the particles continue to be iteratively evolved until tracking terminates.
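The initialization around the previous state, the iterative evolution toward individual and global optima, and the output of the best state described above can be sketched as a particle-swarm-style search. Everything below is an illustrative assumption: the coefficients, particle count, and the fitness function standing in for the negative matching error are not taken from the patent:

```python
import numpy as np

def pso_track(prev_state, fitness, n_particles=20, n_iter=30,
              sigma=0.1, w=0.7, c1=1.5, c2=1.5, rng=None):
    """Search for the state maximizing `fitness`, seeded near prev_state."""
    rng = np.random.default_rng(rng)
    dim = len(prev_state)
    # Initialize particles around the previous frame's optimum (prediction).
    x = prev_state + rng.normal(0.0, sigma, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()
    for _ in range(n_iter):
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        # Pull each particle toward its personal best and the global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g
```

In the tracker, `fitness` would be the negative state matching error of a rendered hand-model hypothesis against the current depth image.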
Preferably, in the above embodiment, in order to avoid premature convergence of the particles and the resulting poor performance, the diversity of the particles is increased after each particle iteration using a simulated-annealing acceptance step: each particle is perturbed by a random variation D′, and the perturbed particle x_i^(n+1) + D′ replaces the original one when

    exp( (μ(x_i^(n+1) + D′) − μ(p_i^(n+1))) / T_(n+1) ) > r_i

where p_i^(n+1) denotes the best position experienced by the i-th particle up to the current iteration n+1; x_i^(n+1) denotes the position of the i-th particle at iteration n+1; T_(n+1) denotes the annealing temperature of the (n+1)-th iteration, with T_(n+1) = α·T_n, where α denotes the cooling coefficient and α ∈ (0, 1); μ(x_i^(n+1) + D′) and μ(p_i^(n+1)) denote the fitness of the new particle and the individual best fitness of the old particle, with μ(·) = k(z|·) the observation likelihood function; and r_i denotes the set decision threshold.
In this preferred embodiment, the state features of the three-dimensional hand model are tracked under the particle filter framework in the manner described above, so that changes of the model's state features can be obtained accurately, with strong adaptability and high accuracy. By recording the changes of the particle sample set, the state changes of the three-dimensional hand model can be obtained accurately, laying a good foundation for the subsequent recognition of gestures.
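A temperature-controlled acceptance step of the kind described above can be sketched in Metropolis form; this form, and all names below, are assumptions for illustration rather than the patent's exact formula:

```python
import math
import random

def anneal_step(x, fitness, temperature, perturb, rng=random.Random()):
    """Perturb a particle and accept the candidate with a
    temperature-controlled probability (Metropolis-style)."""
    candidate = perturb(x)
    delta = fitness(candidate) - fitness(x)
    # Always accept improvements; accept worse candidates with probability
    # exp(delta / T), which shrinks as the temperature cools.
    if delta >= 0 or rng.random() < math.exp(delta / temperature):
        return candidate
    return x
```

Cooling the temperature geometrically (T_(n+1) = α·T_n) makes early iterations exploratory and later iterations conservative.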
Preferably, the state characteristic model is not limited to the three-dimensional hand model alone; objects such as a sphere can also be modeled, so that the subsequent matching unit and gesture tracking unit can match and track the three-dimensional hand model and the sphere model simultaneously, enabling tracking and recognition of the interaction between the human hand and an object.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solution of the present invention and do not limit its scope of protection. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solution of the present invention can be modified or equivalently replaced without departing from the essence and scope of the technical solution of the present invention.

Claims (5)

1. A presentation assistance system based on a depth camera, characterized by comprising: a data acquisition module, a position control module, a gesture recognition module, and a presentation function control module, wherein:
the data acquisition module includes a depth camera, which is mounted on a pan-tilt head whose rotation angle is controlled by a motor, and is configured to collect depth images of the user's gestures;
the position control module identifies the position of the user's gesture from the collected gesture depth images and steers the pan-tilt head accordingly, achieving real-time tracking;
the gesture recognition module recognizes the user's gestures from the collected gesture depth images and converts them into corresponding operation instructions;
the presentation function control module is configured to execute presentation assistance functions according to the operation instructions.
2. The presentation assistance system based on a depth camera according to claim 1, wherein the presentation assistance functions include page turning, picture display, and video playback.
3. The presentation assistance system based on a depth camera according to claim 1, wherein the gesture recognition module includes a model establishment unit, a gesture tracking unit, a matching unit, a gesture recognition unit, and an instruction conversion unit, wherein:
the model establishment unit is configured to establish a three-dimensional hand model and a state characteristic model in virtual space;
the matching unit is configured to match the gesture in the acquired gesture depth image against the three-dimensional hand model in real time, projecting changes of the user's gesture onto the three-dimensional hand model;
the gesture tracking unit is configured to track the three-dimensional hand model and obtain the state changes of the three-dimensional hand model;
the gesture recognition unit is configured to recognize the gesture expressed by the three-dimensional hand model according to its state changes and output the user gesture recognition result;
the instruction conversion unit is configured to output the corresponding operation instruction according to the user gesture recognition result.
4. The presentation assistance system based on a depth camera according to claim 3, wherein the state characteristic model includes: a three-dimensional hand model state feature x_h comprising 26 degrees of freedom, namely 6 global degrees of freedom and 20 local degrees of freedom; the 6 global degrees of freedom comprise 3 translational and 3 rotational degrees of freedom, represented by a fixed point at the center of the palm of the three-dimensional hand model; the movements of the 5 fingers correspond to the 20 local degrees of freedom: except for the thumb, the MCP joint of each finger has 1 flexion-extension and 1 abduction-adduction degree of freedom, while the MCP joint of the thumb has only 1 flexion-extension degree of freedom; the IP joint of the thumb and the PIP and DIP joints of the remaining 4 fingers each have 1 flexion-extension degree of freedom; and the TM joint of the thumb has 2 degrees of freedom.
5. The presentation assistance system based on a depth camera according to claim 4, wherein the matching unit specifically: when the user's gesture in the gesture depth image changes, makes corresponding state-change hypotheses for the three-dimensional hand model; computes the state matching error between each hypothesized state parameter of the three-dimensional hand model and the gesture in the image; chooses the state parameter with the minimum matching error as the optimal solution for the three-dimensional hand model; and updates the state of the three-dimensional hand model according to this optimal solution, keeping the three-dimensional hand model synchronously matched to the gesture in the image,
wherein the state matching error function used is of the form

    ε(γ, x_h) = ω1·E_D(γ, x_h) + ω2·E_S(γ, x_h) + ω3·E_T(x_h)

where ε(γ, x_h) denotes the matching error between the hand image γ and the three-dimensional hand model state feature hypothesis x_h; E_S(γ, x_h) is the silhouette feature term expressing the silhouette matching degree, which, with γ2(γ) the silhouette map of the hand image and r_S(x_h) the rendered silhouette map of the three-dimensional hand model, sums the pixel area belonging to γ2(γ) but not to r_S(x_h) and the pixel area belonging to r_S(x_h) but not to γ2(γ); E_D(γ, x_h) = Σ min(|γ1(γ) − r_D(x_h)|, T1) is the depth feature term, the per-pixel deviation between the hand depth image γ1(γ) and the depth map r_D(x_h) rendered from the state hypothesis, clamped by the set maximum depth deviation constant T1; E_T(x_h) = ||x_h^t − x_h^(t−1)||² is the state smoothness term, where x_h^t denotes the current frame's state feature hypothesis and x_h^(t−1) the state feature hypothesis selected in the previous frame; and ω1, ω2 and ω3 denote the weight factors of the depth feature term, the silhouette feature term, and the state smoothness term, respectively.
CN201810054910.6A 2018-01-19 2018-01-19 Presentation assistance system based on a depth camera Pending CN108089715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810054910.6A CN108089715A (en) Presentation assistance system based on a depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810054910.6A CN108089715A (en) Presentation assistance system based on a depth camera

Publications (1)

Publication Number Publication Date
CN108089715A true CN108089715A (en) 2018-05-29

Family

ID=62181693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810054910.6A Pending CN108089715A (en) Presentation assistance system based on a depth camera

Country Status (1)

Country Link
CN (1) CN108089715A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070058A (en) * 2019-04-25 2019-07-30 信利光电股份有限公司 A kind of vehicle-mounted gesture identifying device and system
CN110347266A (en) * 2019-07-23 2019-10-18 哈尔滨拓博科技有限公司 A kind of space gesture control device based on machine vision
CN111901518A (en) * 2020-06-23 2020-11-06 维沃移动通信有限公司 Display method and device and electronic equipment
CN114095648A (en) * 2020-11-30 2022-02-25 深圳卡多希科技有限公司 Method and device for controlling camera to rotate through gestures

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262783A (en) * 2011-08-16 2011-11-30 清华大学 Method and system for restructuring motion of three-dimensional gesture
CN102323859A (en) * 2011-09-08 2012-01-18 昆山市工业技术研究院有限责任公司 Teaching materials Play System and method based on gesture control
CN103279188A (en) * 2013-05-29 2013-09-04 山东大学 Method for operating and controlling PPT in non-contact mode based on Kinect
CN104156063A (en) * 2014-07-14 2014-11-19 济南大学 Gesture speed estimating method for three-dimensional interaction interface
CN204129723U (en) * 2014-09-25 2015-01-28 广州大学 A kind of classroom multimedia teaching apparatus mutual based on Kinect somatosensory
CN105589553A (en) * 2014-09-23 2016-05-18 上海影创信息科技有限公司 Gesture control method and system for intelligent equipment
CN106055091A (en) * 2016-05-16 2016-10-26 电子科技大学 Hand posture estimation method based on depth information and calibration method
CN106125928A (en) * 2016-06-24 2016-11-16 同济大学 PPT based on Kinect demonstrates aid system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李东年 (Li Dongnian): "Research on three-dimensional human hand motion tracking based on depth image sequences", China Doctoral Dissertations Full-text Database *


Similar Documents

Publication Publication Date Title
CN108777081B (en) Virtual dance teaching method and system
CN108089715A (en) A kind of demonstration auxiliary system based on depth camera
CN108161882B (en) Robot teaching reproduction method and device based on augmented reality
CN109034397A (en) Model training method, device, computer equipment and storage medium
CN106139564A (en) Image processing method and device
JPH10149445A (en) Device for visualizing physical operation analysis
CN110675453B (en) Self-positioning method for moving target in known scene
CN115933868B (en) Three-dimensional comprehensive teaching field system of turnover platform and working method thereof
CN109446952A (en) A kind of piano measure of supervision, device, computer equipment and storage medium
CN108256461A (en) A kind of gesture identifying device for virtual reality device
CN110232727A (en) A kind of continuous posture movement assessment intelligent algorithm
CN105243375A (en) Motion characteristics extraction method and device
CN109116984A (en) A kind of tool box for three-dimension interaction scene
CN110553650B (en) Mobile robot repositioning method based on small sample learning
Liu et al. Dynamic hand gesture recognition using LMC for flower and plant interaction
CN115576426A (en) Hand interaction method for mixed reality flight simulator
CN115761787A (en) Hand gesture measuring method with fusion constraints
CN116528016A (en) Audio/video synthesis method, server and readable storage medium
CN110858328B (en) Data acquisition method and device for simulating learning and storage medium
CN111104964B (en) Method, equipment and computer storage medium for matching music with action
CN116248920A (en) Virtual character live broadcast processing method, device and system
CN116749168A (en) Rehabilitation track acquisition method based on gesture teaching
CN113326751B (en) Hand 3D key point labeling method
CN115018962A (en) Human motion attitude data set generation method based on virtual character model
CN110321008B (en) Interaction method, device, equipment and storage medium based on AR model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180529