CN102622591B - 3D (three-dimensional) human posture capturing and simulating system - Google Patents

3D (three-dimensional) human posture capturing and simulating system

Info

Publication number
CN102622591B
Authority
CN
China
Prior art keywords
human body
human
body attitude
joint points
processing module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201210009368
Other languages
Chinese (zh)
Other versions
CN102622591A (en)
Inventor
杨毅
郑晓
吕宁一
曹沛晴
王若尧
左亮
王伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology (BIT)
Priority to CN201210009368
Publication of CN102622591A
Application granted
Publication of CN102622591B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The invention relates to a 3D (three-dimensional) human posture capturing and simulating system that comprises a human action capture module, a human joint point recognition and posture processing module, and a posture display module. The human action capture module comprises an RGB (red, green, blue) camera, an infrared transmitter and an infrared receiver. The RGB camera acquires information about the environment in front of the camera; the generated data frames are transmitted via a communication module to the joint point recognition and posture processing module, which produces a visible-light image. The infrared transmitter emits laser speckle in a fixed pattern. The infrared receiver receives the reflected infrared light; the generated infrared image data frames are transmitted via the communication module to the joint point recognition and posture processing module, which produces an infrared image. The system captures human actions in real time, shows them on a three-dimensional display, and thereby simulates human postures.

Description

3D human posture capturing and simulating system
Technical field
The present invention relates to a 3D human posture capturing and simulating system.
Background technology
In many application scenarios, such as physical education, behavior monitoring and motion analysis, it is desirable to present human posture accurately. This is difficult to achieve in 2D space; only in 3D space can the posture be shown without loss.
Human actions are rich and varied, and different actions often carry different meanings. If a person's actions can be recognized in real time and then reproduced in 3D space according to the recognition result, the posture can be rendered more accurately and vividly, greatly facilitating the understanding and study of the meaning of human actions.
Extracting a person's actions from a scene in real time has long been a research focus of academia at home and abroad. Most studies build a geometric model of the human skeleton, estimate the positions of the human joint points, and then obtain a match using a minimum-error principle. The posture obtained in this way can reflect a person's activity to a large extent, but few works display the obtained posture in three-dimensional space, so the result lacks a sense of depth and cannot imitate human actions vividly and intuitively. Aiming at these problems, the present invention designs a system that integrates human posture capture with 3D posture simulation, reflecting a person's activity more realistically with a lively and vivid presentation.
This 3D human posture capturing and simulating system can be applied in many fields. For example, it can be installed in a vehicle to capture the driver's driving actions, assisting the collection of detailed driving-behavior data and better serving intelligent transportation systems. It can be installed in a rehabilitation center for patients with limb injuries: a patient performs basic recovery exercises and then observes the posture shown on the three-dimensional display, which acts as feedback so that non-standard movements can be corrected, aiding recovery while also being entertaining. It can also show sequences such as gymnastics, martial arts and dancing on the three-dimensional display, allowing people to follow accurate posture guidance and reach the learning goal of performing the movements to standard.
Summary of the invention
The object of the present invention is to meet the above need of the prior art by providing a 3D human posture capturing and simulating system that can capture human actions in real time and show them on a three-dimensional display to simulate the human posture, and that can also show various human postures on demand for posture demonstration.
The 3D human posture capturing and simulating system of the present invention comprises a human action capture module, a human joint point recognition and posture processing module, and a posture display module. The human action capture module comprises an RGB camera, an infrared transmitter and an infrared receiver. The RGB camera acquires information about the environment in front of the camera; the generated data frames are sent via a communication module to the joint point recognition and posture processing module for processing, producing a visible-light image. The infrared transmitter emits laser speckle in a fixed pattern. The infrared receiver receives the infrared light; the generated infrared image data frames are passed via the communication module to the joint point recognition and posture processing module for processing, producing an infrared image. The speckle pattern at each pixel of the infrared image is cross-correlated with the speckle pattern at the corresponding pixel of the calibration infrared images, and the depth value of the calibration pixel with the highest correlation is taken as the depth value of that pixel, yielding a scene depth map. In the visible-light image, face recognition is used to locate the face, and the human skeleton geometric model is then used to estimate the body position, giving a human posture contour map. The posture display module is connected to the joint point recognition and posture processing module via the communication module; it receives that module's data commands and shows the human posture and actions in real time according to its instructions.
The joint point recognition and posture processing module comprises a data storage unit, a working mode selection unit and a data processing unit. The working mode selection unit provides a capture-and-imitate posture mode and an imitation display posture mode, and the posture is processed differently in each mode:
In the capture-and-imitate posture mode, the joint point recognition module obtains the spatial distribution of the human joint points, whose coordinates are actual physical space coordinates. The joint coordinates in the physical coordinate system are first transformed into the coordinate system of the three-dimensional display, and the transformed joint points are connected to form a complete human skeleton. The skeleton is then refined and corrected to make the posture more lifelike. Finally the controller merges the data and sends the posture data commands via the communication module in the format required by the three-dimensional display.
In the imitation display posture mode, the postures or actions to be shown are stored in a storage space or a data text in the format required by the three-dimensional display; the joint point recognition and posture processing module retrieves these data and sends them via the communication module to the three-dimensional display for display.
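By way of illustration only, the following minimal sketch shows one way this playback mode could be realized, assuming the stored postures are kept as a JSON list of per-frame joint coordinates in a data text file and that a hypothetical send_to_display() function stands in for the communication module; the file format and function names are assumptions, not part of the disclosure.

```python
import json
import time

def send_to_display(frame):
    """Hypothetical stand-in for the communication module that forwards one
    posture frame (a list of joint coordinates) to the three-dimensional display."""
    print("frame with", len(frame["joints"]), "joints sent to display")

def play_stored_postures(path, fps=10):
    """Imitation display mode sketch: read posture frames stored in a data text
    file and stream them to the three-dimensional display at a fixed rate."""
    with open(path, "r", encoding="utf-8") as f:
        frames = json.load(f)              # e.g. [{"joints": [[x, y, z], ...]}, ...]
    for frame in frames:
        send_to_display(frame)
        time.sleep(1.0 / fps)              # pace playback to the display's frame rate
```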
The fixed-pattern laser speckle is obtained by passing light through a set of diffraction gratings. This ensures that the speckle patterns at any two points in space are different, so the entire space can be marked; recording the entire space with the speckle patterns constitutes the light-source calibration.
The light-source calibration method is as follows: at every fixed distance a reference plane is taken and one infrared image is generated, each pixel of which carries a speckle pattern; a corresponding depth map is generated at the same time, and each depth value of the depth map corresponds to the speckle pattern of the corresponding pixel of the infrared image.
The posture display module comprises a three-dimensional display composed of M × N × P LED lamps. Each LED lamp is either lit or extinguished, and the human posture is shown by the pattern formed by the lit LEDs.
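As an illustration of this display model, the following sketch represents the M × N × P LED array as a boolean grid in which True means lit and False means extinguished; the class name and the refresh() placeholder for the display's data interface are assumptions introduced here for illustration.

```python
import numpy as np

class LedVolumeDisplay:
    """Sketch of the M x N x P three-dimensional display: each LED is either lit
    (True) or extinguished (False), and a posture is shown as a pattern of lit LEDs."""

    def __init__(self, m, n, p):
        self.grid = np.zeros((m, n, p), dtype=bool)

    def clear(self):
        """Extinguish every LED before drawing the next posture."""
        self.grid[:] = False

    def light(self, x, y, z):
        """Light a single LED, ignoring coordinates that fall outside the array."""
        if all(0 <= c < s for c, s in zip((x, y, z), self.grid.shape)):
            self.grid[x, y, z] = True

    def refresh(self):
        """Placeholder for pushing the on/off pattern through the data interface."""
        print(int(self.grid.sum()), "LEDs lit")
```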
Description of drawings
Fig. 1 is a control flow block diagram of the 3D human posture capturing and simulating system of the present invention;
Fig. 2 is a schematic diagram of the camera mounting arrangement of the 3D human posture capturing and simulating system of the present invention;
Fig. 3 is a schematic diagram of one distribution of human joint points in the 3D human posture capturing and simulating system of the present invention;
Fig. 4 is a structural diagram of the three-dimensional display of the 3D human posture capturing and simulating system of the present invention.
In the figures: 1 - infrared transmitter; 2 - RGB camera; 3 - infrared receiver; 4 - human joint point; 5 - first LED lamp; 6 - second LED lamp; 7 - third LED lamp; 8 - fourth LED lamp; 9 - fifth LED lamp; 10 - sixth LED lamp; 11 - seventh LED lamp; 12 - eighth LED lamp; 13 - power interface of the three-dimensional display; 14 - data interface of the three-dimensional display; 15 - three-dimensional display.
Embodiment
The preferred embodiment of the present invention is described in detail below with reference to the accompanying drawings.
The 3D human posture capturing and simulating system of the present invention is an instrument that captures a person's actions in real time and shows them on a three-dimensional display to simulate the human posture; it provides functions such as real-time monitoring of the joint point positions and display of the human posture. To suit occasions that only require recognition of part of the body, the number and positions of the joint points in the human skeleton geometric model can be adjusted, for example for capturing and imitating upper-limb actions.
Fig. 1 shows the control flow block diagram of the 3D human posture capturing and simulating system of this embodiment, which comprises a controller, the infrared transmitter 1, the RGB camera 2, the infrared receiver 3, a communication module and the three-dimensional display 15. The controller is connected to the RGB camera 2 and the infrared receiver 3 via the communication module, and is connected to the three-dimensional display 15 via the communication module. The resolution of the RGB camera 2 is 640 × 480 pixels, and the resolution of the infrared receiver 3 is 320 × 240 pixels.
After the cameras and the three-dimensional display 15 are installed and the power is switched on, the RGB camera 2 acquires information about the environment in front of the camera, generates data frames, and sends them via the communication module to the controller, which generates the visible-light image.
The infrared transmitter 1 emits laser speckle in a checkerboard-like pattern, which falls on the objects in the scene. To record the space with the speckle pattern, the space must be calibrated; here the space 3 to 4 m from the RGB camera is calibrated, and during testing the person is asked to stand within this range. Within the space 3 to 4 m from the RGB camera, 100 equally spaced reference planes are taken, and the infrared image and the depth image of each reference plane are recorded; each pixel of the infrared image carries a speckle pattern, and the depth value of each pixel of the depth image corresponds to the speckle pattern of the corresponding pixel of the infrared image.
The infrared receiver 3 receives the infrared light, generates data frames, and sends them via the communication module to the controller, which generates the infrared image. The speckle pattern represented by each pixel of the infrared image is then cross-correlated with the speckle pattern of the corresponding pixel of the calibration infrared images, and the depth value of the calibration pixel with the highest correlation is taken as the depth value of that pixel, thereby obtaining the scene depth map.
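A minimal sketch of this per-pixel depth lookup is given below. It assumes a calibration table of the kind sketched earlier (one speckle patch and one known depth per pixel per reference plane) and uses normalised cross-correlation as the similarity measure; the patent only specifies a cross-correlation operation, so the exact normalisation and the patch size are assumptions.

```python
import numpy as np

def depth_from_speckle(ir_image, table, patch=9):
    """For every pixel of the live infrared image, compare its speckle patch with
    the patch stored for the same pixel in each calibration plane and keep the
    depth of the plane whose patch has the highest correlation."""
    half = patch // 2
    padded = np.pad(ir_image.astype(np.float32), half, mode="edge")
    live = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))

    h, w = ir_image.shape
    best_corr = np.full((h, w), -np.inf, dtype=np.float32)
    depth_map = np.zeros((h, w), dtype=np.float32)

    a = live - live.mean(axis=(-2, -1), keepdims=True)
    for patches, depth in table:                      # one entry per reference plane
        b = patches - patches.mean(axis=(-2, -1), keepdims=True)
        num = (a * b).sum(axis=(-2, -1))
        den = np.sqrt((a * a).sum(axis=(-2, -1)) * (b * b).sum(axis=(-2, -1))) + 1e-6
        corr = num / den                              # normalised cross-correlation
        better = corr > best_corr
        best_corr[better] = corr[better]
        depth_map[better] = depth
    return depth_map
```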
In the visible-light image, face recognition is used to locate the face; the human skeleton geometric model is then used to estimate the body position, giving the human posture contour map.
Fig. 2 shows the camera mounting arrangement of the 3D human posture capturing and simulating system: the infrared transmitter 1, the infrared receiver 3 and the RGB camera 2 share one three-dimensional space coordinate system O-XYZ whose origin is the center of the RGB camera.
Fig. 3 shows one distribution of human joint points in the 3D human posture capturing and simulating system; it contains 20 joint points, and related joint points are connected to form the human skeleton. The infrared receiver 3 and the RGB camera 2 share the three-dimensional space coordinate system O-XYZ, and each pixel of the infrared receiver 3 corresponds to a 2 × 2 block of pixels of the RGB camera 2, so the human contour in the scene depth map can be obtained from the human contour in the visible-light image. Using the human skeleton geometric model together with the number and positions of the joint points, the positions of the human joint points in the scene depth map are obtained, giving the distribution of the joint points and their physical coordinates in the coordinate system O-XYZ.
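Given the 2 × 2 pixel relation described above (a 640 × 480 visible-light image against a 320 × 240 depth map), a contour point found in the visible-light image maps onto the depth map simply by halving its pixel coordinates. The sketch below illustrates this lookup; the function name and the (column, row) convention are assumptions made for illustration.

```python
def contour_to_depth_points(contour_pixels, depth_map, scale=2):
    """Map contour pixels found in the RGB image onto the lower-resolution depth
    map (one depth pixel covers a scale x scale block of RGB pixels) and return
    (u, v, depth) triples for the region covered by the body contour."""
    points = []
    for (u, v) in contour_pixels:            # (column, row) in the RGB image
        du, dv = u // scale, v // scale      # corresponding depth-map pixel
        if 0 <= dv < depth_map.shape[0] and 0 <= du < depth_map.shape[1]:
            points.append((du, dv, float(depth_map[dv, du])))
    return points
```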
Fig. 4 shows a structural diagram of the three-dimensional display of the 3D human posture capturing and simulating system. The three-dimensional display is composed of M × N × P LED lamps, each of which is either lit or extinguished; the control sequence of the LED lamps runs from LED lamp 5 to LED lamp 12. To index each LED lamp in the display conveniently, a spatial coordinate system O-X'Y'Z' is established, with integer values on each axis denoting LED positions. Because the obtained joint coordinates are physical space coordinates, a coordinate transform from O-XYZ to O-X'Y'Z' must be established before they can be shown on the display; the scale ratios from X to X', Y to Y' and Z to Z' must be identical so that the displayed posture is not deformed. The obtained joint coordinates are transformed into coordinates of points that can be shown in the display, the related joint points are connected to form a complete human skeleton, and the skeleton is then refined and corrected so that the posture shown on the display is more lively and vivid. The three-dimensional display 15 receives the data commands of the controller and lights or extinguishes the LED lamps according to the controller's instructions, thereby showing the human posture and actions in real time.
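To illustrate the coordinate transform and skeleton drawing described above, the sketch below maps joint coordinates from O-XYZ into the integer LED coordinate system O-X'Y'Z' with a single uniform scale (so the displayed posture keeps its proportions) and lights the LEDs along straight segments between connected joints. The bone connectivity list, the origin and scale values, and the LedVolumeDisplay class from the earlier sketch are illustrative assumptions rather than parts of the disclosure.

```python
import numpy as np

def to_display_coords(joints_xyz, origin, scale):
    """Transform joint coordinates (in O-XYZ, e.g. metres) into integer LED indices
    in O-X'Y'Z' using one uniform scale so the displayed posture is not deformed."""
    return [tuple(int(round((c - o) * scale)) for c, o in zip(j, origin))
            for j in joints_xyz]

def draw_segment(display, p0, p1, steps=20):
    """Light the LEDs lying along the straight line between two transformed joints."""
    for t in np.linspace(0.0, 1.0, steps):
        x, y, z = (int(round(a + t * (b - a))) for a, b in zip(p0, p1))
        display.light(x, y, z)

def show_skeleton(display, joints_xyz, bones, origin=(0.0, 0.0, 0.0), scale=4.0):
    """Connect related joint points into a complete skeleton and show it: 'bones'
    is a list of (i, j) joint-index pairs, e.g. (head, shoulder_center)."""
    display.clear()
    pts = to_display_coords(joints_xyz, origin, scale)
    for i, j in bones:
        draw_segment(display, pts[i], pts[j])
    display.refresh()
```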

Claims (6)

1. A 3D human posture capturing and simulating system, characterized by comprising a human action capture module, a human joint point recognition and posture processing module, and a posture display module; wherein the human action capture module comprises an RGB camera, an infrared transmitter and an infrared receiver; the RGB camera acquires information about the environment in front of the camera, and the generated data frames are sent via a communication module to the joint point recognition and posture processing module for processing, producing a visible-light image; the infrared transmitter emits laser speckle in a fixed pattern; the infrared receiver receives infrared light, and the generated infrared image data frames are passed via the communication module to the joint point recognition and posture processing module for processing, producing an infrared image; the speckle pattern at each pixel of the infrared image is cross-correlated with the speckle pattern at the corresponding pixel of the calibration infrared images, and the depth value of the calibration pixel with the highest correlation is taken as the depth value of that pixel, yielding a scene depth map; in the visible-light image, face recognition is used to locate the face, and the human skeleton geometric model is then used to estimate the body position, giving a human posture contour map; the posture display module is connected to the joint point recognition and posture processing module via the communication module, receives that module's data commands, and shows the human posture and actions in real time according to its instructions;
wherein the joint point recognition and posture processing module comprises a data storage unit, a working mode selection unit and a data processing unit; the working mode selection unit provides a capture-and-imitate posture mode and an imitation display posture mode, the posture being processed differently in each mode:
in the capture-and-imitate posture mode, the joint point recognition and posture processing module obtains the spatial distribution of the human joint points, whose coordinates are actual physical space coordinates; the joint coordinates in the physical coordinate system are first transformed into the coordinate system of the three-dimensional display, and the transformed joint points are connected to form a complete human skeleton; the skeleton is then refined and corrected to make the posture more lifelike; finally the controller merges the data and sends the posture data commands via the communication module in the format required by the three-dimensional display;
in the imitation display posture mode, the postures or actions to be shown are stored in a storage space or a data text in the format required by the three-dimensional display, and the joint point recognition and posture processing module retrieves these data and sends them via the communication module to the three-dimensional display for display.
2. The 3D human posture capturing and simulating system according to claim 1, characterized in that the fixed-pattern laser speckle is obtained by passing light through a set of diffraction gratings, which ensures that the speckle patterns at any two points in space are different, so that the entire space can be marked; recording the entire space with the speckle patterns constitutes the light-source calibration.
3. The 3D human posture capturing and simulating system according to claim 2, characterized in that the light-source calibration method is: at every fixed distance a reference plane is taken and one infrared image is generated, each pixel of which carries a speckle pattern; a corresponding depth map is generated at the same time, and each depth value of the depth map corresponds to the speckle pattern of the corresponding pixel of the infrared image.
4. The 3D human posture capturing and simulating system according to claim 1, characterized in that the posture display module comprises a three-dimensional display composed of M × N × P LED lamps, each LED lamp being either lit or extinguished, and the human posture is shown by the pattern formed by the lit LEDs.
5. The 3D human posture capturing and simulating system according to claim 1, characterized in that the infrared transmitter, the RGB camera and the infrared receiver are packaged together and share one three-dimensional space coordinate system O-XYZ, so that the obtained human joint coordinates are physical coordinates in the actual object space O-XYZ.
6. The 3D human posture capturing and simulating system according to claim 4, characterized in that each lit LED represents one joint point, ensuring real-time updating of the displayed posture.
CN 201210009368 2012-01-12 2012-01-12 3D (three-dimensional) human posture capturing and simulating system Expired - Fee Related CN102622591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201210009368 CN102622591B (en) 2012-01-12 2012-01-12 3D (three-dimensional) human posture capturing and simulating system


Publications (2)

Publication Number Publication Date
CN102622591A CN102622591A (en) 2012-08-01
CN102622591B true CN102622591B (en) 2013-09-25

Family

ID=46562501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201210009368 Expired - Fee Related CN102622591B (en) 2012-01-12 2012-01-12 3D (three-dimensional) human posture capturing and simulating system

Country Status (1)

Country Link
CN (1) CN102622591B (en)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102377634B1 (en) * 2012-09-10 2022-03-22 에이매스, 아이엔씨. Multi-dimensional data capture of an environment using plural devices
CN103148788B (en) * 2013-03-29 2015-06-17 宁凯 Motion-sensing peripheral equipment for remote recognition and height recognition method for human body
CN103345625B (en) * 2013-07-16 2016-08-17 江苏云知智能科技有限公司 A kind of three-dimensional image analysis method and system
CN103473530B (en) * 2013-08-30 2016-06-15 天津理工大学 Self adaptation action identification method based on multi views and multi-modal feature
EP3063496A4 (en) * 2013-10-24 2017-06-07 Ali Kord Motion capture system
CN104667510B (en) * 2015-02-09 2017-02-01 深圳泰山体育科技股份有限公司 Human motion test system
CN105023280A (en) * 2015-08-21 2015-11-04 武汉汇思聚创信息技术有限公司 Detection method based on 3D skeleton modeling
JP6688990B2 (en) * 2016-04-28 2020-04-28 パナソニックIpマネジメント株式会社 Identification device, identification method, identification program, and recording medium
CN106210474A (en) * 2016-08-12 2016-12-07 信利光电股份有限公司 A kind of image capture device, virtual reality device
CN108280807A (en) * 2017-01-05 2018-07-13 浙江舜宇智能光学技术有限公司 Monocular depth image collecting device and system and its image processing method
CN107229925A (en) * 2017-07-12 2017-10-03 深圳奥比中光科技有限公司 Conversed using ear recognition
CN107341481A (en) * 2017-07-12 2017-11-10 深圳奥比中光科技有限公司 It is identified using structure light image
CN107292283A (en) * 2017-07-12 2017-10-24 深圳奥比中光科技有限公司 Mix face identification method
CN107180238A (en) * 2017-07-27 2017-09-19 深圳市泰衡诺科技有限公司 A kind of image preview device and method of intelligent terminal
CN107491744B (en) * 2017-07-31 2021-03-02 Oppo广东移动通信有限公司 Human body identity recognition method and device, mobile terminal and storage medium
CN107480613B (en) * 2017-07-31 2021-03-02 Oppo广东移动通信有限公司 Face recognition method and device, mobile terminal and computer readable storage medium
CN107551551B (en) * 2017-08-09 2021-03-26 Oppo广东移动通信有限公司 Game effect construction method and device
CN109961454A (en) * 2017-12-22 2019-07-02 北京中科华正电气有限公司 Human-computer interaction device and processing method in a kind of embedded intelligence machine
CN108234874B (en) 2018-01-10 2020-07-21 南京华捷艾米软件科技有限公司 Method and device for adjusting imaging precision of somatosensory camera
CN108294731A (en) * 2018-01-19 2018-07-20 深圳禾思众成科技有限公司 A kind of thermal imaging physiology-detecting system
CN110191266B (en) * 2018-04-28 2021-08-31 Oppo广东移动通信有限公司 Data processing method and device, electronic equipment and computer readable storage medium
ES2938471T3 (en) 2018-04-28 2023-04-11 Guangdong Oppo Mobile Telecommunications Corp Ltd Data processing method, electronic device and computer-readable storage medium
CN108594865A (en) * 2018-05-17 2018-09-28 广州悦享环球文化科技有限公司 A kind of control robot imaging system and method
CN109077731B (en) * 2018-06-11 2022-06-14 上海大学 Human body detection method and device based on vision
CN110659543B (en) * 2018-06-29 2023-07-14 比亚迪股份有限公司 Gesture recognition-based vehicle control method and system and vehicle
CN109269483B (en) * 2018-09-20 2020-12-15 国家体育总局体育科学研究所 Calibration method, calibration system and calibration base station for motion capture node
CN109743809A (en) * 2018-12-19 2019-05-10 重庆秉为科技有限公司 A method of LED is controlled according to human body attitude and is lighted
CN109621401A (en) * 2018-12-29 2019-04-16 广州明朝互动科技股份有限公司 A kind of interaction game system and control method
CN109833608B (en) * 2018-12-29 2021-06-22 南京华捷艾米软件科技有限公司 Dance action teaching and assisting method and system based on 3D motion sensing camera
CN109829422A (en) * 2019-01-28 2019-05-31 哈尔滨工业大学 A kind of video frequency identifying method based on the movement of impulsive neural networks falling over of human body
CN110009717B (en) * 2019-04-01 2020-11-03 江南大学 Animation figure binding recording system based on monocular depth map
CN110084214B (en) * 2019-04-30 2021-09-28 内江师范学院 Action identification method based on induction type deep learning
CN110427864B (en) * 2019-07-29 2023-04-21 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
CN110821379A (en) * 2019-11-27 2020-02-21 江苏智动装备技术研究院有限公司 Self-induction type intelligent electric shutter
CN111067597B (en) * 2019-12-10 2021-04-16 山东大学 System for determining puncture path according to human body posture in tumor puncture
CN111182181A (en) * 2019-12-26 2020-05-19 武汉西山艺创文化有限公司 Wireless virtual device and system based on 5G communication
CN111427697B (en) * 2020-03-18 2023-06-06 深圳市瑞立视多媒体科技有限公司 Multi-thread-based motion capture method, device, equipment and storage medium
CN113495490A (en) * 2020-04-07 2021-10-12 深圳爱根斯通科技有限公司 Device control method, device, electronic device and storage medium
CN112515661B (en) * 2020-11-30 2021-09-14 魔珐(上海)信息科技有限公司 Posture capturing method and device, electronic equipment and storage medium
CN112416133B (en) * 2020-11-30 2021-10-15 魔珐(上海)信息科技有限公司 Hand motion capture method and device, electronic equipment and storage medium
CN112560605B (en) * 2020-12-02 2023-04-18 北京字节跳动网络技术有限公司 Interaction method, device, terminal, server and storage medium
KR20220093871A (en) * 2020-12-28 2022-07-05 한국과학기술원 Hand pose estimation method with a single infrared camera via domain transfer learning and the system thereof
CN114260886A (en) * 2021-10-28 2022-04-01 南京中电智慧科技有限公司 Intelligent recognition and mechanical grabbing unmanned supervision transportation system
CN114267153B (en) * 2021-12-29 2024-03-26 苏州英鹏信息科技有限公司 Household safety monitoring management system
CN115115810B (en) * 2022-06-29 2023-06-02 广东工业大学 Multi-person cooperative focus positioning and enhanced display method based on space gesture capturing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Abhishek Kar. Skeletal Tracking using Microsoft Kinect. 2011, p. 2 Section 3 to p. 8 Section 4.6, Figs. 3 and 9. *
K. K. Biswas et al. Gesture Recognition using Microsoft Kinect. Proceedings of the 5th International Conference on Automation, Robotics and Applications, 8 December 2011, entire document. *
Luo Yuan et al. Design and Implementation of a Gesture Control System for an Intelligent Wheelchair Based on the Kinect Sensor. Robot, January 2012, Vol. 34, No. 1, Section 2. *

Also Published As

Publication number Publication date
CN102622591A (en) 2012-08-01

Similar Documents

Publication Publication Date Title
CN102622591B (en) 3D (three-dimensional) human posture capturing and simulating system
CN106363637B (en) A kind of quick teaching method of robot and device
CN102871784B (en) Positioning controlling apparatus and method
CN102221887B (en) Interactive projection system and method
CN110969905A (en) Remote teaching interaction and teaching aid interaction system for mixed reality and interaction method thereof
CN106843507B (en) Virtual reality multi-person interaction method and system
WO2014010004A1 (en) Input device, information processing system, information processing device and information processing method
CN104200491A (en) Motion posture correcting system for human body
CN105190703A (en) Using photometric stereo for 3D environment modeling
CN105183161A (en) Synchronized moving method for user in real environment and virtual environment
CN107154197A (en) Immersion flight simulator
CN109531566A (en) A kind of robot livewire work control method based on virtual reality system
CN206105869U (en) Quick teaching apparatus of robot
CN106293087A (en) A kind of information interacting method and electronic equipment
CN111309154A (en) Remote human body action interactive education system based on 5G
CN208985412U (en) Wisdom education physical environment system
CN106846350A (en) One kind is based on RGB D cameras and stereosonic visually impaired people's barrier early warning system and method
CN110058696A (en) A kind of virtual reality implementation method and its application method and correlation technique device
CN112102667A (en) Video teaching system and method based on VR interaction
CN105160970A (en) Remote control actual and virtual experiment system
CN109445598A (en) A kind of augmented reality system and device of view-based access control model
CN113110742A (en) AR multi-person interaction industrial robot teaching system based on SLAM positioning technology
CN205158748U (en) Remote control actual situation experimental apparatus
EP3729235B1 (en) Data processing
CN107426531A (en) Posture visual angle remote control remote viewing intelligent helmet, method, camera device and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130925

Termination date: 20140112