CN103218826B - Kinect-based projectile detection, three-dimensional localization and trajectory prediction method

Kinect-based projectile detection, three-dimensional localization and trajectory prediction method

Info

Publication number
CN103218826B
CN103218826B (application CN201310087780.3A)
Authority
CN
China
Prior art keywords
projectile
kinect
pixel
depth
background
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310087780.3A
Other languages
Chinese (zh)
Other versions
CN103218826A (en)
Inventor
陶熠昆
王聪颖
王宏涛
周连杰
Current Assignee
ZHEJIANG SUPCON RESEARCH Co Ltd
Zhejiang Guozi Robot Technology Co Ltd
Original Assignee
ZHEJIANG SUPCON RESEARCH Co Ltd
Zhejiang Guozi Robot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by ZHEJIANG SUPCON RESEARCH Co Ltd and Zhejiang Guozi Robot Technology Co Ltd
Priority to CN201310087780.3A
Publication of CN103218826A
Application granted
Publication of CN103218826B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

A Kinect-based projectile detection, three-dimensional localization and trajectory prediction method comprises the following steps: S1, depth background modeling, including: S101, quantizing all depth values of the background pixels and normalizing them to the range 0-255; S102, for each background pixel, using a Boolean array to store all depth values observed at that pixel over a period of time x, obtaining a data volume that serves as the background model; S2, obtaining the depth information of the projectile image through the Kinect API function; S3, performing foreground extraction on the depth information of the projectile image to obtain a foreground image; S4, dividing the foreground image into several independent connected components; S5, judging whether a connected component is the projectile, and calculating the projectile's three-dimensional coordinates through the calibrated pinhole model of the Kinect depth camera; S6, filtering with the air drag coefficient as one of the filter state quantities, and predicting the projectile's trajectory.

Description

Kinect-based projectile detection, three-dimensional localization and trajectory prediction method
Technical field
The present invention relates to the field of projectile detection and trajectory prediction, and in particular to a Kinect-based projectile detection, three-dimensional localization and trajectory prediction method. It can be applied to trajectory data analysis for balls in ball games (table tennis, badminton, etc.), to robot systems that act on a projectile's trajectory, and to other fields that use this technology.
Background technology
There are many schemes for projectile detection and trajectory prediction, and stereo vision is currently the most common. Vision systems, however, are strongly affected by changes in illumination and environment; although many techniques can adapt to such changes, this remains a very challenging problem. In addition, a binocular vision system places high demands on processor computing power, its algorithms are extremely complex, and its cost is high.
Kinect is a motion-sensing device released by Microsoft, generally used to develop somatosensory interaction games running on the Xbox 360 game console. In technical essence, Kinect is an RGB-D sensor, and Microsoft also provides algorithms for human detection and for detecting the position of each joint of the human body.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention provides a Kinect-based projectile detection, three-dimensional localization and trajectory prediction method for perceiving the projectile's three-dimensional coordinates. Compared with traditional stereo vision methods, it is more reliable under changing illumination, requires neither stereo camera calibration nor a stereo matching algorithm, and the whole scheme is simple, stable and relatively low-cost.
The present invention is achieved through the following technical solutions:
A Kinect-based projectile detection, three-dimensional localization and trajectory prediction method comprises the following steps:
S1, depth background modeling, including:
S101, quantizing all depth values of the background pixels and normalizing them to the range 0-255;
S102, for each background pixel, using a Boolean array to store all depth values observed at that pixel over a period of time x, obtaining a data volume that serves as the background model;
S2, obtaining the depth information of the image through the Kinect API function;
S3, performing foreground extraction on the depth information of the projectile image to obtain a foreground image;
S4, dividing the foreground image into several independent connected components;
S5, judging whether a connected component is the projectile, and calculating the projectile's three-dimensional coordinates through the calibrated pinhole model of the Kinect depth camera;
S6, filtering with the air drag coefficient as one of the filter state quantities, and predicting the projectile's trajectory.
Preferably, the size of the Boolean array is 255, and the data volume is width × height × 255.
Preferably, the foreground extraction includes: letting a be the value of the depth at a pixel coordinate (x, y) after quantization to 0-255, and judging whether any of the entries a-3 to a+3 in the Boolean array of the pixel (x, y) corresponding to that coordinate in the background model is "true"; if so, the pixel is background, otherwise it is foreground.
Preferably, step S4 divides the foreground image into several independent connected components using a seed-filling method.
Preferably, the filtering uses a Kalman filter.
Preferably, the depth background modeling uses codebook background modeling.
Preferably, x is 10 minutes.
Brief description of the drawings
Fig. 1 is a flow chart of the Kinect-based projectile detection, three-dimensional localization and trajectory prediction method of the present invention.
Fig. 2 shows a preferred embodiment of the Kinect-based projectile detection, three-dimensional localization and trajectory prediction method of the present invention.
Detailed description of the invention
The present invention is elaborated below in conjunction with an embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation, but the protection scope of the present invention is not limited to the following embodiment.
Referring to Fig. 1, the flow chart of the present invention, the method is realized according to the following steps:
1. Depth background modeling
Considering that the projectile may be detected in an environment whose background is complex but relatively fixed, the background must be modeled; the purpose of background modeling is to distinguish background from foreground as well as possible. Since the background may not be completely static, the present invention uses a codebook to describe the background states of interest. The codebook is a common means of background modeling, and the present invention does not restate or limit it here. The specific practice is as follows:
Quantize all possible depth values and normalize them to the range 0-255;
For each pixel in the image, collect its depth values over a period of time (a typical value is 10 minutes). All depth values collected at this pixel during these 10 minutes are stored in a Boolean array of size 255;
Perform the above operation on all pixels in the image to obtain a data volume of (image width × image height × 255), which serves as the final background model.
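To make the procedure above concrete, the following is a minimal Python sketch of the per-pixel Boolean depth background model; it is illustrative only and not part of the claimed method. The frame source, the assumed sensor range max_depth_mm and the function names are assumptions, and the sketch allocates 256 flags per pixel so that every value of the 0-255 range has a slot (the text quotes an array size of 255).

```python
# Illustrative sketch of steps S101-S102 (helper names and sensor range are assumptions).
import numpy as np

def quantize_depth(depth_mm, max_depth_mm=4000):
    """Quantize and normalize raw depth values (millimetres) into the 0-255 range."""
    q = np.clip(depth_mm.astype(np.float32) / max_depth_mm, 0.0, 1.0)
    return (q * 255).astype(np.uint8)

def build_background_model(frames, max_depth_mm=4000):
    """Mark, for every pixel, which quantized depth values were observed
    during the modelling period; the result is the Boolean background volume."""
    model = None
    for depth_mm in frames:
        q = quantize_depth(depth_mm, max_depth_mm)
        if model is None:
            h, w = q.shape
            model = np.zeros((h, w, 256), dtype=bool)   # height x width x depth-flag volume
            rows, cols = np.indices((h, w))
        model[rows, cols, q] = True   # set the flag of the observed value at each pixel
    return model
```

Feeding roughly ten minutes of depth frames into build_background_model yields the width × height × depth-flag volume used as the background model.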
2. Depth map acquisition
The depth information of the image can be obtained directly using the official Kinect API functions.
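The patent relies on the official Kinect API for this step. Purely as an illustration, the open-source libfreenect Python wrapper offers an equivalent call; its use here is an assumption, not the API the patent refers to.

```python
# Illustrative depth acquisition via the libfreenect Python wrapper (an assumed
# stand-in for the official Kinect API mentioned in the text).
import freenect

def get_depth_frame():
    """Grab one depth frame from the first connected Kinect as a numpy array."""
    depth, _timestamp = freenect.sync_get_depth()   # raw 11-bit depth values
    return depth
```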
3. Foreground extraction
Applying the background model to the input image extracts the foreground, i.e. the background parts are weeded out of the image. The extraction method is as follows: let a be the value of the depth at pixel coordinate (x, y) after quantization to 0-255; then check the seven Boolean values from entry a-3 to entry a+3 of the Boolean array of the corresponding pixel (x, y) in the background model. If any of them is "true", the pixel is considered background; otherwise it is foreground.
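A minimal sketch of this pixel test, assuming the quantized depth image and the Boolean background volume produced in the previous steps; the function and parameter names are illustrative.

```python
# Sketch of the foreground test: a pixel whose quantized depth value a matches
# any background flag in the range a-3 ... a+3 is background, otherwise foreground.
import numpy as np

def extract_foreground(depth_q, model, tolerance=3):
    """depth_q: quantized depth image (uint8, values 0-255);
    model: Boolean array of shape (height, width, 256) from background modelling."""
    h, w, _ = model.shape
    rows, cols = np.indices((h, w))
    foreground = np.ones((h, w), dtype=bool)
    for offset in range(-tolerance, tolerance + 1):
        idx = np.clip(depth_q.astype(np.int32) + offset, 0, 255)
        seen = model[rows, cols, idx]   # was the value a+offset ever observed here?
        foreground &= ~seen             # any hit marks the pixel as background
    return foreground
```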
4. Image segmentation
The purpose of image segmentation is to divide the extracted foreground image into independent connected components. The present invention uses a seed-filling method to segment the foreground into mutually independent connected components.
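A minimal seed-filling sketch for this step, assuming 4-connectivity on a Boolean foreground mask; the labelling convention is an illustrative choice.

```python
# Seed filling (flood fill) of the foreground mask into independent connected components.
from collections import deque
import numpy as np

def label_components(foreground):
    """foreground: Boolean mask. Returns (labels, n) where labels is an integer
    image (0 = background, 1..n = component ids)."""
    h, w = foreground.shape
    labels = np.zeros((h, w), dtype=np.int32)
    n = 0
    for y in range(h):
        for x in range(w):
            if foreground[y, x] and labels[y, x] == 0:
                n += 1                                   # new seed, new component
                queue = deque([(y, x)])
                labels[y, x] = n
                while queue:                             # grow until the component is filled
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and foreground[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = n
                            queue.append((ny, nx))
    return labels, n
```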
5. Projectile identification
Projectile identification examines each connected component obtained from the segmentation and decides whether it belongs to the projectile. The criteria include whether the size of the connected region, the shape features of the connected region, and the motion of the connected region conform to the general rules of projectile motion. The projectile is identified according to these criteria, and its three-dimensional coordinates are then calculated through the calibrated pinhole model of the Kinect depth camera.
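For the coordinate calculation, the following sketch back-projects the projectile's pixel centroid and depth through a pinhole camera model; the intrinsic parameters shown are nominal Kinect values assumed for illustration, since the patent does not give calibration numbers.

```python
# Back-projection of a pixel (u, v) with measured depth into camera coordinates.
import numpy as np

FX, FY = 580.0, 580.0   # assumed focal lengths in pixels (nominal Kinect values)
CX, CY = 320.0, 240.0   # assumed principal point for a 640x480 depth image

def pixel_to_camera(u, v, depth_mm):
    """Return the 3-D point (mm) in the depth camera frame for pixel (u, v)."""
    z = float(depth_mm)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])
```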
6. Trajectory prediction based on Kalman filtering
The present invention uses a Kalman filter to predict the projectile's trajectory. It should be noted that, considering the uncertainty of the projectile's air drag model and drag coefficient, the present invention estimates the projectile's air drag coefficient together with the other state quantities of the Kalman filter (an existing filtering technique that the present invention does not restate or limit here), so that the method can adapt to various types of projectiles.
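As an illustration of estimating the drag coefficient as part of the filter state, the sketch below uses an extended Kalman filter with a quadratic-drag process model, a = g - k·|v|·v, and the state x = [px, py, pz, vx, vy, vz, k]. The state layout, noise levels, axis convention, units (metres) and the numerical Jacobian are assumptions made for this sketch, not details fixed by the patent.

```python
# Extended Kalman filter sketch with the drag coefficient k as an extra state.
import numpy as np

G = np.array([0.0, -9.81, 0.0])   # gravity; the axis convention is an assumption

def f(x, dt):
    """Process model: gravity plus quadratic drag, constant drag coefficient."""
    p, v, k = x[:3], x[3:6], x[6]
    a = G - k * np.linalg.norm(v) * v
    return np.concatenate([p + v * dt + 0.5 * a * dt ** 2, v + a * dt, [k]])

def numerical_jacobian(func, x, dt, eps=1e-6):
    """Finite-difference Jacobian of the process model (keeps the sketch short)."""
    n = x.size
    J = np.zeros((n, n))
    fx = func(x, dt)
    for i in range(n):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (func(xp, dt) - fx) / eps
    return J

H = np.hstack([np.eye(3), np.zeros((3, 4))])   # only the 3-D position is measured

def ekf_step(x, P, z, dt, q=1e-2, r=1e-3):
    """One predict/update cycle; z is the measured 3-D position of the projectile."""
    F = numerical_jacobian(f, x, dt)
    x_pred = f(x, dt)
    P_pred = F @ P @ F.T + q * np.eye(7)
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + r * np.eye(3)
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(7) - K @ H) @ P_pred
    return x_new, P_new
```

Once the drag coefficient has converged, repeatedly applying f(x, dt) without measurement updates propagates the estimate forward in time and yields the predicted trajectory.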
Referring to Fig. 2, to help those skilled in the art understand the present invention, a preferred embodiment of the Kinect-based projectile detection, three-dimensional localization and trajectory prediction method is presented below, but the present invention is not limited thereto.
Taking table tennis trajectory prediction as an example, the required equipment includes: a table tennis table; Kinect devices (two Kinect devices are employed here in order to cover the table more fully, which the present invention does not limit); and signal processing equipment connected to the two Kinect devices for signal processing and trajectory prediction.
(1) The main functions of this embodiment include:
1. Table tennis trajectory replay and speed measurement: the system records the three-dimensional trajectory of the ball in real time, and during broadcast can play back the trajectory sequence and the real-time speed in slow motion, achieving multi-angle reproduction of match information;
2. Automatic table tennis scoring: by judging the trajectory and comparing it against the basic rules, scoring is performed automatically;
3. Table tennis refereeing aid: for situations that are hard for the human eye to judge, this system can be used to reach an accurate decision.
(2) The usage flow of this embodiment is as follows:
1. Background modeling: after the table is set up, the background is modeled outside match time;
2. Air drag parameter identification: athletes train and compete, and during play the system learns the air drag parameters of the table tennis ball;
3. Actual use in matches.
In addition, on the basis of the above embodiment, the present invention can be extended to a table tennis robot by adding a robot that replaces an athlete. Using the Kinect devices' prediction of the ball's trajectory, the robot plans the motion of its manipulator to return the serve. One robot can play against a human, and two robots can play against each other; this can be applied to science-center exhibitions or other fields.
The present invention has two main advantages:
1) Reliability: Kinect uses structured light and is therefore highly robust to changes in illumination and environment, whereas ordinary vision systems are not. Moreover, a stereo vision system computes depth information by indirect solving, while Kinect obtains depth information directly, which greatly reduces the computational cost.
2) Cost: as a consumer somatosensory interaction product released by Microsoft, Kinect is very inexpensive. If a stereo vision scheme were used and had to adapt to certain illumination and environmental changes, the cost of the cameras and the processor would far exceed that of the scheme set forth in the present invention.
What is disclosed above is only a specific embodiment of the application, but the application is not limited thereto; any variation that a person skilled in the art can readily conceive shall fall within the protection scope of the application.

Claims (6)

1. A Kinect-based projectile detection, three-dimensional localization and trajectory prediction method, characterized in that it comprises the following steps:
S1, depth background modeling, including:
S101, quantizing all depth values of the background pixels and normalizing them to the range 0-255;
S102, for each background pixel, using a Boolean array to store all depth values observed at that pixel over a period of time t, obtaining a data volume that serves as the background model;
S2, obtaining the depth information of the image through the Kinect API function;
S3, performing foreground extraction on the depth information of the image, this depth information including at least the depth values of step S101 and step S102, to obtain a foreground image;
S4, dividing the foreground image into several independent connected components;
S5, judging whether said connected component is the projectile, and calculating the three-dimensional coordinates of said projectile through the calibrated pinhole model of the Kinect depth camera;
S6, filtering with the air drag coefficient as one of the filter state quantities, and predicting the projectile's trajectory;
said foreground extraction including: letting a be the value of the depth at a pixel coordinate (x, y) after quantization to 0-255, and judging whether any of the entries a-3 to a+3 in the Boolean array of the pixel (x, y) corresponding to said pixel coordinate in the background model is "true"; if so, the pixel is background, otherwise it is foreground.
2. The Kinect-based projectile detection, three-dimensional localization and trajectory prediction method according to claim 1, characterized in that the size of said Boolean array is 255, and said data volume is width × height × 255.
3. The Kinect-based projectile detection, three-dimensional localization and trajectory prediction method according to claim 1, characterized in that said step S4 divides the foreground image into several independent connected components using a seed-filling method.
4. The Kinect-based projectile detection, three-dimensional localization and trajectory prediction method according to claim 1, characterized in that said filtering uses a Kalman filter.
5. The Kinect-based projectile detection, three-dimensional localization and trajectory prediction method according to claim 1, characterized in that said depth background modeling uses codebook background modeling.
6. The Kinect-based projectile detection, three-dimensional localization and trajectory prediction method according to claim 1, characterized in that said t is 10 minutes.
CN201310087780.3A 2013-03-19 2013-03-19 Kinect-based projectile detection, three-dimensional localization and trajectory prediction method Expired - Fee Related CN103218826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310087780.3A CN103218826B (en) 2013-03-19 2013-03-19 Kinect-based projectile detection, three-dimensional localization and trajectory prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310087780.3A CN103218826B (en) 2013-03-19 2013-03-19 Kinect-based projectile detection, three-dimensional localization and trajectory prediction method

Publications (2)

Publication Number Publication Date
CN103218826A CN103218826A (en) 2013-07-24
CN103218826B true CN103218826B (en) 2016-08-10

Family

ID=48816569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310087780.3A Expired - Fee Related CN103218826B (en) 2013-03-19 2013-03-19 Kinect-based projectile detection, three-dimensional localization and trajectory prediction method

Country Status (1)

Country Link
CN (1) CN103218826B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886618A (en) * 2014-03-12 2014-06-25 新奥特(北京)视频技术有限公司 Ball detection method and device
CN105319991B (en) * 2015-11-25 2018-08-28 哈尔滨工业大学 A kind of robot environment's identification and job control method based on Kinect visual informations
CN110553628A (en) * 2019-08-28 2019-12-10 华南理工大学 Depth camera-based flying object capturing method
CN110688965B (en) * 2019-09-30 2023-07-21 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision

Citations (1)

Publication number Priority date Publication date Assignee Title
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP4729812B2 (en) * 2001-06-27 2011-07-20 ソニー株式会社 Image processing apparatus and method, recording medium, and program

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay

Non-Patent Citations (2)

Title
Gesture Recognition using Microsoft Kinect; K. K. Biswas et al.; Proceedings of the 5th International Conference on Automation, Robotics and Applications; 2011-12-08; pp. 100-103 *
Gesture trajectory recognition and application based on Kinect depth image information; Zhang Yi et al.; Application Research of Computers; 2012-09-15; Vol. 29, No. 9; pp. 2547-3550 *

Also Published As

Publication number Publication date
CN103218826A (en) 2013-07-24

Similar Documents

Publication Publication Date Title
US11532172B2 (en) Enhanced training of machine learning systems based on automatically generated realistic gameplay information
US11373354B2 (en) Techniques for rendering three-dimensional animated graphics from video
Bloom et al. G3D: A gaming action dataset and real time action recognition evaluation framework
CN106647742B (en) Movement routine method and device for planning
TWI497346B (en) Human tracking system
CN102448561B (en) Gesture coach
KR102106135B1 (en) Apparatus and method for providing application service by using action recognition
CN102413886A (en) Show body position
CN103608844A (en) Fully automatic dynamic articulated model calibration
CN107220608B (en) Basketball action model reconstruction and defense guidance system and method
CN104021590A (en) Virtual try-on system and virtual try-on method
CN103218826B (en) Projectile based on Kinect detection, three-dimensional localization and trajectory predictions method
CN107341351A (en) Intelligent body-building method, apparatus and system
CN106056089A (en) Three-dimensional posture recognition method and system
CN104035557A (en) Kinect action identification method based on joint activeness
CN107551554A (en) Indoor sport scene simulation system and method are realized based on virtual reality
CN105749525A (en) Basketball training device based on AR technology
CN107909889B (en) Gobang man-machine playing experiment teaching system based on visual guidance
KR100907704B1 (en) Golfer's posture correction system using artificial caddy and golfer's posture correction method using it
CN110348370B (en) Augmented reality system and method for human body action recognition
CN107945206A (en) A kind of mobile object track determination methods of view-based access control model sensor
CN115475373B (en) Display method and device of motion data, storage medium and electronic device
CN205880817U (en) AR and VR data processing equipment
CN205598586U (en) Basketball training device based on AR technique
CN108905199A (en) A kind of game skill acquisition and game skill upgrade method based on AR

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160810

Termination date: 20170319