CN102194248A - Method for detecting and responding false-true collision based on augmented reality


Info

Publication number: CN102194248A
Application number: CN 201110114973
Authority: CN
Original language: Chinese (zh)
Classification: Processing Or Creating Images
Inventors: 陈明 (Chen Ming), 凌晨 (Ling Chen), 张文俊 (Zhang Wenjun)
Applicant and assignee: University of Shanghai for Science and Technology
Priority and filing date: 2011-05-05
Publication date: 2011-09-21
Legal status: Pending at publication; later deemed withdrawn (see Legal Events)

Abstract

The invention discloses a method for detecting and responding to virtual-real collisions based on augmented reality, which comprises the following steps: 1) preprocessing the image of a real object acquired by a camera; 2) performing pose estimation and motion estimation on the real object; 3) performing virtual-real collision detection according to the positional relationship between the centroid of a virtual object and the collision plane; 4) responding to the virtual-real collision according to the detection result; and 5) modifying the virtual image, rendering and outputting it. Because the three-dimensional collision detection requires computing only four feature points of the real object, the method achieves a comparatively realistic three-dimensional virtual-real collision response at low computational complexity, realizing monocular-vision three-dimensional virtual-real collision detection and response.

Description

Virtual-real collision detection and response method based on augmented reality
Technical field
The present invention relates to augmented reality virtual-real collision technology, and specifically to a monocular-vision method, based on augmented reality, for detecting and responding to collisions between a virtual object of arbitrary shape and a real object.
Background art
As people's demands on interactive experience keep growing, augmented reality (Augmented Reality, AR) has developed rapidly, and vision-based AR systems, with their modest hardware requirements, have become the application mainstream. In AR systems and applications, real objects and virtual objects interact with each other, so the virtual-real collision problem is unavoidable. Virtual-real collision detection and response make AR applications more natural, realistic and accurate, and thus make free virtual-real human-computer interaction possible. Early AR systems generally used markers to assist registration and localization; driven by the demand for natural interaction, markerless AR systems have since developed greatly. Meanwhile, porting AR applications to mobile devices has become a trend. All of this places higher requirements on virtual-real collision detection and response algorithms.
Regarding collision detection research, consider the following Chinese patents: "A large-scale virtual scene collision detection method based on balanced binary trees", application number CN200910086719.0; "Parallel collision detection method for real-time interactive operation in complex scenes", application number CN200710043743.7; "A method for realizing 3D game collision detection at the server side", application number CN200710117826.6. These are, first of all, collision detection methods for large-scale complex virtual scenes, i.e., virtual-object-to-virtual-object collision detection, and are unsuitable for virtual-object-to-real-object collision detection. Secondly, they rely on hardware for their implementation and are unsuitable for porting to mobile devices. Further Chinese patents: "Real-time collision detection using shearing", application number CN200780048182.8; "A continuous collision detection method based on ellipsoid scanning", application number CN200910087900.3; "A flexible-fabric self-collision detection method based on quadtree bounding boxes", application number CN200910087902.2; "A parallel collision detection method based on subdivision", application number CN200810202774.7. These collision detection algorithms mainly target virtual-object-to-virtual-object collision in virtual scenes and are unsuitable for virtual-object-to-real-object collision detection in augmented reality.
At present, there is no domestic or international research or report on three-dimensional virtual-real collision detection and response based on monocular vision. The above patents and related studies all rely on hardware devices, or study virtual reality only; none uses a single cheap camera for virtual-real collision research. Even the existing research results on virtual-real collision address only the two-dimensional case and are difficult to extend to three-dimensional environments.
Summary of the invention
In view of the problems and deficiencies of the prior art, the object of the present invention is to provide a virtual-real collision detection and response method based on augmented reality. The method captures the real scene with a single camera and, through simple image segmentation and feature point extraction, obtains the coordinates of feature points on the real object's plane. It then computes the distance from the virtual object's centroid to the collision plane to perform collision detection, and finally estimates the motion of the virtual object in the next frame. The method can handle the three-dimensional virtual-real collision problem with a simple single camera.
To achieve the above object, the present invention adopts the following technical concept: according to the physics of photography, computer vision techniques and optical techniques, the data of the real object acquired in real time are analyzed, the virtual and real objects are aligned against each other, and the three-dimensional collision problem between the virtual object and the real object is handled.
According to the above technical concept, the present invention adopts the following technical solution: a virtual-real collision detection and response method based on augmented reality, characterized by the following operation steps: 1) preprocess the real object acquired by the camera; 2) perform pose estimation and motion estimation of the real object; 3) perform virtual-real collision detection according to the positional relationship between the virtual object's centroid and the collision plane; 4) perform the virtual-real collision response according to the detection result; 5) modify the virtual graphics, render and output. The principles of the above steps are as follows:
1) Obtain a key frame image of the real object with the camera; through skin-color-based image segmentation and feature point extraction guided by the palm print, obtain the coordinates of the feature points in the screen coordinate system, p_i (i = 1, ..., 4). There are four such feature points, distributed as a square.
2) Using the transformation from the world coordinate system to the screen coordinate system, compute the corresponding points P_i in the world coordinate system. The transformation between the world coordinate system and the camera coordinate system is:

(X_c, Y_c, Z_c)^T = R (X_w, Y_w, Z_w)^T + T

where M = [R | T] is the transformation matrix from the world coordinate system (X_w, Y_w, Z_w) to the camera coordinate system (X_c, Y_c, Z_c), R being the rotation matrix and T the translation vector. After the feature points are obtained, the unit normal vector n of the collision plane is derived from them. After the P_i are obtained, the centroid C of the collision plane is calculated. Then the motion vector u_t of the real object at time t is obtained.
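As a concrete reading of this transform, a minimal sketch follows (all code examples below are in Python with NumPy/OpenCV; R and T are assumed to be known from camera calibration and registration, which the patent does not detail):

```python
import numpy as np

def world_to_camera(p_world, R, T):
    """Apply the transform above: p_camera = R * p_world + T."""
    return R @ np.asarray(p_world) + T
```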
3) Use the above feature points and the pose and motion estimation results of the real object to perform virtual-real collision detection. The virtual object is approximated by a sphere, so the virtual-real collision is treated as the collision of a sphere with a plane. The distance d from the centroid G of the virtual object's sphere to the collision plane is the projection onto the normal vector n of the vector from the plane centroid C to G. The precondition for collision detection is then:

d ≤ τr

where r is the radius of the sphere and τ is a threshold ratio relating the distance from G to the collision plane Π to r. If the condition is not satisfied, no virtual-real collision occurs. When it is satisfied, the projection G' of the centroid G onto the collision plane is calculated, and it is judged whether G' lies inside the collision area enclosed by the feature points P_i. If so, the collision point is inside the collision area. If not, the distance d_e from G' to an edge of the collision area is calculated and tested against d_e ≤ τr; for example, for the edge P_1P_2, d_e is the distance from G' to the line through P_1 and P_2. If d_e ≤ τr is not satisfied, no virtual-real collision occurs. If it is satisfied, it is further checked whether the projection of the virtual object's centroid onto that edge lies between the two feature points; if so, the collision point is on the edge of the collision area. If not, the distance d_p from the virtual object's centroid to any one of the four feature points is calculated and tested against d_p ≤ τr. If satisfied, a collision occurs and the collision point is at that feature point; otherwise no virtual-real collision occurs.
4) Use the above virtual-real collision detection result to perform the virtual-real collision response. If no virtual-real collision occurs, the motion vector v_{t+1} of the virtual object in the next frame is identical to its motion vector v_t in this frame. If a virtual-real collision has occurred, it is checked whether the angle between the normal vector of the collision plane and the motion vector of the virtual object lies between 90° and 180°. If it lies in this interval, then v_{t+1} = v_t + v_n + P, where v_n is the projection of v_t on the collision-plane normal and P is the momentum; if not, v_{t+1} = v_t + u_t, the sum of the motion vectors of the virtual and real objects at this moment.
5) Using the above collision response result, compute the motion of the virtual object, register the virtual object according to augmented reality techniques, then render the virtual object and display the output, and begin a new cycle.
Compared with the prior art, the present invention has the following obvious outstanding features and remarkable advantages: it captures images with a single camera, performs preprocessing with mature key-frame extraction, image segmentation and feature point extraction techniques, and then performs collision detection and response through the relationship between the virtual object's centroid and the four feature points of the collision plane. Not only can it handle three-dimensional virtual-real collision, but the computational cost of collision detection is also greatly reduced. Moreover, three-dimensional registration requires only a single camera, which is simpler and saves more hardware cost than the multi-camera approaches used previously. In addition, the low complexity of the algorithm makes it possible to port virtual-real collision to mobile devices equipped with only a single camera.
Description of drawings
Fig. 1 is the flow chart of the virtual-real collision detection and response method based on augmented reality according to the present invention.
Fig. 2 is the flow chart of the data preprocessing of the acquired real object in the embodiment.
Fig. 3 is the flow chart of the pose estimation and motion estimation of the real object in the embodiment.
Fig. 4 is the flow chart of the virtual-real collision detection in the embodiment.
Fig. 5 is the flow chart of the virtual-real collision response in the embodiment.
Embodiment
Embodiments of the invention are described in further detail below in conjunction with the accompanying drawings.
In a concrete preferred embodiment of the present invention, as shown in Fig. 1, the above virtual-real collision detection and response method based on augmented reality comprises the following steps:
1) acquire the data of the real object in real time and preprocess it to obtain the feature points of the real object;
2) estimate the pose and motion of the real object;
3) the virtual-real sphere collision detection process;
4) the virtual-real sphere collision response process;
5) render the output.
The above step 1) requires acquiring in real time and processing the data captured by the monocular camera; as shown in Fig. 2, its concrete steps are as follows (a code sketch of this pipeline follows the list):
(1) acquire a key frame image of the real object with the monocular camera;
(2) input the captured image;
(3) perform skin-color detection on the image;
(4) perform connected-component detection on the result of step (3);
(5) compute the connected-component areas and remove small-area regions;
(6) extract the four feature points in a rectangular distribution, using the palm-print features;
(7) the preprocessing process ends.
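A minimal sketch of steps (3)-(6), assuming OpenCV; the YCrCb skin-color bounds and the minimum-area-rectangle stand-in for the palm-print-based extraction are illustrative assumptions, not values given in the patent:

```python
import cv2
import numpy as np

def extract_feature_points(frame):
    """Steps (3)-(6): skin-color segmentation, small-region removal, and
    extraction of four feature points in a rectangular distribution."""
    # (3) Skin detection by thresholding in YCrCb space (illustrative bounds).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # (4)-(5) Connected components; keep only the largest skin region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    region = np.uint8(labels == largest) * 255

    # (6) Stand-in for the palm-print-based extraction: the four corners
    # of the minimum-area rectangle around the retained region.
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rect = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    return cv2.boxPoints(rect)  # 4 x 2 array of screen coordinates
```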
The pose estimation and motion estimation of the real object described in the above step 2), as shown in Fig. 3, compute from the feature points the unit normal vector of the collision plane and then the motion vector of the real object; its concrete steps are as follows (a sketch follows the list):
(8) the pose estimation and motion estimation process of the real object begins;
(9) compute the transformation matrix from the feature points obtained in step 1);
(10) compute the coordinates of the feature points in the world coordinate system;
(11) compute the unit normal vector n of the collision plane formed by the feature points;
(12) from the result of step (10), compute the centroid of the collision plane;
(13) compute the motion vector u_t of the real object on the collision plane from the centroid;
(14) the pose estimation and motion estimation process ends.
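A minimal sketch of steps (11)-(13); taking the centroid as the mean of the four feature points and the motion vector as the frame-to-frame displacement of that centroid is a natural reading of the steps, not a formula the patent states explicitly:

```python
import numpy as np

def pose_and_motion(P, C_prev):
    """Steps (11)-(13): unit normal and centroid of the collision plane,
    and the real object's motion vector between frames.

    P      -- 4 x 3 array of feature points in world coordinates
    C_prev -- centroid of the collision plane in the previous frame
    """
    # (11) Unit normal from two edge vectors of the (near-planar) quad.
    n = np.cross(P[1] - P[0], P[3] - P[0])
    n = n / np.linalg.norm(n)

    # (12) Centroid of the collision plane (mean of the four points).
    C = P.mean(axis=0)

    # (13) Motion vector of the real object at time t, taken as the
    # frame-to-frame displacement of the centroid.
    u_t = C - C_prev
    return n, C, u_t
```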
The collision detection process described in the above step 3) uses the pose estimation and motion estimation results of the real object to decide, through a series of judgment conditions, whether a virtual-real collision occurs, as shown in Fig. 4; its concrete steps are as follows (a sketch of the whole cascade follows the list):
(15) the collision detection process begins;
(16) does the distance d from the virtual object's centroid to the collision plane satisfy d ≤ τr? If so, go to step (17); otherwise go to step (22);
(17) is the projection G' of the virtual object's centroid onto the collision plane inside the collision area enclosed by the four feature points? If inside, go to step (21); otherwise go to step (18);
(18) does the distance d_e from the projection G' of step (17) to the edge of the collision area satisfy d_e ≤ τr? If so, go to step (19); otherwise go to step (22);
(19) does the projection of the virtual object's centroid onto the collision edge line formed by any two feature points lie between those two feature points? If so, go to step (21); otherwise go to step (20);
(20) does the distance d_p from the virtual object's centroid to any one of the four feature points satisfy d_p ≤ τr? If so, go to step (21); otherwise go to step (22);
(21) a virtual-real collision has occurred;
(22) the collision detection process ends.
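A minimal sketch of the whole cascade of steps (15)-(22); the same-side inside test and the segment parameterisation are implementation choices assumed here, since the patent does not spell them out:

```python
import numpy as np

def detect_collision(G, r, tau, P, n, C):
    """Sphere-versus-quad cascade of steps (15)-(22).

    G   -- centroid of the virtual object's sphere
    r   -- radius of the sphere; tau -- threshold ratio
    P   -- 4 x 3 feature points, in order around the collision area
    n   -- unit normal of the collision plane; C -- its centroid
    Returns True when a virtual-real collision occurs."""
    # (16) Distance from the sphere centre to the collision plane.
    if abs(np.dot(G - C, n)) > tau * r:
        return False

    # (17) Project G onto the collision plane.
    Gp = G - np.dot(G - C, n) * n

    # Inside test: G' lies inside the convex quad if it is on the same
    # side of every edge, taking the edges in order.
    sides = [np.dot(np.cross(P[(i + 1) % 4] - P[i], Gp - P[i]), n)
             for i in range(4)]
    if all(s >= 0 for s in sides) or all(s <= 0 for s in sides):
        return True  # (21) collision point inside the collision area

    # (18)-(19) Edge test: distance from G' to each edge segment whose
    # foot of perpendicular falls between the two feature points.
    for i in range(4):
        a, b = P[i], P[(i + 1) % 4]
        t = np.dot(Gp - a, b - a) / np.dot(b - a, b - a)
        if 0.0 <= t <= 1.0 and np.linalg.norm(Gp - (a + t * (b - a))) <= tau * r:
            return True  # (21) collision point on the edge

    # (20) Vertex test: distance from the centroid to each feature point.
    return any(np.linalg.norm(G - p) <= tau * r for p in P)
```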
The collision response process described in the above step 4) computes, from the collision detection result, the motion vector of the virtual object in the next frame, as shown in Fig. 5; its concrete steps are as follows (a sketch follows the list):
(23) the collision response process begins;
(24) compute the motion vector v_t of the virtual object;
(25) has a virtual-real collision occurred? If so, go to step (26); otherwise go to step (28);
(26) is the angle between the normal vector of the collision plane and the motion vector of the virtual object in (90°, 180°]? If the angle is in this interval, go to step (27); otherwise go to step (29);
(27) the motion vector of the virtual object in the next frame is v_{t+1} = v_t + v_n + P, where v_n is the projection of v_t on the collision-plane normal and P is the momentum; go to step (30);
(28) the motion vector of the virtual object in the next frame is v_{t+1} = v_t; go to step (30);
(29) the motion vector of the virtual object in the next frame is v_{t+1} = v_t + u_t, the sum of the motion vectors of the virtual and real objects;
(30) the collision response process ends.
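A minimal sketch of steps (23)-(30). The wording of step (27) is ambiguous about the sign of the projection term; this sketch reads it as the usual reflection correction -2(v_t·n)n plus the transferred momentum, an interpretation rather than the patent's literal formula:

```python
import numpy as np

def respond(v_t, u_t, n, collided, momentum):
    """Next-frame motion vector of the virtual object.

    v_t      -- motion vector of the virtual object in this frame
    u_t      -- motion vector of the real object (step (13))
    n        -- unit normal of the collision plane
    collided -- result of the detection stage
    momentum -- momentum transferred in the collision (assumed given)
    """
    if not collided:
        return v_t.copy()                  # (28) motion unchanged

    # (26) An angle in (90 deg, 180 deg] between n and v_t is
    # equivalent to a strictly negative dot product with the unit normal.
    if np.dot(v_t, n) < 0:
        v_n = np.dot(v_t, n) * n           # projection of v_t on n
        return v_t - 2.0 * v_n + momentum  # (27) bounce plus momentum
    return v_t + u_t                       # (29) carried by the real object
```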
The rendering output process described in the above step 5) computes the motion of the virtual object from the motion vector obtained in step 4), modifies the virtual graphics, registers the virtual object according to augmented reality techniques, and renders the output, thereby achieving the effect of the virtual object colliding with the real object.

Claims (6)

1. A virtual-real collision detection and response method based on augmented reality, characterized in that the operation steps are: 1) preprocess the real object acquired by the camera; 2) perform pose estimation and motion estimation of the real object; 3) perform virtual-real collision detection according to the positional relationship between the virtual object's centroid and the collision plane; 4) perform the virtual-real collision response according to the detection result; 5) modify the virtual graphics, render and output.
2. The virtual-real collision detection and response method based on augmented reality according to claim 1, characterized in that the preprocessing of the real object acquired by the camera in said step 1) is: perform image segmentation and feature point extraction on the key frame, and obtain the set of feature point coordinates in the screen coordinate system; the concrete operation steps are as follows:
(1) acquire a key frame image of the real object with the monocular camera;
(2) input the captured image;
(3) perform skin-color detection on the image;
(4) perform connected-component detection on the result of step (3);
(5) compute the connected-component areas and remove small-area regions;
(6) extract the four feature points in a rectangular distribution, using the palm-print features;
(7) the preprocessing process ends.
3. The virtual-real collision detection and response method based on augmented reality according to claim 1, characterized in that the method of pose estimation and motion estimation of the real object in said step 2) is: compute the transformation matrix from the world coordinate system to the screen coordinate system to obtain the coordinates of the corresponding points, compute the normal vector of the collision plane to estimate the pose of the real object, and perform motion estimation on the real object at the current time; the concrete operation steps are as follows:
(8) the pose estimation and motion estimation process of the real object begins;
(9) compute the transformation matrix from the feature points obtained in step 1);
(10) compute the coordinates of the feature points in the world coordinate system;
(11) compute the unit normal vector n of the collision plane formed by the feature points;
(12) from the result of step (10), compute the centroid of the collision plane;
(13) compute the motion vector u_t of the real object on the collision plane from the centroid;
(14) the pose estimation and motion estimation process ends.
4. The virtual-real collision detection and response method based on augmented reality according to claim 1, characterized in that the method of virtual-real collision detection in said step 3) is: the virtual object is approximated by a sphere, and the virtual-real collision is approximated by the collision of a sphere with a plane; the distance from the centroid of the virtual object's sphere to the collision plane is the projection, onto the normal vector of the plane, of the vector from a feature point to the centroid, and the precondition of virtual-real collision detection is that this distance is not greater than the radius of the sphere; if the precondition is not satisfied, it can be decided that no virtual-real collision occurs; when it is satisfied, the projection of the centroid onto the collision plane is calculated, and it is judged whether it lies inside the collision area enclosed by the feature points; if so, the collision point is inside the collision area; if not, it is calculated whether the distance from the projected centroid to the edge of the collision area is not greater than the radius of the sphere; if this condition is not satisfied, no virtual-real collision occurs; if it is satisfied, it is calculated whether the projection of the virtual object's centroid lies between two feature points; if so, the collision point is on the edge of the collision area; if not, the distance from the virtual object's centroid to any one of the four feature points is calculated and compared with the radius of the sphere; if not greater, a collision occurs and the collision point is at that feature point; otherwise no virtual-real collision occurs; the concrete operation steps are as follows:
(15) the collision detection process begins;
(16) does the distance d from the virtual object's centroid to the collision plane satisfy d ≤ τr? If so, go to step (17); otherwise go to step (22);
(17) is the projection G' of the virtual object's centroid onto the collision plane inside the collision area enclosed by the four feature points? If inside, go to step (21); otherwise go to step (18);
(18) does the distance d_e from the projection G' of step (17) to the edge of the collision area satisfy d_e ≤ τr? If so, go to step (19); otherwise go to step (22);
(19) does the projection of the virtual object's centroid onto the collision edge line formed by any two feature points lie between those two feature points? If so, go to step (21); otherwise go to step (20);
(20) does the distance d_p from the virtual object's centroid to any one of the four feature points satisfy d_p ≤ τr? If so, go to step (21); otherwise go to step (22);
(21) a virtual-real collision has occurred;
(22) the collision detection process ends.
5. The virtual-real collision detection and response method based on augmented reality according to claim 1, characterized in that the method of virtual-real collision response in said step 4) is: if no virtual-real collision occurs, the motion vector of the virtual object in the next frame is identical to its motion vector in this frame; if a virtual-real collision has occurred, it is calculated whether the angle between the normal vector of the collision plane and the motion vector of the virtual object is between 90° and 180°; in this interval, the motion vector of the virtual object in the next frame equals the sum of the projection vector of the virtual object's motion vector onto the collision-plane normal, the virtual object's motion vector, and the momentum; outside this interval, the motion vector of the virtual object in the next frame is the sum of the motion vectors of the virtual and real objects at this moment; the concrete operation steps are as follows:
(23) the collision response process begins;
(24) compute the motion vector v_t of the virtual object;
(25) has a virtual-real collision occurred? If so, go to step (26); otherwise go to step (28);
(26) is the angle between the normal vector of the collision plane and the motion vector of the virtual object in (90°, 180°]? If the angle is in this interval, go to step (27); otherwise go to step (29);
(27) the motion vector of the virtual object in the next frame is v_{t+1} = v_t + v_n + P; go to step (30);
(28) the motion vector of the virtual object in the next frame is v_{t+1} = v_t; go to step (30);
(29) the motion vector of the virtual object in the next frame is v_{t+1} = v_t + u_t;
(30) the collision response process ends.
6. The virtual-real collision detection and response method based on augmented reality according to claim 1, characterized in that the modification of the virtual graphics and rendering of the output in said step 5) is: according to the virtual-real collision response result, compute the motion of the virtual object, register the virtual object according to augmented reality techniques, and finally render the output.
CN 201110114973 2011-05-05 2011-05-05 Method for detecting and responding false-true collision based on augmented reality Pending CN102194248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110114973 CN102194248A (en) 2011-05-05 2011-05-05 Method for detecting and responding false-true collision based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110114973 CN102194248A (en) 2011-05-05 2011-05-05 Method for detecting and responding false-true collision based on augmented reality

Publications (1)

Publication Number Publication Date
CN102194248A 2011-09-21

Family

ID=44602259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110114973 Pending CN102194248A (en) 2011-05-05 2011-05-05 Method for detecting and responding false-true collision based on augmented reality

Country Status (1)

Country Link
CN (1) CN102194248A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893935A (en) * 2010-07-14 2010-11-24 北京航空航天大学 Cooperative construction method for enhancing realistic table-tennis system based on real rackets

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
D. Lee et al., "Sphere-to-sphere collision estimation of virtual objects to arbitrarily-shaped real objects for augmented reality", Electronics Letters, vol. 46, no. 13, 24 June 2010. *
Daeho Lee et al., "Estimation of collision response of virtual objects to arbitrary-shaped real objects", IEICE Electronics Express, vol. 5, no. 17, pp. 678-682, 10 September 2008. *
李岩 et al., "一种手部实时跟踪与定位的虚实碰撞检测方法" (A virtual-real collision detection method based on real-time hand tracking and localization), 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), vol. 23, no. 4, April 2011. *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104704535A (en) * 2012-10-02 2015-06-10 索尼公司 Augmented reality system
US9779550B2 (en) 2012-10-02 2017-10-03 Sony Corporation Augmented reality system
CN103543754A (en) * 2013-10-17 2014-01-29 广东威创视讯科技股份有限公司 Camera control method and device in three-dimensional GIS (geographic information system) roaming
US10311544B2 (en) 2015-11-30 2019-06-04 Tencent Technology (Shenzhen) Company Limited Method for detecting collision between cylindrical collider and convex body in real-time virtual scenario, terminal, and storage medium
CN105512377A (en) * 2015-11-30 2016-04-20 腾讯科技(深圳)有限公司 Real time virtual scene cylinder collider and convex body collision detection method and system
US11301954B2 (en) 2015-11-30 2022-04-12 Tencent Technology (Shenzhen) Company Limited Method for detecting collision between cylindrical collider and convex body in real-time virtual scenario, terminal, and storage medium
CN106774870A (en) * 2016-12-09 2017-05-31 武汉秀宝软件有限公司 A kind of augmented reality exchange method and system
CN107610134B (en) * 2017-09-11 2020-03-31 Oppo广东移动通信有限公司 Reminding method, reminding device, electronic device and computer readable storage medium
CN107742300A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Image processing method, device, electronic installation and computer-readable recording medium
CN107610127A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method, device, electronic installation and computer-readable recording medium
US11138740B2 (en) 2017-09-11 2021-10-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing methods, image processing apparatuses, and computer-readable storage medium
CN107610134A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Based reminding method, device, electronic installation and computer-readable recording medium
CN108509043A (en) * 2018-03-29 2018-09-07 联想(北京)有限公司 A kind of interaction control method and system
CN108509043B (en) * 2018-03-29 2021-01-15 联想(北京)有限公司 Interaction control method and system
CN109920057A (en) * 2019-03-06 2019-06-21 珠海金山网络游戏科技有限公司 A kind of viewpoint change method and device calculates equipment and storage medium
CN109920057B (en) * 2019-03-06 2022-12-09 珠海金山数字网络科技有限公司 Viewpoint transformation method and device, computing equipment and storage medium
CN110716683A (en) * 2019-09-29 2020-01-21 北京金山安全软件有限公司 Generation method, device and equipment of collision object
CN110716683B (en) * 2019-09-29 2021-03-26 北京金山安全软件有限公司 Generation method, device and equipment of collision object
CN115293018A (en) * 2022-09-29 2022-11-04 武汉亘星智能技术有限公司 Collision detection method and device for flexible body, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN102194248A (en) Method for detecting and responding false-true collision based on augmented reality
CN105229666B (en) Motion analysis in 3D images
US9039528B2 (en) Visual target tracking
US9842405B2 (en) Visual target tracking
US8577084B2 (en) Visual target tracking
CN102254346A (en) Method for detecting augmented reality virtual-real collision based on cloud computing
US8565476B2 (en) Visual target tracking
CN106875431B (en) Image tracking method with movement prediction and augmented reality implementation method
US8682028B2 (en) Visual target tracking
CN103246884B (en) Real-time body's action identification method based on range image sequence and device
US8577085B2 (en) Visual target tracking
US20100195867A1 (en) Visual target tracking using model fitting and exemplar
CN105069751B (en) A kind of interpolation method of depth image missing data
US8565477B2 (en) Visual target tracking
TW201227538A (en) Method and apparatus for tracking target object
TW201234261A (en) Using a three-dimensional environment model in gameplay
CN114651284A (en) Lightweight multi-branch and multi-scale heavy person identification
Liu et al. Trampoline motion decomposition method based on deep learning image recognition
CN104978583A (en) Person action recognition method and person action recognition device
CN103440036B (en) The display of 3-D view and interactive operation method and device
CN105069829A (en) Human body animation generation method based on multi-objective video
CN110377033B (en) RGBD information-based small football robot identification and tracking grabbing method
Che et al. A novel framework of hand localization and hand pose estimation
CN108829248A (en) A kind of mobile target selecting method and system based on the correction of user's presentation model
Liao et al. Action recognition based on depth image sequence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110921