CN102254346A - Method for detecting augmented reality virtual-real collision based on cloud computing - Google Patents

Method for detecting augmented reality virtual-real collision based on cloud computing

Info

Publication number
CN102254346A
Authority
CN
China
Prior art keywords
collision
virtual-real
virtual object
cloud computing
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110181327XA
Other languages
Chinese (zh)
Inventor
凌晨
陈明
田丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201110181327XA priority Critical patent/CN102254346A/en
Publication of CN102254346A publication Critical patent/CN102254346A/en
Pending legal-status Critical Current

Abstract

The invention discloses a method for detecting augmented reality virtual-real collisions based on cloud computing. The method comprises the following steps: first, extracting feature points from a key frame; then estimating the pose and motion of the real object; building a bounding volume for the virtual object; performing cloud-based virtual-real collision detection and response; and finally registering, rendering and displaying the virtual object, thereby achieving virtual-real collision detection. Using commercial cloud computing improves the speed of detecting and responding to collisions between the virtual object and the real object, so the real-time requirement of augmented reality virtual-real collision can be met.

Description

Method for detecting augmented reality virtual-real collision based on cloud computing
Technical field
The present invention relates to augmented reality virtual-real collision technology, and specifically to a cloud-computing-based method for detecting and responding to collisions between an arbitrarily shaped virtual object and a real object using monocular vision.
Background technology
As users demand ever richer interactive experiences, augmented reality (Augmented Reality, AR) applications have developed rapidly. Vision-based AR systems have become the application mainstream because of their modest hardware requirements. Unlike virtual-to-virtual collisions in virtual reality (Virtual Reality, VR), in AR collision detection the real object has physical properties while the virtual object does not, so the virtual object must be artificially endowed with certain physical properties. Virtual-real collision detection and response make AR applications more natural and physically accurate, and thus make free virtual-real human-computer interaction possible. Moreover, porting AR applications to mobile devices has become a trend, which places even higher demands on virtual-real collision detection and response algorithms.
Regarding collision detection research, consider the following Chinese patents: "A large-scale virtual scene collision detection method based on balanced binary trees", application number CN200910086719.0; "Parallel collision detection method for real-time interactive operation in complex scenes", application number CN200710043743.7; and "A method for implementing 3D game collision detection on the server side", application number CN200710117826.6. These are, first of all, collision detection methods for large-scale complex virtual scenes, i.e. methods for detecting collisions between virtual objects, and are unsuitable for detecting collisions between a virtual object and a real object. Secondly, they rely on hardware for their implementation and are unsuitable for porting to mobile devices. Further Chinese patents include: "Real-time collision detection using shearing", application number CN200780048182.8; "A continuous collision detection method based on ellipsoid scanning", application number CN200910087900.3; "A flexible-fabric self-collision detection method based on quadtree bounding boxes", application number CN200910087902.2; and "A parallel collision detection method based on subdivision", application number CN200810202774.7. These algorithms mainly target collisions between virtual objects in virtual scenes and are unsuitable for detecting collisions between a virtual object and a real object in augmented reality.
The above patents and related studies either place high demands on hardware or address virtual reality only; none specifically studies collision in augmented reality, nor do they use an inexpensive single camera for virtual-real collision research. Collision detection based on 3D reconstruction is computationally complex and cannot run in real time.
Summary of the invention
In view of the problems and shortcomings of the prior art, the object of the present invention is to provide a cloud-computing-based method for detecting augmented reality virtual-real collisions. The method first extracts feature points from the key frame; then estimates the pose and motion of the real object; next builds the bounding volume of the virtual object; then performs cloud-based virtual-real collision detection and response; and finally registers, renders and displays the virtual object, thereby achieving virtual-real collision detection.
To achieve the above object, the present invention adopts the following technical concept: drawing on photographic physics, computer vision, optics and cloud computing technology, the speed of collision detection is increased through distributed parallel computation so as to meet the real-time requirement.
The present invention realizes the above cloud-computing-based augmented reality virtual-real collision detection method through the following technical solution, whose steps are as follows:
1. Feature point extraction from the key frame: at initialization, a front view of the real object is captured as the reference for computing the transformation matrix. Key frame images are then extracted from the real images captured by the camera and segmented, yielding the screen coordinates of the feature points Pi = (ui, vi), i = 1, ..., 4; the four feature points are distributed as a rectangle.
2. Real object pose and motion estimation: by computing the transformation matrix from the world coordinate system to the screen coordinate system, the corresponding world coordinates Pi of the feature points are obtained. The conversion between world and screen coordinates follows the standard pinhole model

    s [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T

where [R | T] is the transformation matrix from the world coordinate system (Xw, Yw, Zw) to the camera coordinate system (Xc, Yc, Zc); within it, R is the rotation matrix and T is the translation, and K is the camera intrinsic matrix. From the feature points, the unit normal vector n of the collision plane is obtained. Once the world coordinates Pi are known, the centroid C (xc, yc, zc) is computed, and from it the motion vector Vt of the real object at time t.
3. Building the bounding volume of the virtual object: the virtual object is treated as a sphere, i.e. bounding spheres are used as the bounding volumes, and a bounding sphere tree is built for the virtual object.
4. Cloud-based virtual-real collision detection and response: collision detection uses distributed computation, one of the key techniques of cloud computing:
1) According to the number of virtual-real collision detection nodes, the bounding sphere tree of the virtual object is divided into groups. The index of each bounding sphere is recorded as the key, and its centroid G and radius r as the value, yielding raw data <key, value>.
2) Each group of raw data <key, value> is sent to a virtual-real collision detection node for virtual-real collision detection. Since the virtual object is approximated by a sphere, a virtual-real collision is treated as a sphere-plane collision. The distance from the centroid G of the sphere to the collision plane is the projection d of the vector from the plane to G onto the normal vector n. The precondition for a collision is d <= tau * r, where r is the radius of the sphere and tau is the ratio to r of the distance from G to the collision plane. If this condition is not satisfied, no virtual-real collision occurs. If it is satisfied, the projection G' of the centroid G onto the collision plane is computed, and it is checked whether G' lies inside the collision region bounded by the four feature points Pi. If so, a collision has occurred; the projection G' of the centroid G onto the collision plane is appended to the value, yielding new data <key, value'>.
3) <key, value'> is sent to a virtual-real collision response node, which checks whether value' contains collision point information, i.e. whether a virtual-real collision has occurred. If no collision has occurred, the motion vector V(t+1) of the virtual object in the next frame is identical to the motion vector Vt of the current frame. If a collision has occurred, it is checked whether the angle between the normal vector of the collision plane and the motion vector of the virtual object lies between 90 and 180 degrees. If it does, V(t+1) is obtained by reflecting Vt about the collision plane normal, taking the momentum P into account; otherwise V(t+1) = Vt.
5. Virtual object registration, rendering and display: according to the virtual-real collision response result, the virtual object is registered in the world coordinate system, the motion position of the virtual object in the next frame is computed from the collision response result, and the virtual object is rendered and displayed.
Compared with the prior art, the present invention has the following prominent features and remarkable advantages: it uses the distributed computation of cloud computing to detect and respond to virtual-real collisions, is not limited by insufficient local computing power, distributes work flexibly, greatly improves computation speed, and meets the real-time requirement. Unlike grid computing, cloud computing is commercialized: at any time and place, as long as a user is connected to the network, the computing power of a supercomputer can be obtained, which makes mobile augmented reality technology possible.
Description of drawings
Fig. 1 is a flow chart of the cloud-computing-based augmented reality virtual-real collision detection method of the present invention;
Fig. 2 is a schematic diagram of the cloud-based virtual-real collision detection and response of the present invention;
Fig. 3 is a flow chart of the cloud-based virtual-real collision detection and response of the present invention.
Embodiment
Embodiments of the invention are described in further detail below with reference to the accompanying drawings.
In a concrete preferred embodiment of the present invention, as shown in Figure 1, the cloud-computing-based augmented reality virtual-real collision detection method comprises the following steps:
1) feature point extraction from the key frame;
2) real object pose and motion estimation;
3) building the bounding volume of the virtual object;
4) cloud-based virtual-real collision detection and response;
5) virtual object registration, rendering and display.
The concrete steps of the feature point extraction of the key frame in step 1) above are as follows:
(1) obtain a key frame image of the real object with a monocular camera;
(2) perform face detection on the image;
(3) perform connected-component detection;
(4) compute the connected-component areas and remove small regions;
(5) extract the four feature points distributed as a rectangle.
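Steps (2)-(5) above can be sketched roughly as follows; a plain intensity threshold stands in for the detection step, and taking the bounding-box corners of the largest surviving region as the four feature points is an illustrative assumption:

```python
# Sketch of steps (2)-(5): threshold the key frame, label connected
# components with a BFS flood fill, drop regions below a minimum area,
# and return the four rectangle corners of the largest remaining region.
from collections import deque
import numpy as np

def extract_feature_points(gray: np.ndarray, thresh: int = 128, min_area: int = 4):
    mask = gray >= thresh                      # stand-in for the detection step
    labels = np.zeros(mask.shape, dtype=int)
    regions, cur = [], 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1                       # new connected component
                q, pix = deque([(i, j)]), []
                labels[i, j] = cur
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                regions.append(pix)
    big = max((r for r in regions if len(r) >= min_area), key=len)  # area filter
    ys = [p[0] for p in big]; xs = [p[1] for p in big]
    # four feature points distributed as a rectangle (bounding-box corners)
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (max(xs), max(ys)), (min(xs), max(ys))]

img = np.zeros((10, 10), dtype=np.uint8)
img[2:7, 3:9] = 200        # one large bright region
img[0, 0] = 255            # a 1-pixel speck, removed by the area filter
pts = extract_feature_points(img)
```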
The concrete steps of the real object pose and motion estimation in step 2) above are as follows:
(6) compute the transformation matrix from the feature points obtained in step 1);
(7) compute the coordinates of the feature points in the world coordinate system;
(8) compute the unit normal vector n of the collision plane formed by the feature points;
(9) compute the centroid of the collision plane;
(10) compute the motion vector Vt of the real object on the collision plane from the centroid.
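A minimal NumPy sketch of steps (6)-(10); the intrinsic matrix K, the pose [R | T] and the previous-frame centroid C_prev are assumed example values, since the patent's formula images are not reproduced here:

```python
# Standard pinhole projection s*[u, v, 1]^T = K [R | T] Pw, followed by the
# collision-plane normal, centroid, and per-frame motion vector of step 2).
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
R, T = np.eye(3), np.array([0.0, 0.0, 5.0])                  # world -> camera pose

def project(Pw: np.ndarray) -> np.ndarray:
    """World point -> screen point via the pinhole model."""
    pc = R @ Pw + T          # world -> camera coordinates
    uv = K @ pc              # camera -> homogeneous screen coordinates
    return uv[:2] / uv[2]

# Four coplanar feature points in world coordinates (z = 0 plane)
P = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
n = np.cross(P[1] - P[0], P[3] - P[0])
n = n / np.linalg.norm(n)            # unit normal of the collision plane
C = P.mean(axis=0)                   # centroid of the collision plane
C_prev = np.array([0.4, 0.5, 0.0])   # centroid in the previous frame (assumed)
V_t = C - C_prev                     # motion vector of the real object at time t
```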
The concrete step of building the bounding volume of the virtual object in step 3) above is as follows:
(11) quickly build the bounding sphere tree of the virtual object using a divide-and-conquer strategy.
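Step (11) might be sketched as follows; splitting on the widest axis and using the mean-centred, max-distance sphere are simplifying assumptions, not the patent's exact construction:

```python
# Divide-and-conquer bounding-sphere tree over the virtual object's vertices:
# each node stores an enclosing sphere; interior nodes split the points on
# the widest coordinate axis and recurse on the two halves.
import numpy as np

class Node:
    def __init__(self, center, radius, left=None, right=None):
        self.center, self.radius = center, radius
        self.left, self.right = left, right

def build_sphere_tree(pts: np.ndarray, leaf_size: int = 2) -> Node:
    center = pts.mean(axis=0)
    radius = float(np.max(np.linalg.norm(pts - center, axis=1)))
    if len(pts) <= leaf_size:
        return Node(center, radius)               # leaf sphere
    axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))  # widest axis
    order = np.argsort(pts[:, axis])
    half = len(pts) // 2
    return Node(center, radius,
                build_sphere_tree(pts[order[:half]], leaf_size),
                build_sphere_tree(pts[order[half:]], leaf_size))

pts = np.array([[0.0, 0, 0], [2, 0, 0], [0, 2, 0], [2, 2, 0]])
root = build_sphere_tree(pts)
```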
For the cloud-based virtual-real collision detection and response of step 4) above, a schematic diagram with N virtual-real collision detection nodes and N virtual-real collision response nodes is shown in Figure 2. As shown in Figure 3, the concrete steps are as follows:
(12) according to the number of virtual-real collision detection nodes, group the bounding sphere tree of the virtual object obtained in step (11); for example, the number of collision detection nodes in Fig. 2 is 3, so all bounding spheres are divided into 3 groups of raw data;
(13) record the index of each bounding sphere as the key and its centroid G and radius r as the value, obtaining raw data <key, value>, which are sent to the virtual-real collision detection nodes;
(14) in the collision detection node, does the distance d from the virtual object centroid to the collision plane satisfy the condition d <= tau * r? If so, go to step (15); otherwise no virtual-real collision occurs, go to step (16);
(15) compute the projection G' of the virtual object centroid onto the collision plane: does G' lie inside the collision region bounded by the four feature points? If so, a virtual-real collision has occurred;
(16) if a collision has occurred, append the projection G' of the centroid G onto the collision plane to the value, obtaining new data <key, value'>, and send <key, value'> to the virtual-real collision response node;
(17) compute the motion vector Vt of the virtual object;
(18) has a virtual-real collision occurred? If yes, go to step (19); otherwise go to step (21);
(19) is the angle between the normal vector of the collision plane and the motion vector of the virtual object in (90 deg, 180 deg]? If so, go to step (20); otherwise go to step (22);
(20) the motion vector V(t+1) of the virtual object in the next frame is obtained by reflecting Vt about the collision plane normal, taking the momentum into account; go to step (23);
(21) the motion vector of the virtual object in the next frame is V(t+1) = Vt; go to step (23);
(22) the motion vector of the virtual object in the next frame keeps its current value, V(t+1) = Vt;
(23) output the motion vector of the virtual object.
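The geometry of steps (14)-(22) can be sketched as follows; because the patent's equations survive only as image placeholders, the containment test (axis-aligned bounds) and the reflection with momentum factor rho are stand-in assumptions:

```python
# Sphere-plane collision test (steps (14)-(15)) and motion-vector response
# (steps (18)-(22)) for one bounding sphere with centroid G and radius r.
import numpy as np

def detect(G, r, plane_pt, n, corners, tau=1.0):
    """Return (collided, G') where G' is the projection onto the plane.
    n must be the unit normal of the collision plane."""
    d = abs(np.dot(G - plane_pt, n))         # distance from centroid to plane
    if d > tau * r:                          # precondition d <= tau*r fails
        return False, None
    Gp = G - np.dot(G - plane_pt, n) * n     # projection G' onto the plane
    lo, hi = corners.min(axis=0), corners.max(axis=0)
    inside = bool(np.all(Gp >= lo - 1e-9) and np.all(Gp <= hi + 1e-9))
    return inside, (Gp if inside else None)

def respond(v, n, collided, rho=1.0):
    """Next-frame motion vector V(t+1); rho plays the role of the momentum."""
    cos = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    if collided and cos < 0:                 # angle in (90 deg, 180 deg]
        return rho * (v - 2 * np.dot(v, n) * n)  # reflect about the normal
    return v                                 # otherwise keep the current vector

corners = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
n = np.array([0.0, 0, 1])                    # unit normal of the z = 0 plane
hit, Gp = detect(np.array([0.5, 0.5, 0.3]), 0.5, corners[0], n, corners)
v_next = respond(np.array([0.0, 0, -1]), n, hit)
```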
The concrete steps of the virtual object registration, rendering and display in step 5) above are as follows:
(24) re-render the virtual object according to the motion vector of the virtual object obtained in step 4);
(25) register the virtual object, render the output, and return to step (1).

Claims (6)

1. A cloud-computing-based augmented reality virtual-real collision detection method, characterized in that the operation steps are as follows: 1) feature point extraction from the key frame; 2) real object pose and motion estimation; 3) building the bounding volume of the virtual object; 4) cloud-based virtual-real collision detection and response; 5) virtual object registration, rendering and display.
2. The cloud-computing-based augmented reality virtual-real collision detection method according to claim 1, characterized in that the concrete steps of the feature point extraction of the key frame in step 1) are as follows:
(1) obtain a key frame image of the real object with a monocular camera;
(2) perform face detection on the image;
(3) perform connected-component detection;
(4) compute the connected-component areas and remove small regions;
(5) extract the four feature points distributed as a rectangle.
3. The cloud-computing-based augmented reality virtual-real collision detection method according to claim 1, characterized in that the concrete operation steps of the real object pose and motion estimation in step 2) are as follows:
(1) compute the transformation matrix from the feature points obtained in step 1);
(2) compute the coordinates of the feature points in the world coordinate system;
(3) compute the unit normal vector n of the collision plane formed by the feature points;
(4) compute the centroid of the collision plane;
(5) compute the motion vector Vt of the real object on the collision plane from the centroid.
4. The cloud-computing-based augmented reality virtual-real collision detection method according to claim 1, characterized in that the concrete operation step of building the bounding volume of the virtual object in step 3) is as follows:
(1) quickly build the bounding sphere tree of the virtual object using a divide-and-conquer strategy.
5. The cloud-computing-based augmented reality virtual-real collision detection method according to claim 4, characterized in that the concrete operation steps of the cloud-based virtual-real collision detection and response in step 4) are as follows:
(1) according to the number of virtual-real collision detection nodes, group the obtained bounding sphere tree of the virtual object; with 3 collision detection nodes, all bounding spheres are divided into 3 groups of raw data;
(2) record the index of each bounding sphere as the key and its centroid G and radius r as the value, obtaining raw data <key, value>, which are sent to the virtual-real collision detection nodes;
(3) in the collision detection node, does the distance d from the virtual object centroid to the collision plane satisfy the condition d <= tau * r? If so, go to step (4); otherwise no virtual-real collision occurs, go to step (5);
(4) compute the projection G' of the virtual object centroid onto the collision plane: does G' lie inside the collision region bounded by the four feature points? If so, a virtual-real collision has occurred;
(5) if a collision has occurred, append the projection G' of the centroid G onto the collision plane to the value, obtaining new data <key, value'>, and send <key, value'> to the virtual-real collision response node;
(6) compute the motion vector Vt of the virtual object;
(7) has a virtual-real collision occurred? If yes, go to step (8); otherwise go to step (10);
(8) is the angle between the normal vector of the collision plane and the motion vector of the virtual object in (90 deg, 180 deg]? If so, go to step (9); otherwise go to step (11);
(9) the motion vector V(t+1) of the virtual object in the next frame is obtained by reflecting Vt about the collision plane normal, taking the momentum into account; go to step (12);
(10) the motion vector of the virtual object in the next frame is V(t+1) = Vt; go to step (12);
(11) the motion vector of the virtual object in the next frame keeps its current value, V(t+1) = Vt;
(12) output the motion vector of the virtual object.
6. The cloud-computing-based augmented reality virtual-real collision detection method according to claim 5, characterized in that the concrete operation steps of the virtual object registration, rendering and display in step 5) are as follows:
(1) re-render the virtual object according to the motion vector of the virtual object obtained in step 4);
(2) register the virtual object, render the output, and go back to the initial step (1) to restart.
CN201110181327XA 2011-06-30 2011-06-30 Method for detecting augmented reality virtual-real collision based on cloud computing Pending CN102254346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110181327XA CN102254346A (en) 2011-06-30 2011-06-30 Method for detecting augmented reality virtual-real collision based on cloud computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110181327XA CN102254346A (en) 2011-06-30 2011-06-30 Method for detecting augmented reality virtual-real collision based on cloud computing

Publications (1)

Publication Number Publication Date
CN102254346A true CN102254346A (en) 2011-11-23

Family

ID=44981587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110181327XA Pending CN102254346A (en) 2011-06-30 2011-06-30 Method for detecting augmented reality virtual-real collision based on cloud computing

Country Status (1)

Country Link
CN (1) CN102254346A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780180A (en) * 2015-05-12 2015-07-15 成都绿野起点科技有限公司 Virtual reality platform based on mobile terminals
CN104794754A (en) * 2015-05-12 2015-07-22 成都绿野起点科技有限公司 Distribution type virtual reality system
CN104869160A (en) * 2015-05-12 2015-08-26 成都绿野起点科技有限公司 Distributed virtual reality system based on cloud platform
CN105250130A (en) * 2015-09-01 2016-01-20 杭州喵隐科技有限公司 Virtual reality implementation method based on electric massage appliance
US9626737B2 (en) 2013-11-15 2017-04-18 Canon Information And Imaging Solutions, Inc. Devices, systems, and methods for examining the interactions of objects in an enhanced scene
CN109003398A (en) * 2012-06-14 2018-12-14 百利游戏技术有限公司 System and method for augmented reality game
US10545584B2 (en) 2016-05-17 2020-01-28 Google Llc Virtual/augmented reality input device
US10592048B2 (en) 2016-05-17 2020-03-17 Google Llc Auto-aligner for virtual reality display
US10722800B2 (en) 2016-05-16 2020-07-28 Google Llc Co-presence handling in virtual reality

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893935A (en) * 2010-07-14 2010-11-24 北京航空航天大学 Cooperative construction method for enhancing realistic table-tennis system based on real rackets


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DAEHO LEE, ET AL: "Estimation of collision response of virtual objects to arbitrary-shaped real objects", IEICE ELECTRONICS EXPRESS, vol. 5, no. 17, 10 September 2008 (2008-09-10), pages 678-682 *
CHEN MING ET AL: "Video object tracking algorithm in augmented reality", COMPUTER ENGINEERING (计算机工程), vol. 36, no. 12, 30 June 2010 (2010-06-30), pages 229-231 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003398A (en) * 2012-06-14 2018-12-14 百利游戏技术有限公司 System and method for augmented reality game
US9626737B2 (en) 2013-11-15 2017-04-18 Canon Information And Imaging Solutions, Inc. Devices, systems, and methods for examining the interactions of objects in an enhanced scene
CN104780180B (en) * 2015-05-12 2019-02-12 国电物资集团有限公司电子商务中心 A kind of Virtual Reality Platform based on mobile terminal
CN104869160A (en) * 2015-05-12 2015-08-26 成都绿野起点科技有限公司 Distributed virtual reality system based on cloud platform
CN104794754B (en) * 2015-05-12 2018-04-20 成都绿野起点科技有限公司 A kind of Distributed Virtual Reality System
CN104869160B (en) * 2015-05-12 2018-07-31 成都绿野起点科技有限公司 A kind of Distributed Virtual Reality System based on cloud platform
CN104794754A (en) * 2015-05-12 2015-07-22 成都绿野起点科技有限公司 Distribution type virtual reality system
CN104780180A (en) * 2015-05-12 2015-07-15 成都绿野起点科技有限公司 Virtual reality platform based on mobile terminals
CN105250130A (en) * 2015-09-01 2016-01-20 杭州喵隐科技有限公司 Virtual reality implementation method based on electric massage appliance
CN105250130B (en) * 2015-09-01 2018-02-02 杭州喵隐科技有限公司 A kind of virtual reality implementation method based on electric massage apparatus
US10722800B2 (en) 2016-05-16 2020-07-28 Google Llc Co-presence handling in virtual reality
US10545584B2 (en) 2016-05-17 2020-01-28 Google Llc Virtual/augmented reality input device
US10592048B2 (en) 2016-05-17 2020-03-17 Google Llc Auto-aligner for virtual reality display

Similar Documents

Publication Publication Date Title
CN102254346A (en) Method for detecting augmented reality virtual-real collision based on cloud computing
CN102194248A (en) Method for detecting and responding false-true collision based on augmented reality
CN103839277B (en) A kind of mobile augmented reality register method of outdoor largescale natural scene
CN105389539B (en) A kind of three-dimension gesture Attitude estimation method and system based on depth data
CN102999942B (en) Three-dimensional face reconstruction method
CN107016704A (en) A kind of virtual reality implementation method based on augmented reality
CN106875431B (en) Image tracking method with movement prediction and augmented reality implementation method
CN109242961A (en) A kind of face modeling method, apparatus, electronic equipment and computer-readable medium
JP2023521952A (en) 3D Human Body Posture Estimation Method and Apparatus, Computer Device, and Computer Program
CN105608421B (en) A kind of recognition methods of human action and device
CN104392045B (en) A kind of real time enhancing virtual reality system and method based on intelligent mobile terminal
CN104021590A (en) Virtual try-on system and virtual try-on method
Isikdogan et al. A real time virtual dressing room application using Kinect
CN106101535B (en) A kind of video stabilizing method based on part and mass motion disparity compensation
CN107688391A (en) A kind of gesture identification method and device based on monocular vision
TW201835723A (en) Graphic processing method and device, virtual reality system, computer storage medium
CN102254345A (en) Method for registering natural characteristic based on cloud computation
CN109887003A (en) A kind of method and apparatus initialized for carrying out three-dimensional tracking
CN105107200B (en) Face Changing system and method based on real-time deep body feeling interaction and augmented reality
CN205581784U (en) Can mix real platform alternately based on reality scene
WO2012078006A2 (en) Image processor, lighting processor and method therefor
CN108564653A (en) Human skeleton tracing system and method based on more Kinect
CN203773476U (en) Virtual reality system based on 3D interaction
CN109920000A (en) A kind of augmented reality method without dead angle based on polyphaser collaboration
CN104898954B (en) A kind of interactive browsing method based on augmented reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20111123