CN104474710B - Large-scale-scene user-group tracking system and method based on a Kinect network

Large-scale-scene user-group tracking system and method based on a Kinect network

Info

Publication number
CN104474710B
CN104474710B CN201410747738.4A CN201410747738A
Authority
CN
China
Prior art keywords
kinect
user
tracking
information
image
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410747738.4A
Other languages
Chinese (zh)
Other versions
CN104474710A (en
Inventor
杨承磊
盖伟
崔婷婷
穆冠琦
林于杰
杨义军
孟祥旭
冯硕
尹晓雅
关东东
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201410747738.4A priority Critical patent/CN104474710B/en
Publication of CN104474710A publication Critical patent/CN104474710A/en
Application granted granted Critical
Publication of CN104474710B publication Critical patent/CN104474710B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a user-group tracking system and method for large-scale scenes based on a network of Kinect sensors. For group-interaction games in a large-scale scene, the scene is divided into several sub-regions according to the effective visual range of each Kinect. Depth images captured by each Kinect are used to track the users of the different regions in real time, and the multi-user data captured by the different Kinects are associated to complete real-time tracking and identity confirmation of every user. The system is simple, convenient, easy to operate, and low-cost: the layout of the Kinect network requires no precise measurement, each Kinect is responsible only for capturing the users of its own effective region, and the viewing regions of the Kinects need not overlap, which minimizes the number of capture devices required.

Description

Large-scale-scene user-group tracking system and method based on a Kinect network
Technical field
The present invention relates to a user-group tracking system and method for large-scale scenes based on a Kinect network.
Background technology
With rising living standards, people demand ever more from their entertainment. Virtual-reality technology, characterized by realistic three-dimensional output and a sense of immersion, is increasingly applied in this field, and group-based virtual-reality games are growing in popularity. "Kinect Sports: Season Two", released by Microsoft Game Studios in October 2011, contains six brand-new sports supporting single-player and multi-player modes: tennis, golf, American football, baseball, skiing and darts. The currently popular 7D dynamic cinema has won over young audiences seeking novelty and excitement: advanced 7D interaction technology lets curious audiences feel the audiovisual impact that technology brings, with many people, even dozens, watching and playing in front of the screen at the same time, and this multi-player competitive and cooperative entertainment is highly engaging.
However, current group-interaction virtual games support only a small number of players and are ineffective for group interaction in large-scale scenes. The monitoring range of single-camera tracking is limited, and occlusion and similar interference easily cause tracking to fail. Multi-camera tracking not only enlarges the monitored region, making it suitable for large-scale scenes, but also provides multiple viewpoints that help resolve occlusion. The present invention therefore uses multi-camera tracking to design a user-group tracking system for large-scale scenes: Kinects serve as the user-capture devices, a Kinect network is arranged in the scene, and through cooperation between the Kinects the scene is partitioned into several sub-regions. The users of each region are tracked in real time, the user data captured by the different Kinects are then associated, and real-time tracking and identity confirmation of the user group in the large-scale scene is completed, providing a data source for group-interaction virtual games.
Summary of the invention
To solve the above problems, the present invention proposes a large-scale-scene user-group tracking system and method based on a Kinect network. For group-interaction games in a large-scale scene, the system divides the scene into several sub-regions according to the effective visual range of each Kinect, uses the depth images captured by each Kinect to track the users of the different regions in real time, and associates the multi-user data captured by the different Kinects to complete real-time tracking and identity confirmation of the users.
To achieve these goals, the present invention adopts following technical scheme:
A user-group tracking system for large-scale scenes based on a Kinect network comprises a Kinect network arrangement module, a Kinect correction module, a Kinect communication module and a user-group tracking module, wherein:
The Kinect network arrangement module covers the large-scale scene by partitioning it into several regions: the horizontal position of each Kinect, all mounted at the same height, is adjusted so that the rectangular capture regions of the Kinects on the ground splice into one seamless large region. Each region is handled by one Kinect, which obtains the user depth information of that region and supplies input data to the user-group tracking module.
The Kinect correction module maps each Kinect's own coordinates into world coordinates, performing a coordinate conversion on the user information captured by each Kinect to complete the calibration of each Kinect.
The Kinect communication module merges the user information computed by each Kinect (i.e. each client) and sends it to the server, where data association and user tracking are performed.
The user-group tracking module comprises two parts, a client-side user-tracking submodule and a server-side user-tracking submodule, which track the users of each region in real time and associate the multi-user data captured by the different Kinects to complete real-time tracking and identity confirmation of the users.
The tracking method based on the above system comprises the following steps:
(1) Open the Kinect network arrangement module: divide the large-scale scene into several sub-regions according to the effective visual range of each Kinect's depth image, lay out the Kinects in space, and obtain the depth-image information of each region;
(2) Open the Kinect correction module: map each Kinect's own coordinates into world coordinates, converting the user positions computed by each Kinect into position coordinates in the large-scale scene;
(3) Open the Kinect communication module: send the user positions obtained by each Kinect to the server by socket communication;
(4) Open the user-group tracking module: the client-side user-tracking submodule first obtains the positions of the users in each region, the data are then sent to the server through the Kinect communication module, and the server performs data association, completing real-time tracking of the users of every region.
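The description says only that each client's user positions reach the server "by socket communication"; no wire format is given. The sketch below assumes one possible framing, length-prefixed JSON over TCP; the field names `kinect` and `users` are illustrative, not part of the patent.

```python
import json
import struct

def encode_frame(kinect_id, users):
    """Serialize one frame of tracked users as length-prefixed JSON.

    `users` is a list of [x, y] world coordinates. The 4-byte big-endian
    length header and JSON payload are assumptions; the patent only
    states that socket communication is used.
    """
    payload = json.dumps({"kinect": kinect_id, "users": users}).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_frame(data):
    """Inverse of encode_frame: read the length header, parse the JSON."""
    (length,) = struct.unpack(">I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

# Round-trip one frame from a hypothetical Kinect no. 2 tracking two users.
frame = decode_frame(encode_frame(2, [[1.5, 0.8], [3.2, 2.1]]))
print(frame["kinect"], len(frame["users"]))  # 2 2
```

In a real deployment each client would send the encoded frame over its socket, and the server would read the 4-byte header first to know how many payload bytes to expect.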
The Kinect calibration method of step (2) comprises the following steps:
(2-1) Mark the effective viewing region of each Kinect: using the depth image and colour image obtained by the Kinect as reference, move target objects of different heights through the area the Kinect can cover, and calibrate the Kinect's maximum visual region on the ground;
(2-2) Choose several groups of sample points at different positions within each Kinect's viewing region, distributed evenly over the region; measure the physical position of each sample point in the prescribed world coordinate system, and at the same time compute the coordinates of the corresponding sample points in the Kinect depth-image coordinate system;
(2-3) Use the least-squares method to compute each Kinect's transformation matrix to the world coordinate system.
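Step (2-3) can be sketched as an ordinary linear least-squares fit. The patent does not state the form of the transformation; the sketch below assumes a 2D affine model from depth-image ground coordinates to world ground coordinates, which three or more non-collinear sample points determine.

```python
import numpy as np

def fit_affine_2d(img_pts, world_pts):
    """Fit world ≈ [x, y, 1] @ M by least squares, M being 3x2.

    img_pts / world_pts are (N, 2) arrays of corresponding sample
    points (N >= 3, not collinear). The affine model is an assumption.
    """
    img = np.asarray(img_pts, dtype=float)
    world = np.asarray(world_pts, dtype=float)
    # Homogeneous design matrix [x, y, 1] so the translation is fitted too.
    A = np.hstack([img, np.ones((len(img), 1))])
    M, *_ = np.linalg.lstsq(A, world, rcond=None)
    return M

def apply_affine_2d(M, pts):
    """Map (N, 2) image points through the fitted transform."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Demo: recover a known scale-and-shift map from five sample points.
img = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
M_true = np.array([[2.0, 0.0], [0.0, 2.0], [1.0, -1.0]])
world = apply_affine_2d(M_true, img)
M = fit_affine_2d(img, world)
print(np.allclose(M, M_true))  # True
```

Once M is fitted for a Kinect, every user position that Kinect reports can be pushed through `apply_affine_2d` to obtain coordinates in the common world frame.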
The client-side user-tracking method of step (4) comprises the following steps:
(4-1) Obtain the current depth frame of each Kinect;
(4-2) Extract each Kinect's foreground image according to a preset depth threshold and binarize the image: the parts of the scene within the prescribed depth threshold are set to the foreground colour and the remainder to the background colour;
(4-3) Apply morphological processing, dilating the binarized image;
(4-4) Use a region-connectivity algorithm to segment the multiple targets and compute the position of each target.
Step (4-4) comprises the following steps:
(4-4-1) Traverse the image pixels, find the foreground points, and link them in a linked list;
(4-4-2) Traverse the foreground points by breadth-first search (BFS), determine the attributes of each connected block, and divide the connected blocks into user objects;
(4-4-3) Judge the validity of each object according to the point-count attribute saved in (4-4-2): if the count is below a preset threshold, the object is judged invalid, it is destroyed and the object count is decremented; if the count is at or above the threshold, the object is judged valid;
(4-4-4) Save the obtained results (object count, object centre coordinates) for transmission to the server for central processing.
In step (4-4-2), the attributes of a connected block comprise its centre coordinates, its feature value and the number of points in the block.
The server-side user-tracking method of step (4) comprises the following steps:
(4-a) Initialization phase: authenticate the initial targets received from each Kinect client, i.e. give each user a distinct ID, and save the users' identities and positions as R0 = {x0_i, y0_i, Id0_i | i = 1, ..., M}, where (x0_i, y0_i) is the initial position of user i, Id0_i is the user's initial ID, and M is the number of users received by the server;
(4-b) Real-time tracking phase: receive user positions from the Kinect clients and confirm identities. Save each received batch as R = {x_i, y_i, Id_i | i = 1, ..., N}, where N is the number of users currently received, (x_i, y_i) is the position of user i, and Id_i is the user's ID. Let pre_R = {pre_x_i, pre_y_i, pre_Id_i | i = 1, ..., pre_N}, initialized to R0. Match the users in R against pre_R by nearest distance to obtain the identities of the current frame's users, then set pre_R = R, recording the previous moment's user information to serve as reference for real-time matching.
The beneficial effects of the present invention are:
(1) It is convenient, easy to operate and low-cost: the layout of the Kinect network needs no precise measurement, each Kinect is responsible only for capturing the users of its own effective region, and the viewing regions of the Kinects need not overlap, which minimizes the number of capture devices required;
(2) Users' movement in the space is only lightly restricted: a user need not be entirely inside some Kinect's effective region, as long as part of the user's body is in a Kinect region;
(3) It supports dynamic real-time tracking in large-scale scenes: group interaction is not limited by the number of participants, the number of users in each region may change dynamically, and real-time tracking and identity confirmation are maintained as users move between different Kinect regions.
Brief description of the drawings
Fig. 1 is a schematic diagram of the Kinect network arrangement;
Fig. 2 is a flow diagram of the client-side user-tracking method;
Fig. 3 is a schematic diagram of the client-side user-tracking data structure;
Fig. 4 is a schematic diagram of the server-side user-tracking method;
Fig. 5 is a schematic diagram of an example applying the invention to a group shooting game.
Detailed description of the invention
The invention is further described below in conjunction with the accompanying drawings and embodiments.
As shown in Fig. 1, according to the Kinect's effective capture range in the vertical direction, each Kinect is mounted at a suitable height (3.7 m is recommended, which accommodates users from roughly 1 m to 2 m tall). According to the relative positions of the rectangular capture regions calibrated on the ground, the horizontal positions of the Kinects at this common height are adjusted so that the rectangular capture regions of the Kinects align longitudinally and laterally, splicing into one large seamless rectangular scene region.
As shown in Fig. 2, the client-side user-tracking flow divides into two parts, a preparation stage and a later tracking stage. The preparation stage binarizes and dilates the depth data read from the Kinect sensor, preparing for the later tracking; the later stage builds on this, judging connected blocks to determine the object count, centre positions and other information. The steps are:
(1) Obtain the current depth frame.
(2) Extract the foreground image according to a preset depth threshold and binarize it: compare each pixel's depth against the preset depth range, set pixels meeting the threshold to foreground points and the remaining pixels to background points.
(3) Apply morphological processing, dilating the binarized image to remove small holes and obtain a smoother, better-connected image.
(4) Use a region-connectivity algorithm to segment the multiple targets and compute each target's position: traverse the foreground points by BFS and compute the current frame's object count, object centre coordinates, object feature values and so on.
(5) Save the objects' attribute information for transmission to the server for central processing;
(6) Judge whether the program should terminate; if not, jump back to step (1) to obtain the next frame; otherwise end the program.
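Steps (2) and (3) can be sketched as below, assuming the depth frame arrives as a NumPy array of millimetre readings; the [near, far) depth window and the single-pass 3x3 dilation are illustrative choices, not values from the patent.

```python
import numpy as np

def binarize_depth(depth, near, far):
    """Foreground = pixels whose depth lies in [near, far) millimetres."""
    return ((depth >= near) & (depth < far)).astype(np.uint8)

def dilate3x3(mask):
    """One pass of 3x3 binary dilation (the 'expansion' of step (3)),
    closing small sensor-noise holes by OR-ing each pixel with its
    eight neighbours."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

# A toy 3x3 depth frame: three pixels fall inside the foreground window.
depth = np.array([[0, 900, 900],
                  [0, 950, 0],
                  [0, 0, 0]])
mask = binarize_depth(depth, near=800, far=1000)
print(int(mask.sum()), int(dilate3x3(mask).sum()))  # 3 9
```

A production system would more likely call an image library's dilation routine with a chosen structuring element; the loop above just makes the operation explicit.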
The BFS traversal of step (4) is shown in Fig. 3. After the binary image is obtained, the image is first traversed and the foreground points are linked into a linked list; the list is then traversed and the count and positions of the connected blocks (objects) in the image are output; next each object's point count is checked against the preset threshold, and the object's information is saved when the count reaches the threshold, otherwise the object is invalid, is destroyed, and the object count is decremented; finally the frame's object information is saved for transmission to the server for central processing. The steps are:
(1) Traverse the binarized array and save the foreground points in a linked list;
(2) Take the element the head pointer points to and advance the head pointer to the next element;
(3) Judge whether the current element is empty; if so, jump to step (7);
(4) Judge whether this element has already been visited; if so, jump to step (2);
(5) Mark this element as visited and increment the point count of the connected block it belongs to;
(6) Judge whether the points in its eight-neighbourhood have been visited; if so, jump to step (2), otherwise jump to step (5);
(7) Count the connected blocks (objects) in this frame;
(8) Check whether each object's point count in the image meets the preset threshold; if so, save the object's attribute information, otherwise eliminate the object and decrement the object count;
(9) Convert the frame's object positions according to this Kinect's position in the scene, and pass the converted position coordinates to the server.
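The linked-list traversal of steps (1) to (8) is in effect connected-component labelling. A compact sketch using an explicit BFS queue in place of the patent's linked list; the `min_points` value is an assumed noise threshold:

```python
from collections import deque
import numpy as np

def label_users(mask, min_points=50):
    """Return the (x, y) centroids of 8-connected foreground blobs with
    at least `min_points` pixels; smaller blobs are discarded as invalid
    objects, mirroring steps (7)-(8)."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    centers = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            queue = deque([(sy, sx)])      # start a new connected block
            seen[sy, sx] = True
            pixels = []
            while queue:
                y, x = queue.popleft()
                pixels.append((x, y))
                for dy in (-1, 0, 1):      # eight-neighbourhood
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
            if len(pixels) >= min_points:  # step (8): validity check
                xs, ys = zip(*pixels)
                centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centers

mask = np.zeros((8, 8), dtype=np.uint8)
mask[0:3, 0:3] = 1   # a 9-pixel "user"
mask[6, 6] = 1       # a 1-pixel noise speck
print(label_users(mask, min_points=4))  # [(1.0, 1.0)]
```

The returned centroids correspond to the object centre coordinates that step (9) would then transform into world coordinates before sending to the server.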
As shown in Fig. 4, the server-side user-tracking method divides into two steps:
(1) Initialization phase. Authenticate the initial targets received from each Kinect client, i.e. give each user a distinct ID, and save the users' identities and positions as R0 = {x0_i, y0_i, Id0_i | i = 1, ..., M}, where (x0_i, y0_i) is the position of user i, Id0_i is the user's ID, and M is the number of users received by the server. At the same time, define an array pre_R = {pre_x_i, pre_y_i, pre_Id_i | i = 1, ..., pre_N} holding the previous moment's user information, with pre_R = R0.
(2) Real-time tracking phase: receive user positions from the Kinect clients and confirm identities. The steps are:
(2-1) Save each received batch of data as R = {x_i, y_i, Id_i | i = 1, ..., N}, where (x_i, y_i) is the position of user i, N is the number of users currently received, and Id_i = -1 (not yet assigned);
(2-2) Match the user positions R received at the current moment against the previous moment's user information pre_R by nearest distance, obtaining the identities of the current frame's users and updating each Id_i accordingly:

D_ij = sqrt((x_i - pre_x_j)^2 + (y_i - pre_y_j)^2), j = 1, ..., pre_N

Id_i = pre_Id_{j*}, where j* = argmin_j D_ij

i.e. Id_i is the identity of the previous-moment user that best matches user i.
(2-3) Update pre_R: pre_R = R.
(2-4) If tracking has not ended, return to step (2-1); otherwise terminate the algorithm.
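Steps (2-1) to (2-3) can be sketched as follows. This is the independent per-user nearest-neighbour rule of the formulas in step (2-2); like the patent's rule, the sketch does not arbitrate the case where two current users are nearest to the same previous user.

```python
import math

def match_identities(pre_R, R):
    """Give each current position (x, y) in R the Id of the nearest
    user in pre_R, the previous moment's (x, y, Id) records."""
    matched = []
    for x, y in R:
        # D_ij: Euclidean distance to each previous-moment user j.
        nearest = min(pre_R, key=lambda p: math.hypot(x - p[0], y - p[1]))
        matched.append((x, y, nearest[2]))
    return matched

pre_R = [(0.0, 0.0, 1), (5.0, 5.0, 2)]   # previous frame: users 1 and 2
R = [(0.2, -0.1), (4.8, 5.3)]            # current frame, ids unknown
print([u[2] for u in match_identities(pre_R, R)])  # [1, 2]
```

After matching, pre_R is replaced by the newly labelled R, as step (2-3) prescribes, so each frame is matched against its immediate predecessor.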
Fig. 5 shows an example of applying the invention to a group-interaction virtual shooting game. Kinects serve as the user-capture devices, and through cooperation between the Kinects the scene is partitioned into 4 sub-regions. The users of each region are tracked in real time, the user data captured by the different Kinects are then associated, and real-time tracking and identity confirmation of the user group in the large-scale scene is completed.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention. Those skilled in the art should understand that, on the basis of the technical scheme of the invention, various modifications or variations that can be made without creative work still fall within the scope of protection of the invention.

Claims (8)

1. A user-group tracking system for large-scale scenes based on a Kinect network, characterized in that it comprises a Kinect network arrangement module, a Kinect correction module, a Kinect communication module and a user-group tracking module, wherein:
the Kinect network arrangement module covers the large-scale scene by partitioning it into several regions: the horizontal position of each Kinect, all mounted at the same height, is adjusted so that the rectangular capture regions of the Kinects on the ground splice into one seamless large region; each region is handled by one Kinect, which obtains the user depth information of that region and supplies input data to the user-group tracking module;
the Kinect correction module maps each Kinect's own coordinates into world coordinates, performing a coordinate conversion on the user information captured by each Kinect to complete the calibration of each Kinect;
the Kinect communication module merges the user information computed by each Kinect and sends it to the server, where data association and user tracking are performed;
the user-group tracking module comprises two parts, a client-side user-tracking submodule and a server-side user-tracking submodule, which track the users of each region in real time and associate the multi-user data captured by the different Kinects to complete real-time tracking and identity confirmation of the users.
2. The tracking method of the user-group tracking system of claim 1, characterized in that it comprises the following steps:
(1) open the Kinect network arrangement module: divide the large-scale scene into several sub-regions according to the effective visual range of each Kinect's depth image, lay out the Kinects in space, and obtain the depth-image information of each region;
(2) open the Kinect correction module: map each Kinect's own coordinates into world coordinates, converting the user positions computed by each Kinect into position coordinates in the large-scale scene;
(3) open the Kinect communication module: send the user positions obtained by each Kinect to the server by socket communication;
(4) open the user-group tracking module: the client-side user-tracking submodule first obtains the positions of the users in each region, the data are then sent to the server through the Kinect communication module, and the server performs data association, completing real-time tracking of the users of every region.
3. The tracking method of claim 2, characterized in that the Kinect calibration method of step (2) comprises the following steps:
(2-1) mark the effective viewing region of each Kinect: using the depth image and colour image obtained by the Kinect as reference, move target objects of different heights through the area the Kinect can cover, and calibrate the Kinect's maximum visual region on the ground;
(2-2) choose several groups of sample points at different positions within each Kinect's viewing region, distributed evenly over the region; measure the physical position of each sample point in the prescribed world coordinate system, and at the same time compute the coordinates of the corresponding sample points in the Kinect depth-image coordinate system;
(2-3) use the least-squares method to compute each Kinect's transformation matrix to the world coordinate system.
4. The tracking method of claim 2, characterized in that the client-side user-tracking method of step (4) comprises the following steps:
(4-1) obtain the current depth frame of each Kinect;
(4-2) extract the foreground image according to a preset depth threshold and binarize the image: the parts of the scene within the prescribed depth threshold are set to the foreground colour and the remainder to the background colour;
(4-3) apply morphological processing, dilating the binarized image;
(4-4) use a region-connectivity algorithm to segment the multiple targets and compute the position of each target.
5. The tracking method of claim 4, characterized in that step (4-4) comprises the following steps:
(4-4-1) traverse the image pixels, find the foreground points, and link them in a linked list;
(4-4-2) traverse the foreground points by breadth-first search (BFS), determine the attributes of each connected block, and divide the connected blocks into user objects;
(4-4-3) judge the validity of each object according to the point-count attribute saved in (4-4-2): if the count is below a preset threshold, the object is judged invalid, it is destroyed and the object count is decremented; if the count is at or above the threshold, the object is judged valid;
(4-4-4) save the obtained results for transmission to the server for central processing.
6. The tracking method of claim 5, characterized in that in step (4-4-2) the attributes of a connected block comprise its centre coordinates, its feature value and the number of points in the block.
7. The tracking method of claim 5, characterized in that the results obtained in step (4-4-4) comprise the object count and the object centre coordinates.
8. The tracking method of claim 4, characterized in that the server-side user-tracking method of step (4) comprises the following steps:
(4-a) initialization phase: authenticate the initial targets received from each Kinect client, i.e. give each user a distinct ID, and save the users' identities and positions as R0 = {x0_i, y0_i, Id0_i | i = 1, ..., M}, where (x0_i, y0_i) is the initial position of user i, Id0_i is the user's initial ID, and M is the number of users received by the server;
(4-b) real-time tracking phase: receive user positions from the Kinect clients and confirm identities; save each received batch as R = {x_i, y_i, Id_i | i = 1, ..., N}, where N is the number of users currently received, (x_i, y_i) is the position of user i, and Id_i is the user's ID; let pre_R = {pre_x_i, pre_y_i, pre_Id_i | i = 1, ..., pre_N}, initialized to R0; match the users in R against pre_R by nearest distance to obtain the identities of the current frame's users, then set pre_R = R, recording the previous moment's user information to serve as reference for real-time matching.
CN201410747738.4A 2014-12-09 2014-12-09 Large-scale-scene user-group tracking system and method based on a Kinect network Expired - Fee Related CN104474710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410747738.4A CN104474710B (en) 2014-12-09 2014-12-09 Large-scale-scene user-group tracking system and method based on a Kinect network


Publications (2)

Publication Number Publication Date
CN104474710A CN104474710A (en) 2015-04-01
CN104474710B true CN104474710B (en) 2015-09-02

Family

ID=52749396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410747738.4A Expired - Fee Related CN104474710B (en) 2014-12-09 2014-12-09 Large-scale-scene user-group tracking system and method based on a Kinect network

Country Status (1)

Country Link
CN (1) CN104474710B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105677031B (en) * 2016-01-04 2018-12-11 广州华欣电子科技有限公司 Control method and device based on gesture track recognition
CN105718066B * 2016-01-30 2018-06-01 卓汎有限公司 A flexibly combinable real-time optical positioning system
CN107093171B (en) * 2016-02-18 2021-04-30 腾讯科技(深圳)有限公司 Image processing method, device and system
CN107016651A * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 Image sharpening method, image sharpening device and electronic device
CN110013669A * 2019-03-05 2019-07-16 深圳鼎盛乐园娱乐服务有限公司 A multi-player virtual-reality racing interaction method
CN113362090A (en) * 2020-03-03 2021-09-07 北京沃东天骏信息技术有限公司 User behavior data processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102016877A (en) * 2008-02-27 2011-04-13 索尼计算机娱乐美国有限责任公司 Methods for capturing depth data of a scene and applying computer actions
CN102179048A (en) * 2011-02-28 2011-09-14 武汉市高德电气有限公司 Method for implementing realistic game based on movement decomposition and behavior analysis
CN102449577A (en) * 2009-06-01 2012-05-09 微软公司 Virtual desktop coordinate transformation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947579B2 (en) * 2002-10-07 2005-09-20 Technion Research & Development Foundation Ltd. Three-dimensional face recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102016877A (en) * 2008-02-27 2011-04-13 索尼计算机娱乐美国有限责任公司 Methods for capturing depth data of a scene and applying computer actions
CN102449577A (en) * 2009-06-01 2012-05-09 微软公司 Virtual desktop coordinate transformation
CN102179048A (en) * 2011-02-28 2011-09-14 武汉市高德电气有限公司 Method for implementing realistic game based on movement decomposition and behavior analysis

Also Published As

Publication number Publication date
CN104474710A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
CN104474710B (en) Large-scale-scene user-group tracking system and method based on a Kinect network
US10810798B2 (en) Systems and methods for generating 360 degree mixed reality environments
EP3265864B1 (en) Tracking system for head mounted display
KR102077108B1 (en) Apparatus and method for providing contents experience service
US8924583B2 (en) Method, apparatus and system for viewing content on a client device
CN103390287B (en) Device and method for augmented reality
CN102726051B (en) Virtual plug-in unit in 3D video
US20120250980A1 (en) Method, apparatus and system
GB2575843A (en) Method and system for generating an image
CN105373224A Mixed-reality game system based on pervasive computing and method thereof
US20220180570A1 (en) Method and device for displaying data for monitoring event
CN105430471B Method and device for displaying bullet-screen comments in a video
US8947534B2 (en) System and method for providing depth imaging
US20160012644A1 (en) Augmented Reality System and Method
CN103106404A (en) Apparatus, method and system
KR101739220B1 (en) Special Video Generation System for Game Play Situation
CN107665231A (en) Localization method and system
KR20140023136A (en) Putting information display system and method for golf on the air
Sturm From idyllic past-time to spectacle of accelerated intensity: Televisual technologies in contemporary cricket
CN106408666A (en) Mixed reality demonstration method
CN102918559A (en) Image-processing apparatus and image-processing method for expressing lie on a green, and virtual golf simulation apparatus using same
JP2014048864A (en) Display control system, game system, control method for display control system, display control device, control method for display control device, and program
US20120194736A1 (en) Methods and Apparatus for Interactive Media
KR20180068254A (en) Apparatus and method for providing game video
CN117547817A (en) Position adjustment method and device for virtual lens, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150902

Termination date: 20181209

CF01 Termination of patent right due to non-payment of annual fee