CN103785169A - Mixed reality arena - Google Patents

Mixed reality arena

Info

Publication number
CN103785169A
CN103785169A (application CN201310757252.4A)
Authority
CN
China
Prior art keywords
user
virtual
avatar
display device
physical space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310757252.4A
Other languages
Chinese (zh)
Inventor
S·拉塔
D·麦克洛克
K·兹努达
A·克劳斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to CN201310757252.4A priority Critical patent/CN103785169A/en
Publication of CN103785169A publication Critical patent/CN103785169A/en
Pending legal-status Critical Current

Abstract

A computing system comprises a see-through display device, a logic subsystem, and a storage subsystem holding instructions. When executed by the logic subsystem, the instructions display, on the see-through display device, a virtual arena, a user-controlled avatar, and an opponent avatar, such that the virtual arena appears integrated into the physical space when the physical space is viewed through the see-through display device. In response to receiving user input, the instructions also display an updated user-controlled avatar on the see-through display device.

Description

Mixed reality arena
Background
Fighting games are typically presented as predefined combat environments on the two-dimensional fixed screen of a video game system. The user typically provides control over the fighting game through a game controller connected to the video game system.
Summary
Disclosed herein are embodiments for providing a mixed reality fighting game at a computing system. For example, the computing system may comprise a see-through display device, a logic subsystem, and a storage subsystem holding instructions that, when executed by the logic subsystem, display on the see-through display device a virtual arena, a user-controlled avatar, and an opponent avatar, such that the virtual arena appears integrated into the physical space when the physical space is viewed through the see-through display device. In response to receiving user input, the instructions may also display an updated user-controlled avatar on the see-through display device based on the user input.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Brief Description of the Drawings
FIG. 1A shows a top view of a user wearing a see-through display device in a physical space.
FIG. 1B shows an unaugmented first-person perspective of the user of FIG. 1A.
FIG. 1C shows a first-person perspective of the user of FIG. 1A while the see-through display device displays a virtual arena.
FIG. 2 shows an example augmentation of a physical space from a third-person perspective according to an embodiment of the present disclosure.
FIG. 3 shows an example augmentation of a physical space from a first-person perspective according to an embodiment of the present disclosure.
FIG. 4 shows an example opponent computing system providing a fighting game according to an embodiment of the present disclosure.
FIG. 5 shows an example method of integrating a virtual arena into a physical space according to an embodiment of the present disclosure.
FIG. 6 shows an example head-mounted display according to an embodiment of the present disclosure.
FIG. 7 shows an example computing system according to an embodiment of the present disclosure.
Detailed Description
Fighting games are often implemented as two-dimensional, predefined virtual environments with little connection to the real world. Such games limit the immersion a user can experience and tie the user to a fixed screen and video game system. Accordingly, the disclosed embodiments are directed to a fighting game that brings the user "ringside," or even directly into the fight, by incorporating the fight into the user's physical environment. For example, as described in more detail below, by displaying a virtual arena and one or more avatars on a see-through display, the virtual arena and avatars can be integrated into the user's physical environment. Such integration allows the user to interact with the physical environment in order to control the user-controlled avatar in the fighting game.
FIG. 1A schematically shows a top view of a user 100 utilizing a computing system 101 that includes a see-through display device 102 in a physical space 104. As used herein, the term physical space may refer to the user's real-world physical environment, such as a room. Likewise, a physical location may refer to the position of a user, real-world object, and/or virtual object within the physical space. A physical space may comprise virtually any indoor or outdoor environment. Lines 106a and 106b indicate the user's field of view through the see-through display device. FIG. 1A also shows real-world objects 108a, 108b, 108c, and 108d within the physical space 104 in the user's field of view.
FIG. 1B shows the first-person perspective of the user 100 viewing real-world objects 108a, 108b, 108c, and 108d through the see-through display device 102. In FIG. 1B, the see-through display device is not visibly presenting any virtual objects; thus, the user sees only real-world objects. The user sees such real-world objects because light reflected from the real-world objects can pass through the see-through display to the user's eyes.
The computing system 101 may be configured to provide a mixed reality fighting game. For example, FIG. 1C shows the same first-person perspective of user 100 as FIG. 1B, but with the see-through display device visually presenting virtual objects corresponding to the mixed reality fighting game. In particular, the see-through display device 102 is displaying a virtual arena 110, a user-controlled avatar 112, and an opponent avatar 114. From the user's vantage point, the virtual arena and the avatars appear integrated with the physical space 104.
In particular, FIG. 1C shows the virtual arena 110 rendered so that the virtual arena appears to be placed on the floor of the room. For example, the virtual arena 110 is rendered to partially occlude real-world objects 108a, 108b, and 108c, and to fully occlude real-world object 108d. Furthermore, the avatars 112 and 114 are rendered so that the avatars appear to stand within the virtual arena.
The virtual arena 110, user-controlled avatar 112, and opponent avatar 114 are provided as non-limiting examples. Virtual arenas and avatars may be rendered with virtually any appearance without departing from the scope of this disclosure. Furthermore, additional or alternative virtual objects may be presented on the see-through display device 102 so as to appear integrated into the physical space 104.
To integrate the virtual arena 110 into the physical space 104, the see-through display device 102 may image the physical space via one or more image sensors. The physical space may include topographical features and other features that define the surface configuration of the physical space. Information about such features may be used, for example, to determine a suitable opening or flat area in which to place the virtual arena.
In some embodiments, information about the physical space may be detected by various sensors included in the computing system 101 of the see-through display device 102 and/or an external computing system (e.g., an opponent's computing system). For example, the computing system 101 may identify features of the physical space using surface reconstruction, room mapping, location-based services, and the like. In one example, the position of the physical space may be determined via GPS, cellular triangulation, a world coordinate system provided by a network service, etc. Based on the determined position of the physical space, the computing system 101 may receive information about the physical space, for example from a server. In some embodiments, the computing system 101 may include a depth camera. The depth camera may image, via an image sensor, the physical space 104 including one or more topographical features. The depth camera may also determine depth values for objects in the physical space 104 (e.g., objects 108a, 108b, 108c, and 108d, and each pixel forming those objects).
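The per-pixel depth values described above can be reduced to a coarse map of which floor cells are blocked by objects. A minimal sketch, assuming a floor-reference depth image is available; the function name, the reference scheme, and the tolerance are illustrative assumptions, not part of the patent:

```python
FLOOR_DEPTH_TOLERANCE_M = 0.05  # returns this much shorter than the floor count as blocked

def occupancy_from_depth(depth_rows, floor_depth_rows, tolerance=FLOOR_DEPTH_TOLERANCE_M):
    """Reduce a per-pixel depth image to a 2D occupancy grid.

    depth_rows: measured depth per pixel (meters from the camera).
    floor_depth_rows: expected depth of the bare floor at each pixel.
    A pixel whose measured depth is noticeably shorter than the floor
    reference is occluded by an object, so its cell is marked occupied.
    """
    return [
        [floor - measured > tolerance
         for measured, floor in zip(d_row, f_row)]
        for d_row, f_row in zip(depth_rows, floor_depth_rows)
    ]

depth = [[2.0, 1.2], [2.5, 2.5]]   # short return at top-right: an object
floor = [[2.0, 2.0], [2.5, 2.5]]   # floor-only reference depths
print(occupancy_from_depth(depth, floor))  # -> [[False, True], [False, False]]
```

A real system would derive the floor reference from surface reconstruction rather than a stored image; the grid form is just a convenient input for open-area search.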
The computing system 101 may utilize the information about the topography and other features of the physical space to identify open areas in the physical space. When the physical space is viewed through the see-through display device, the see-through display device 102 may display the virtual arena integrated into the physical space. In some embodiments, the see-through display device displays a virtual arena having one or more interactive elements integrated with one or more objects in the physical space. The computing system 101 may identify a real object in the physical space 104, and that real object may be visually augmented to serve as a mixed reality interactive element. For example, the virtual arena may include a virtual rock integrated with a park bench in the physical space. In this example, the park bench provides the physical structure but appears to be a rock. In another example, the computing system 101 may identify a wall, and the wall may be augmented with virtual fencing or virtual ropes to serve as a boxing ring.
In some embodiments, the virtual arena may define an "in-bounds" region as the region inside the virtual arena. In such embodiments, the region outside the virtual arena may be considered "out of bounds." For example, one or more physical and/or virtual objects may be virtually smashed within the "in-bounds" region, while physical and/or virtual objects in the "out of bounds" region may not be smashed. In alternative embodiments, the fight may venture outside the arena, in which case the entire physical space is considered "in bounds."
The virtual arena 110 may be configured automatically by the computing system 101. In some embodiments, the virtual arena may be sized and placed based on one or more topographical features of the physical space. Additional features of the arena may be configured based on features of the physical space (e.g., as identified by a depth camera of the computing system), such as shape, topography, obstacles, and the like. For example, the virtual arena may be sized and placed so that the floor of the virtual arena is integrated with the ground and/or floor of the physical space. In other examples, the arena may be sized and placed so that the floor of the arena is above the ground and/or floor of the physical space, similar to a raised arena such as a boxing ring.
The computing system 101 may automatically detect an open area of the physical space 104 and automatically scale the virtual arena to fit the open area. For example, an open area may be defined as a region of the physical space with minimal obstacles, such that the virtual arena is sized and placed to occupy a location having fewer objects than an object threshold. In alternative embodiments, an open area may be defined in any suitable manner. For example, an open area may be defined as a region of the physical space containing a maximum number of physical objects that can serve as interactive virtual objects.
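Detecting an open area of the kind described above can be sketched with the classic maximal-square dynamic program over an occupancy grid. This is a toy stand-in, not the patent's method — a real system would search reconstructed surfaces, and all names here are illustrative:

```python
def largest_open_square(occupied):
    """Find the largest obstacle-free square in a 2D occupancy grid
    (True = cell blocked by an object).

    Returns (side, row, col), where (row, col) is the top-left cell of
    the open square. best[r][c] holds the side of the largest open
    square whose bottom-right corner sits at (r, c).
    """
    rows, cols = len(occupied), len(occupied[0])
    best = [[0] * cols for _ in range(rows)]
    side, corner = 0, (0, 0)
    for r in range(rows):
        for c in range(cols):
            if occupied[r][c]:
                continue  # blocked cells contribute no open square
            if r == 0 or c == 0:
                best[r][c] = 1
            else:
                best[r][c] = 1 + min(best[r-1][c], best[r][c-1], best[r-1][c-1])
            if best[r][c] > side:
                side = best[r][c]
                corner = (r - side + 1, c - side + 1)
    return side, corner[0], corner[1]

# A 4x5 room: True marks cells occupied by furniture.
room = [
    [False, False, False, False, True],
    [False, False, False, False, False],
    [False, False, False, False, False],
    [True,  False, False, False, False],
]
print(largest_open_square(room))  # -> (3, 0, 0): a 3x3 clear patch at the top-left
```

The returned patch would then be a candidate location at which to size and place the arena.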
In additional or alternative embodiments, the virtual arena may be scaled as an upward-bounded function of the physical space. The upper bound may indicate a maximum arena parameter, such as a maximum arena size. The maximum arena size may be selected, for example, so that the arena does not appear larger than life. The maximum arena size may be constrained by a maximum width, depth, and/or height. In some embodiments, the virtual arena may be scaled as a function of the size of the physical space, up to the size limit. For example, the virtual arena may be scaled to occupy as much of the physical space as possible without exceeding the maximum arena size (e.g., a full-scale arena). In additional or alternative embodiments, the virtual arena may be scaled to occupy a specified portion of the physical space, up to the upper bound. In yet another embodiment, the virtual arena may be scaled as a function of parameters of the physical space (e.g., size, topographical features, objects in the physical space, etc.), up to the upper bound. In other words, in one example, a maximum arena size may be defined, and the arena may be scaled to fit the physical space or to meet the maximum arena size, whichever comes first.
As a specific example of an upward-bounded function, the maximum arena size may be 20 feet × 20 feet. If the arena is placed in a room with an open area measuring 10 feet × 10 feet, the arena may be scaled so that it appears to measure 10 feet × 10 feet. Alternatively, if the arena is placed in a room with an open area measuring 30 feet × 30 feet, the arena may be scaled so that it appears to measure 20 feet × 20 feet, because that is the maximum size defined for the arena. However, the arena may be scaled by other upward-bounded functions without departing from the scope of this disclosure.
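The 20-foot example above reduces to clamping the measured open area at the defined maximum. A minimal sketch (the constant, function name, and units are illustrative):

```python
MAX_ARENA_FT = 20.0  # maximum arena side length, per the example above

def arena_side(open_area_ft, max_side_ft=MAX_ARENA_FT):
    """Upward-bounded scaling: the arena grows to fill the open area,
    but never exceeds the defined maximum size."""
    return min(open_area_ft, max_side_ft)

print(arena_side(10.0))  # 10x10 ft open area -> 10.0 (arena fills the space)
print(arena_side(30.0))  # 30x30 ft open area -> 20.0 (clamped at the maximum)
```

The same clamp applies per dimension when width, depth, and height are bounded separately.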
In other embodiments, the arena may be configured by the user, with the user selecting one or more parameters of the virtual arena. For example, the user may provide user input to select the size, position, orientation, shape, etc. of the virtual arena. In some embodiments, the user may point to a location in the room, and a selected virtual arena may be placed at that location. During configuration, the virtual arena may have an altered appearance to indicate that the virtual arena is being customized. Once the parameters of the virtual arena are confirmed, the appearance of the virtual arena may change to indicate that configuration is complete. The virtual arena may also be placed modularly, with the user selecting a position for each boundary of the virtual arena until the virtual arena is defined. For example, the user may point to a position for each modular boundary point of the virtual arena to define the boundaries of the virtual arena.
In some embodiments, any parameters of the virtual arena not selected by the user may be configured automatically by the computing system 101. For example, the user may select a position for a predefined virtual arena, and/or the user may select one of a plurality of predefined virtual arenas. In some examples, a predefined virtual arena may define one or more parameters. For example, a predefined virtual arena may have a particular shape that is scaled to fit the room. In another example, a predefined virtual arena may have a particular size and be placed at a location in the physical space that accommodates the virtual arena. In a further example, the user may select which real-world objects are interactive within the virtual arena.
The virtual arena may be fully enclosed, having boundaries that define a closed region of virtual space and/or physical space. In alternative embodiments, the virtual arena may be open on one or more sides and/or may occupy different regions of the physical space. For example, portions of the virtual arena may be separated by obstacles, for example to simulate grappling atop raised platforms separated by a virtual abyss. In further embodiments, the virtual arena may be unbounded, such that the virtual arena may occupy all of the physical space viewed by the user through the see-through display device.
The computing system 101 may display one or more avatars in the virtual arena via the see-through display device 102. In some embodiments, the one or more avatars may be scaled as an upward-bounded function of the physical space. As described in more detail above, an upward-bounded function may define scaling such that an avatar has a size based on the size of the physical space, up to a maximum size (e.g., appearing life-size). The upward-bounded function for an avatar may differ from the upward-bounded function for the arena, so that the avatar is scaled independently of the arena. Alternatively, the upward-bounded function for an avatar may be the same as that for the arena, so that the avatar is scaled in the same manner as the arena. For example, an avatar may have an independently defined maximum size. Alternatively, an avatar may have a maximum size that equals or is derived from the maximum size of the arena. Furthermore, each avatar may have an independent upward-bounded function, so that each avatar is scaled independently of the other avatars and/or the arena. Alternatively, the avatars may share one or more elements of an upward-bounded function. For example, each avatar may have the same maximum size.
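The independent and shared bounds described above can be sketched as a table of per-entity maxima, with each entity clamped separately; the entities, numbers, and function are illustrative assumptions only:

```python
ARENA_MAX_FT = 20.0

# Each entity carries its own upper bound; a bound may be defined
# independently or derived from the arena's maximum.
BOUNDS_FT = {
    "arena": ARENA_MAX_FT,
    "avatar_user": 6.0,                      # independently defined life-size cap
    "avatar_opponent": ARENA_MAX_FT * 0.3,   # cap derived from the arena's maximum
}

def scaled_size(entity, space_measure_ft):
    """Upward-bounded scaling per entity: grow with the physical-space
    measure, clamp at that entity's own maximum."""
    return min(space_measure_ft, BOUNDS_FT[entity])

print(scaled_size("arena", 30.0))        # -> 20.0
print(scaled_size("avatar_user", 30.0))  # -> 6.0
```

Sharing a bound is then just a matter of pointing two entries at the same value.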
The avatars may include a user-controlled avatar and one or more opponent avatars. The user-controlled avatar may be controlled by the user in any suitable manner. The one or more opponent avatars may be controlled by other users and/or by game artificial intelligence.
In some embodiments, the fighting game may include teams, whereby two or more users fight one or more opponents cooperatively. In alternative embodiments, the fighting game may be an open free-for-all, in which each player fights every other player.
In some embodiments, the appearance of the user-controlled avatar may be predefined, or the appearance may be selected from a plurality of predefined user-controlled avatar appearances. In other embodiments, the appearance of the user-controlled avatar may be derived from the appearance of the user providing the user input. For example, the user may be imaged by a camera (e.g., a depth camera and/or color camera), and one or more physical features of the user may be mapped to the user-controlled avatar.
The user may provide user input to control the user-controlled avatar and/or other elements of the fighting game. In response to receiving the user input, the computing system 101 may display an updated user-controlled avatar on the see-through display device 102 based on the user input. For example, an attack move such as a punch or kick may be indicated by the user input, and in response, the see-through display may show the user-controlled avatar performing the attack move.
In some embodiments, the updated image may show the user-controlled avatar in a new position, orientation, pose, etc. In additional or alternative examples, the updated user-controlled avatar may be animated to illustrate the command indicated by the user input.
The user may provide user input indicating one or more commands (e.g., movement commands; attack or defense commands; camera control commands that modify the view of the virtual arena; game commands, such as ending the fight; etc.). For example, attack commands may include various fighting moves, such as kicks, punches, and virtual magic attacks such as fireballs. Attack commands may also include combination moves, in which a sequence of user inputs is received and the see-through display device displays an updated user-controlled avatar performing an enhanced attack move.
User input controlling the user-controlled avatar may be received through a variety of methods and devices. In some embodiments, user input may be received as voice commands provided to one or more voice capture devices. In such embodiments, a microphone may detect voice commands from the user to provide the user input. For example, the user may act as a virtual trainer by commanding the user-controlled avatar with voice commands (e.g., "punch," "dodge left," "advance," etc.). In some embodiments, user input may be received via a game controller. For example, the user may provide input by actuating one or more buttons, joysticks, triggers, switches, etc. of the game controller. In some embodiments, user input may be received via a spatial position detector (e.g., an inertial measurement unit). For example, one or more inertial measurement units may be attached to one or more locations on the user to detect motion of those locations and interpret the user's movement. For example, an inertial measurement unit may be attached to the user's finger to detect a punch thrown by the user's hand.
In some embodiments, user input may be received via a gesture input detection device configured to observe gestures of the user providing the user input. For example, the user may perform a gesture detected by the gesture input detection device, such as a punching move. The gesture input detection device may comprise one or more devices capable of detecting and recognizing gestures. For example, the gesture input detection device may comprise a color camera, a depth camera, an accelerometer, an inertial measurement unit, a touch-sensitive device, etc. In some embodiments, a gesture may be detected by a camera of an opponent's see-through display device. In another embodiment, user input may be received via an eye-tracking detection device that determines and recognizes movement of the user's eyes. For example, an inward-facing camera of the see-through display device may detect the user's gaze.
In some embodiments, multiple devices may provide user input simultaneously. For example, the user may wear an inertial measurement unit to provide a punch while issuing the voice command "advance." Accordingly, the user-controlled avatar may perform the punching move while advancing. Furthermore, a single device may include multiple user input capture capabilities. For example, a game controller may include an accelerometer to recognize particular gestures. Thus, the user may utilize the game controller to provide user input both by pressing buttons and by gesturing.
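Merging simultaneous inputs from several devices, as in the punch-while-advancing example, can be sketched as a small command merger; the device names and command vocabulary are illustrative assumptions:

```python
def merge_inputs(events):
    """Combine simultaneous inputs from several devices into one avatar
    command, e.g. an IMU punch while the voice command 'advance' is heard.

    events: list of (device, value) pairs observed in the same frame.
    """
    command = {"move": None, "attack": None}
    for device, value in events:
        if device == "voice" and value in ("advance", "retreat"):
            command["move"] = value
        elif device == "imu" and value == "punch":
            command["attack"] = value
        elif device == "controller":
            # A single controller can carry several capture capabilities
            # (buttons plus accelerometer gestures), so it sends a dict.
            command.update(value)
    return command

print(merge_inputs([("imu", "punch"), ("voice", "advance")]))
# -> {'move': 'advance', 'attack': 'punch'}: punch while advancing
```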
The user may view and participate in the fight from multiple viewing angles and perspectives. For example, a viewing angle and/or perspective may be selected for the user based on the physical space. In another example, the user may select a preferred viewing angle and/or perspective. In some embodiments, the user may dynamically switch between viewing angles and/or perspectives. In alternative or additional embodiments, the system may automatically and dynamically switch between viewing angles and/or perspectives in response to movement of one or more avatars or the user.
FIG. 2 shows an example of a first viewing angle (a "sideline" angle) from a third-person perspective. In the third-person perspective, the user-controlled avatar 200 may be positioned in front of the user 202 when viewed through the see-through display device 102. The virtual arena 204, user-controlled avatar 200, and opponent avatar 206 are shown in dashed lines to indicate that they are virtual.
In the view shown in FIG. 2, in some embodiments, the position of the user-controlled avatar 200 may be dynamically updated based on the position of the user 202 providing the user input that controls the user-controlled avatar. In other words, the third-person perspective may have a fixed viewpoint, such that the user-controlled avatar is maintained at the same orientation and/or position relative to the user. For example, the user-controlled avatar may remain at a position in front of the user. In some examples, the fixed viewpoint may cause the see-through display device to show the back of the user-controlled avatar to the user. In this arrangement, the user may provide gestures or controls that the user-controlled avatar directly imitates. For example, if the user pushes a joystick to the right, the see-through display device may show an updated user-controlled avatar that has moved to the avatar's right. In other examples, the fixed viewpoint may cause the see-through display device to show the user-controlled avatar facing the user. In this arrangement, the user may provide gestures that are mirrored by the user-controlled avatar. For example, if the user pushes a joystick to the right, the see-through display device may show an updated user-controlled avatar that has moved to the avatar's left.
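The direct-versus-mirrored joystick mapping described above comes down to a sign flip that depends on which way the avatar faces; a minimal illustrative sketch:

```python
def avatar_dx(joystick_dx, facing_user):
    """Map a horizontal joystick deflection to avatar movement under a
    fixed third-person viewpoint.

    When the avatar's back is to the user, input is imitated directly;
    when the avatar faces the user, left/right is mirrored.
    """
    return -joystick_dx if facing_user else joystick_dx

print(avatar_dx(+1, facing_user=False))  # back to user: right -> avatar's right (1)
print(avatar_dx(+1, facing_user=True))   # facing user: right -> avatar's left (-1)
```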
In some embodiments, when the user has a third-person perspective of the fight, the position of the user-controlled avatar may be updated independently of the user's position. In other words, the third-person perspective may have a dynamic viewpoint. For example, the user may move around the arena while the position and orientation of the user-controlled avatar and/or the arena remain the same. This arrangement may allow the user to obtain a desired viewpoint during the fight. For example, the user may move around the arena to watch the fight from different angles in order to identify possible fighting strategies.
As described above, the user-controlled avatar may be viewed from a third-person viewpoint. In other words, the user-controlled avatar is physically separate from the user. In other embodiments, the user-controlled avatar may be implemented as an overlay on the user.
For example, FIG. 3 shows an example first-person perspective viewed through the see-through display device 102. In the first-person perspective, the see-through display augments reality to change the user's appearance. In this perspective, clothing elements (e.g., boxing gloves 300 and/or trunks) may cover the user's hands 302 and/or legs when the user views them through the see-through display device 102. In other words, the see-through display may adjust the user's appearance by displaying features of the user-controlled avatar as overlays on the corresponding features of the user.
The first-person perspective may be displayed with a fixed or dynamic viewpoint, as described in detail above. A dynamic viewpoint for this perspective may be used to allow the user-controlled avatar to move through a small physical space. For example, the user may change the virtual position of the user-controlled avatar via a particular user input without changing his or her physical position.
In any viewing angle or viewpoint, the user-controlled avatar may be displayed with a pose based on the pose of the user providing the user input. Alternatively, the user-controlled avatar may be displayed with a pose independent of the pose of the user providing the user input.
An opponent avatar (e.g., opponent avatar 114 of FIG. 1C) may be controlled by artificial intelligence (AI) provided by a computing device. An opponent avatar may also be controlled by an opponent user providing opponent user input. For example, in some embodiments, the opponent user may be located in the same physical space as the primary user. In such embodiments, the opponent user may view a representation of the virtual arena as it is displayed on the opponent's see-through display device. The opponent's see-through display device may display the representation of the virtual arena at the same position in the physical space as the virtual arena displayed by the primary user's see-through display device, so that both users perceive the same arena and avatars in the physical space.
In some embodiments, the opponent user may be located in a different physical space than the primary user. For example, the opponent user may view a representation of the virtual arena with a see-through display device. In this way, different see-through displays are used to create the illusion of the same arena and avatars in two different physical spaces.
As another example, the opponent user may view a representation of the virtual arena on a stationary display (e.g., a television or computer display). For example, FIG. 4 shows an opponent 400 participating in the fight via a computing system (e.g., video game system 402) and viewing the fight on the opponent's stationary display 404.
In some embodiments, the virtual arena may be configured by the primary user and placed at a position corresponding to the primary user's physical space. Information about the parameters of the arena, the physical space, and the one or more avatars may then be sent to the video game system 402. The representation of the virtual arena 406 displayed on the opponent's stationary display 404 may then reflect the characteristics of the primary user's physical space. In alternative embodiments, the virtual arena may be configured and placed at a position corresponding to the opponent user's physical space. Information about the parameters of the arena, the physical space, and the one or more avatars may then be sent to the primary user's see-through display, which may display a representation of the virtual arena reflecting the characteristics of the opponent's physical space.
When participating in a live multiplayer scenario as described above, differences between the physical spaces of two or more users may be accommodated by mapping the physical characteristics of the physical spaces to each other. For example, the primary user's physical space may be selected to host the arena. Accordingly, in some embodiments, virtual objects representing physical objects in the primary user's physical space may be virtually incorporated into the opponent's physical space. For example, a table located in the primary user's physical space may be displayed at the corresponding position of the opponent's physical space on the opponent's see-through display device. In other embodiments, only physical objects represented by interactive virtual objects may be displayed on the opponent's see-through display device.
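Sending arena parameters and mapped physical objects to the remote participant's system, with the option of including only the interactive ones, might be sketched as follows; the field names and JSON transport are assumptions for illustration, not part of the patent:

```python
import json

def arena_update(arena, objects, interactive_only=True):
    """Package arena parameters and mapped physical objects for the
    remote participant's system. Optionally include only objects that
    are represented by interactive virtual elements."""
    payload = {
        "arena": arena,
        "objects": [o for o in objects
                    if o["interactive"] or not interactive_only],
    }
    return json.dumps(payload)

arena = {"size_ft": [10, 10], "origin": [1.5, 0.0, 2.0]}
objects = [
    {"name": "table", "pos": [3, 0, 4], "interactive": True},
    {"name": "lamp",  "pos": [0, 0, 1], "interactive": False},
]
msg = arena_update(arena, objects)
print(json.loads(msg)["objects"])  # only the interactive table is sent
```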
FIG. 5 shows an example method 500 of integrating a virtual arena into a physical space according to an embodiment of the present disclosure. At 502, method 500 comprises imaging a physical space including topographical features. In some embodiments, at 504, features of the physical space may be further identified by determining depth values for objects in the physical space. At 506, method 500 comprises displaying a virtual arena integrated into the physical space. In one example, at 508, the size and position of the virtual arena may be configured based on user input. In an additional or alternative example, at 510, the computing system may automatically place the virtual arena based on the physical space. At 512, placement may comprise automatically scaling the virtual arena as an upward-bounded function of the physical space.
At 514, method 500 comprises displaying one or more avatars in the virtual arena. In one example, at 516, an avatar may be scaled as an upward-bounded function of the physical space. At 518, an avatar may be displayed with an appearance derived from the user's appearance. At 520, method 500 comprises controlling the user-controlled avatar based on user input. Next, at 522, method 500 comprises displaying an updated user-controlled avatar based on the user input.
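The steps of method 500 can be sketched as a small pipeline; every helper here is an illustrative stand-in under toy assumptions, not an API from the patent:

```python
def identify_features(depth_image):
    # 502/504: image the space and derive depth-based features
    return {"open_side_ft": min(len(depth_image), len(depth_image[0]))}

def place_arena(features, max_side_ft=20.0):
    # 506-512: automatic placement with upward-bounded scaling
    return {"side_ft": min(features["open_side_ft"], max_side_ft)}

def step_avatar(avatar, command):
    # 520/522: control the avatar, then display the updated avatar
    return dict(avatar, pose=command)

def integrate_arena(depth_image, command=None):
    """End-to-end sketch of method 500: image the space, display a
    sized arena, display and update the user-controlled avatar."""
    features = identify_features(depth_image)
    arena = place_arena(features)
    avatar = {"name": "user", "pose": "idle"}   # 514-518
    if command:
        avatar = step_avatar(avatar, command)
    return arena, avatar

print(integrate_arena([[0] * 10] * 10, "punch"))
# -> ({'side_ft': 10}, {'name': 'user', 'pose': 'punch'})
```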
Fig. 6 shows a non-limiting example of see-through display device 102, which includes a see-through display 602. For example, see-through display device 102 may be a head-mounted see-through display device. See-through display 602 is at least partially transparent, allowing light to pass through it to the user's eyes. Furthermore, the see-through display is configured to visually augment the appearance of the physical space for a user viewing the physical space through it. For example, as the user looks through the see-through display, it may display virtual objects perceivable by the user. Thus the user can view virtual objects that do not exist in the physical space at the same time as the user views the physical space. This creates the illusion that the virtual objects are part of the physical space.
See-through display device 102 also includes a virtual reality engine 604. Virtual reality engine 604 may be configured to cause the see-through display to visually present virtual objects in the form of a virtual arena, one or more avatars, or other virtual objects. A virtual object may simulate the appearance of a real-world object. To a user viewing the physical space through the see-through display, the virtual objects appear integrated with the physical space. For example, virtual objects and/or other images displayed via the see-through display may be positioned relative to the user's eyes such that the displayed virtual objects and/or images appear to the user to occupy particular locations in the physical space. In this way, the user can view objects that do not actually exist in the physical space. The virtual reality engine may include software, hardware, firmware, or any combination thereof.
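Positioning a virtual object relative to the user's eyes so that it appears to occupy a fixed location in the physical space amounts to a per-eye projection of world coordinates into display coordinates. The pinhole model below is a simplification for illustration; the disclosure does not specify the engine's actual rendering pipeline.

```python
def project(point, eye, focal=1.0):
    """Project a 3-D point (physical-space coordinates, meters) into
    2-D display coordinates relative to one eye, using a simple
    pinhole camera model."""
    x, y, z = (p - e for p, e in zip(point, eye))
    if z <= 0:
        return None  # point is behind the viewer; nothing to draw
    return (focal * x / z, focal * y / z)
```

Repeating the projection for each eye (two slightly offset `eye` positions) is what would give the virtual object an apparent depth in the physical space.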
See-through display device 102 may include a speaker subsystem 606 and a sensor subsystem 608. The sensor subsystem may include a variety of different sensors in different embodiments. As non-limiting examples, the sensor subsystem may include a microphone 610, one or more forward-facing (away from the user) infrared and/or visible-light cameras 612, and/or one or more rearward-facing (toward the user) infrared and/or visible-light cameras 614. The forward-facing camera(s) may include one or more depth cameras, and/or the rearward-facing camera(s) may include one or more eye-tracking cameras. In some embodiments, an on-board sensor subsystem may communicate with one or more off-board sensors that send observation information to the on-board sensor subsystem. For example, a depth camera used by a game console may send depth maps and/or modeled virtual skeletons to the sensor subsystem of the head-mounted display.
See-through display device 102 may also include one or more features that allow it to be worn on a user's head. In the illustrated example, see-through display device 102 takes the form of eyeglasses and includes a nose bridge 616 and temples 618a and 618b. In other embodiments, the head-mounted display may include a hat or helmet with a see-through visor in front. Furthermore, although described in the context of a head-mounted see-through display, the concepts described herein may also be applied to see-through displays that are not head-mounted (e.g., a windshield) and to non-see-through displays (e.g., an opaque display that presents real objects observed by a camera together with virtual objects not within the camera's field of view).
See-through display device 102 may also include a communication subsystem 620. Communication subsystem 620 may be configured to communicate with one or more off-board computing devices. As an example, the communication subsystem may be configured to wirelessly receive video streams, audio streams, coordinate information, virtual object descriptions, and/or other information for presenting a virtual arena.
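Receiving virtual object descriptions over the communication subsystem implies some serialized wire format. As a sketch, arena state might be exchanged as JSON; the field names and structure here are assumptions for illustration, not the format used by the disclosure.

```python
import json

def encode_arena_update(arena_origin, avatars):
    """Serialize a minimal arena state for wireless transmission."""
    return json.dumps({"arena": arena_origin, "avatars": avatars})

def decode_arena_update(payload):
    """Recover the arena state on the receiving display device."""
    return json.loads(payload)

# One player's device encodes state; the opponent's device decodes it.
wire = encode_arena_update([0.0, 0.0, 0.0], {"user": [1.0, 2.0]})
```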
In some embodiments, the methods and processes described above may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer application or service, an application programming interface (API), a library, and/or other computer program product.
Fig. 7 schematically shows a non-limiting embodiment of a computing system 700 that can enact one or more of the methods and processes described above. Computing system 700 is shown in simplified form. It will be understood that virtually any computer architecture may be used without departing from the scope of this disclosure. In different embodiments, computing system 700 may take the form of a head-mounted see-through display device, gaming device, mobile computing device, mobile communication device (e.g., a smartphone), desktop computer, laptop computer, tablet computer, home-entertainment computer, network computing device, mainframe computer, server computer, etc.
Computing system 700 includes a logic subsystem 702 and a storage subsystem 704. Computing system 700 may optionally include a display subsystem 706 (e.g., a see-through display), an input subsystem 708, a communication subsystem 710, and/or other components not shown in Fig. 7.
Logic subsystem 702 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for serial, parallel, or distributed processing. The logic subsystem may optionally include individual components that are distributed among two or more devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage subsystem 704 includes one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 704 may be transformed, for example, to hold different data.
Storage subsystem 704 may include removable media and/or built-in devices. Among other things, storage subsystem 704 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.). Storage subsystem 704 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage subsystem 704 includes one or more physical, non-transitory devices. However, in some embodiments, aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
In some embodiments, aspects of logic subsystem 702 and storage subsystem 704 may be integrated together into one or more hardware-logic components through which the functionality described herein may be enacted. Such hardware-logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs).
Term " program " and " engine " can be used for describing the aspect that is embodied as the computing system 700 of carrying out specific function.In some cases, can come instantiation procedure or engine by the logic subsystem 702 of 704 hold instructions of storage subsystem by carrying out.Will be appreciated that: can be from identical application program, service, code block, object, storehouse, routine, API, function etc. different program and/or the engines of instantiation.Equally, can be by identical program and/or engines of instantiation such as different application programs, service, code block, object, routine, API, functions.Term " program " and " engine " can comprise independent or organize executable file, data file, storehouse, driver, script, data-base recording etc. more.
It will be appreciated that a "service", as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server computing devices.
When included, display subsystem 706 may be used to present a visual representation of data held by storage subsystem 704. This visual representation may take the form of images that appear to augment the physical space, thus creating the illusion of mixed reality. Because the methods and processes described herein change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 706 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 706 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 702 and/or storage subsystem 704 in a shared enclosure (e.g., a head-mounted display), or such display devices may be peripheral display devices.
When included, input subsystem 708 may comprise or interface with one or more user-input devices, such as a game controller, gesture-input detection device, voice recognizer, inertial measurement unit, keyboard, mouse, or touch screen. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; and electric-field sensing componentry for assessing brain activity.
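A gesture-input detection device of the kind mentioned above might, for example, classify a punch from the forward displacement of a tracked wrist joint over a few frames. The threshold value and the use of a skeleton track are assumptions for illustration, not details taken from the disclosure.

```python
def detect_punch(wrist_track, threshold=0.3):
    """Classify a forward punch from a short track of a wrist joint's
    forward (z) positions in meters; returns True when the wrist has
    moved forward by more than the (assumed) threshold distance."""
    return max(wrist_track) - wrist_track[0] > threshold
```

A detected punch would then be treated as user input at step 520, driving the update of the user-controlled avatar.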
When included, communication subsystem 710 may be configured to communicatively couple computing system 700 with one or more other computing devices. Communication subsystem 710 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or via a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 700 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, the various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.

Claims (10)

1. A computing system (101) for providing a mixed-reality fighting game, the computing system comprising:
a see-through display device (102);
a logic subsystem (702); and
a storage subsystem (704) holding instructions that, when executed by the logic subsystem:
display (506), on the see-through display device, a virtual arena (110), a user-controlled avatar (112), and an opponent avatar (114), such that the virtual arena appears integrated into a physical space (104) when the physical space is viewed through the see-through display device; and
in response to receiving user input, display (522), on the see-through display device, the user-controlled avatar updated based on the user input.
2. The computing system of claim 1, wherein a position of the user-controlled avatar is dynamically updated based on a position of the user providing the user input.
3. The computing system of claim 1, wherein the user input is received via a gesture-input detection device configured to observe gestures of the user providing the user input.
4. The computing system of claim 1, wherein a representation of the virtual arena is displayed on a stationary display of an opponent.
5. The computing system of claim 1, wherein a representation of the virtual arena is displayed on a see-through display device of an opponent.
6. The computing system of claim 1, further comprising a depth camera configured to image the physical space.
7. The computing system of claim 6, wherein, when a physical space having one or more objects is viewed through the see-through display device, the see-through display device displays a virtual arena having one or more interactive elements integrated with the one or more objects in the physical space.
8. A method (500) of providing a mixed-reality fighting game, the method comprising:
displaying (506), via a see-through display device (102), a virtual arena (110), the virtual arena being scaled (512) as a first scaled function of a physical space (104); and
displaying (514), via the see-through display device, one or more avatars (112) within the virtual arena, the one or more avatars being scaled (516) as a second scaled function of the physical space.
9. The method of claim 8, wherein the first scaled function is different from the second scaled function.
10. The method of claim 8, wherein the first scaled function is the same as the second scaled function.
CN201310757252.4A 2013-12-18 2013-12-18 Mixed reality arena Pending CN103785169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310757252.4A CN103785169A (en) 2013-12-18 2013-12-18 Mixed reality arena

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310757252.4A CN103785169A (en) 2013-12-18 2013-12-18 Mixed reality arena

Publications (1)

Publication Number Publication Date
CN103785169A true CN103785169A (en) 2014-05-14

Family

ID=50661487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310757252.4A Pending CN103785169A (en) 2013-12-18 2013-12-18 Mixed reality arena

Country Status (1)

Country Link
CN (1) CN103785169A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6182010B1 (en) * 1999-01-28 2001-01-30 International Business Machines Corporation Method and apparatus for displaying real-time visual information on an automobile pervasive computing client
EP1612707A2 (en) * 2004-06-30 2006-01-04 Navteq North America, LLC Method of collecting information for a geographic database for use with a navigation system
JP2008516642A (en) * 2004-08-31 2008-05-22 インフォメーション イン プレース インク Object-oriented mixed reality and video game authoring tool system and method
US20090325607A1 (en) * 2008-05-28 2009-12-31 Conway David P Motion-controlled views on mobile computing devices
CN101620790A (en) * 2009-08-03 2010-01-06 百世教育科技股份有限公司 Assisted teaching system
JP2011521318A (en) * 2008-04-16 2011-07-21 バーチュアル プロテインズ ベー.フェー. Interactive virtual reality image generation system
US8170795B2 (en) * 2005-01-18 2012-05-01 Harman Becker Automotive Systems Gmbh Navigation system with animated intersection view
CN102508991A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Method of constructing virtual experiment teaching scene based on image material
EP2477894A1 (en) * 2009-09-19 2012-07-25 Quan Xiao Method and apparatus of variable g force experience and create immersive vr sensations
CN103033936A (en) * 2011-08-30 2013-04-10 微软公司 Head mounted display with iris scan profiling
CN103080928A (en) * 2010-05-28 2013-05-01 诺基亚公司 Method and apparatus for providing a localized virtual reality environment

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10228811B2 (en) 2014-08-19 2019-03-12 Sony Interactive Entertainment Inc. Systems and methods for providing feedback to a user while interacting with content
CN106664401A (en) * 2014-08-19 2017-05-10 索尼互动娱乐股份有限公司 Systems and methods for providing feedback to a user while interacting with content
CN106664401B (en) * 2014-08-19 2019-07-26 索尼互动娱乐股份有限公司 System and method for providing a user feedback when interacting with content
CN108292168B (en) * 2015-11-26 2021-04-06 日商可乐普拉股份有限公司 Method and medium for indicating motion of object in virtual space
CN108292168A (en) * 2015-11-26 2018-07-17 日商可乐普拉股份有限公司 For the action indicating means and program of object in Virtual Space
CN109069928A (en) * 2016-02-17 2018-12-21 株式会社万代南梦宫娱乐 simulation system and game system
CN109069928B (en) * 2016-02-17 2022-03-01 株式会社万代南梦宫娱乐 Simulation system and game system
CN106457039A (en) * 2016-07-07 2017-02-22 深圳狗尾草智能科技有限公司 Virtual game operation platform and virtual game operation method based on artificial intelligence
WO2018006365A1 (en) * 2016-07-07 2018-01-11 深圳狗尾草智能科技有限公司 Artificial intelligence-based virtual game operation platform and method
CN106528164A (en) * 2016-11-11 2017-03-22 上海远鉴信息科技有限公司 Two-way assistant operation method and system in virtual reality
CN109308115A (en) * 2017-07-27 2019-02-05 宏达国际电子股份有限公司 The mobile method of user and relevant apparatus are shown in virtual reality system
US11054895B2 (en) 2017-07-27 2021-07-06 Htc Corporation Method of display user movement in virtual reality system and related device
CN109308115B (en) * 2017-07-27 2021-07-16 宏达国际电子股份有限公司 Method and related device for displaying user movement in virtual reality system
CN108874126A (en) * 2018-05-30 2018-11-23 北京致臻智造科技有限公司 Exchange method and system based on virtual reality device
CN108874126B (en) * 2018-05-30 2021-08-31 北京致臻智造科技有限公司 Interaction method and system based on virtual reality equipment

Similar Documents

Publication Publication Date Title
US11810244B2 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
US20230302359A1 (en) Reconfiguring reality using a reality overlay device
JP6276882B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
US9933851B2 (en) Systems and methods for interacting with virtual objects using sensory feedback
US20140125698A1 (en) Mixed-reality arena
CN110665230B (en) Virtual role control method, device, equipment and medium in virtual world
CN103785169A (en) Mixed reality arena
JP2015116336A (en) Mixed-reality arena
CN107656615A (en) The world is presented in a large amount of digital remotes simultaneously
JP6392911B2 (en) Information processing method, computer, and program for causing computer to execute information processing method
JP6290467B1 (en) Information processing method, apparatus, and program causing computer to execute information processing method
KR20220012990A (en) Gating Arm Gaze-Driven User Interface Elements for Artificial Reality Systems
CN103501869A (en) Manual and camera-based game control
JP6321263B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
JP6201028B1 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
KR20220018561A (en) Artificial Reality Systems with Personal Assistant Element for Gating User Interface Elements
KR20220018562A (en) Gating Edge-Identified Gesture-Driven User Interface Elements for Artificial Reality Systems
EP2886172A1 (en) Mixed-reality arena
JP2019032844A (en) Information processing method, device, and program for causing computer to execute the method
JP6554139B2 (en) Information processing method, apparatus, and program for causing computer to execute information processing method
JP2018192238A (en) Information processing method, apparatus, and program for implementing that information processing method in computer
JP2018092592A (en) Information processing method, apparatus, and program for implementing that information processing method on computer
JP6856572B2 (en) An information processing method, a device, and a program for causing a computer to execute the information processing method.
KR20150071611A (en) Mixed-reality arena
JP6722244B2 (en) Program, information processing method, and information processing apparatus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: MICROSOFT TECHNOLOGY LICENSING LLC

Free format text: FORMER OWNER: MICROSOFT CORP.

Effective date: 20150728

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150728

Address after: Washington State

Applicant after: Microsoft Technology Licensing, LLC

Address before: Washington State

Applicant before: Microsoft Corp.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140514

WD01 Invention patent application deemed withdrawn after publication