WO2018200315A1 - Method and apparatus for projecting collision-deterrents in virtual reality viewing environments - Google Patents

Method and apparatus for projecting collision-deterrents in virtual reality viewing environments

Info

Publication number
WO2018200315A1
WO2018200315A1 (PCT/US2018/028451)
Authority
WO
WIPO (PCT)
Prior art keywords
user
probable
virtual
collision frame
hmd
Prior art date
Application number
PCT/US2018/028451
Other languages
French (fr)
Inventor
Sungjae Hwang
Junghyeon GIM
Original Assignee
Pcms Holdings, Inc.
Priority date
Filing date
Publication date
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2018200315A1 publication Critical patent/WO2018200315A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • FIG. 2 depicts a flow chart of a method for projecting collision-deterrents in virtual reality viewing environments for some embodiments.
  • FIG. 3 depicts virtual objects around an HMD user and a VR scene for some embodiments.
  • FIG. 4 depicts an HMD user detecting an external user for some embodiments.
  • FIG. 5 depicts a geometric explanation for calculating distances for some embodiments.
  • FIG. 6 depicts an HMD user surrounded by virtual objects detecting an external user for some embodiments.
  • FIG. 9 depicts a divided virtual frame having multiple virtual objects with interactivity scores for some embodiments.
  • FIG. 10 depicts a selection of a probable-collision frame for some embodiments.
  • FIG. 11 depicts a table of example characteristic display parameters, corresponding weight scores, and calculated interactivities for four virtual objects for some embodiments.
  • FIG. 12 depicts a top-down view having an overlap area that is used to determine a geographic coordinate position (GCP) weight score for some embodiments.
  • FIG. 13 depicts a chart that specifies attributes of each virtual object in a VR game for some embodiments.
  • FIG. 15 depicts an example scenario with an external user at a second distance for some embodiments.
  • FIG. 16 depicts a selection of a surface for projecting content for some embodiments.
  • FIG. 18 depicts an HMD projecting virtual contents based on a physical motion area ratio for some embodiments.
  • FIG. 19 depicts a comparison of the first distance and the second distance if the physical motion area ratio has a maximum value for some embodiments.
  • FIG. 20 depicts a virtual object that is rendered to be projected without distortion for some embodiments.
  • FIG. 21 depicts a virtual object that is rendered to be projected without distortion at multiple viewing angles for some embodiments.
  • FIG. 22 depicts an HMD projecting a heat map indicating a collision area for some embodiments.
  • FIG. 23 depicts a projection of a probable-collision frame that varies as an external user changes position for some embodiments.
  • FIG. 24 depicts a flow chart of a method for projecting a rendered probable-collision frame for some embodiments.
  • FIG. 25 depicts an HMD projecting a plurality of probable-collision frames based on respective physical motion area ratios for some embodiments.
  • FIG. 30 depicts an HMD user selectively projecting virtual objects for some embodiments.
  • FIG. 31 depicts a multi-user VR projection system having a high-performance projector for some embodiments.
  • FIG. 32 depicts a projection that indicates movement-likelihood information to an external user for some embodiments.
  • FIG. 33 depicts an HMD user reacting to a moving virtual object and a heat-map projection for some embodiments.
  • FIG. 35 depicts an external user with high level permissions viewing the VR content for some embodiments.
  • FIG. 36 is a flowchart for some embodiments for displaying an indication of a probable-collision frame of a surface proximate to an HMD wearer and adaptively changing the indication in response to VR content displayed to the HMD wearer.
  • FIG. 37 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as an HMD, an external projection provider, or as projection unit for some embodiments.
  • FIG. 38 depicts an exemplary network entity that may be employed as server for some embodiments.
  • FIG. 39 is a system diagram illustrating an example communications system for some embodiments.
  • FIG. 40 is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 39 for some embodiments.
  • FIG. 41 is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 39 for some embodiments.
  • the associated display devices may be bulky and tethered to large processing units and may prevent a user from being aware of various hazards and events in the real-world environment.
  • as these devices become untethered and users become more mobile, while still being isolated from the real-world physical reality around them, these systems become more dangerous to use independently.
  • current VR and immersive AR content consumption often relies on a human supervisor being present for safety purposes.
  • this is a prohibitive requirement as the human supervisor may interfere with the VR user's experience by getting in the way.
  • VR systems make it possible to experience a virtual world as if it were the real world. While wearing a VR-capable HMD, the user is shielded from visual contact with the outside environment. Barring ancillary functionality, an external user (who is not wearing an HMD) is unaware of what content the HMD wearer is experiencing. However, contents of the HMD may be mirrored on an external monitor or projector, thereby allowing external users to view the VR content.
  • the HMD wearer uses their hands and moves around their present real-world environment, and these movements may cause collisions with nearby people and objects.
  • one device displays a partial image of the external environment within the VR environment.
  • a wearer of an HMD may desire to reduce collisions with an external viewer (who may not know what is happening in the VR environment). Although some systems modify HMD content and/or adjust the transparency to prevent the wearer from colliding with the external viewer, some methods may interfere with the overall immersiveness of the VR experience.
  • the present disclosure teaches a method and apparatus according to some embodiments for projecting contents of a virtual world onto a surface in the real world so as to aid an external user in avoiding a collision with a head-mounted display (HMD) user.
  • the execution of the method is based on an HMD user's virtual context.
  • the external user may understand what is happening in the virtual world in an effective way.
  • the present method may reduce a danger of collision between the HMD user and the external user.
  • the external viewer, not the HMD wearer, may take evasive action to avoid collisions. As such, the likelihood that the HMD wearer will be disturbed during the VR experience may be decreased.
  • color mapping, which may process motion information between moving virtual objects and the HMD user, may be used to intuitively visualize degrees of probable motion.
  • the HMD may render and project the virtual objects themselves.
  • Exemplary systems and processes disclosed herein in accordance with some embodiments determine whether a virtual reality (VR) user is within a threshold distance of an external user and responsively project a warning image between the two users to help mitigate a potential collision, thereby helping the VR user avoid the hazard without breaking the immersiveness of the VR experience. Calculations are made based on relative trajectories, and in some cases expected trajectories, to determine the location of potential collisions. The context of the VR user's present VR scene is considered when determining expected trajectories. Information about virtual objects within the VR scene and user profile information are used as input for determining how to render and project the warning image.
  • the warning image may be a heat map indicating areas the VR user is likely to move to, a properly scaled rendering of a virtual object, a stop sign, and the like.
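  • as a non-authoritative illustration of the trajectory-based determination described above, the minimal sketch below estimates where and when the VR user and an external user would come closest, given positions and velocities in floor-plane coordinates; the function names and the 1.0 m warning threshold are assumptions, not values from this disclosure.

```python
# Hypothetical sketch (not from the patent text): estimating where and when the
# HMD user and an external user would come closest, assuming constant-velocity
# trajectories p(t) = p0 + v*t in floor-plane coordinates.
import numpy as np

def closest_approach(p_hmd, v_hmd, p_ext, v_ext):
    """Return (time, distance, midpoint) of closest approach for two
    constant-velocity trajectories."""
    dp = np.asarray(p_ext, float) - np.asarray(p_hmd, float)   # relative position
    dv = np.asarray(v_ext, float) - np.asarray(v_hmd, float)   # relative velocity
    denom = float(np.dot(dv, dv))
    # Minimize |dp + dv*t|^2; clamp to the future (t >= 0).
    t_star = 0.0 if denom == 0.0 else max(0.0, -float(np.dot(dp, dv)) / denom)
    a = np.asarray(p_hmd, float) + np.asarray(v_hmd, float) * t_star
    b = np.asarray(p_ext, float) + np.asarray(v_ext, float) * t_star
    return t_star, float(np.linalg.norm(b - a)), (a + b) / 2.0

# Example: external user walking toward a mostly stationary HMD user.
t, d, where = closest_approach([0, 0], [0.1, 0], [4, 1], [-1.0, -0.2])
if d < 1.0:   # assumed warning threshold in meters
    print(f"project warning image near {where} in about {t:.1f} s")
```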
  • FIG. 1 depicts a high-level component diagram of an apparatus 150 for projecting collision- deterrents in virtual reality viewing environments for some embodiments.
  • FIG. 1 depicts (i) an HMD 152, (ii) a server 154, (iii) a projection unit 156, and (iv) an external projection provider 158, as well as the functional scope of each of these components.
  • An HMD 152 may display VR contents to an HMD user and may measure a distance between an HMD user and an external user using sensors inside an HMD 152.
  • a server 154 may send characteristic display parameters and a user profile to the external projection provider.
  • the server 154 may be a component residing in local memory of the HMD 152 and/or a component existing in cloud storage or a network outside the HMD 152.
  • the server 154 may be embodied as the network entity 3890 of FIG. 38.
  • information about a present VR game and a current user may exist in a server 154.
  • a projection unit 156 may adjust a display device and project rendered VR contents (e.g., a rendered probable-collision frame) received from an external projection provider 158.
  • the projector unit 156 may receive the rendered information and project the rendered information in real space.
  • a projector unit 156 may rotate according to a selected projection direction.
  • the projector unit 156 may be in the HMD 152 or may be an external device installed on (or onto) the HMD 152.
  • a projector unit 156 may be mounted on the HMD 152 as an external device, unless explicitly stated otherwise.
  • An external projection provider 158 may calculate the distance between users and compare this distance against a first predetermined distance (or a threshold distance) as well as a variable second distance.
  • the external projection provider 158 may monitor an external user's position and select a probable-collision frame.
  • the external projection provider 158 may also render the probable-collision frame and select a surface for projecting the rendered probable-collision frame.
  • the external projection provider 158 may send the rendered probable-collision frame to the projection unit.
  • images may be rendered for projecting to nullify (or prevent) a collision between users.
  • the external projection provider 158 may be implemented as a hardware module and/or software module. In some embodiments, the external projection provider is embedded in the HMD 152.
  • the external projection provider may be attached to an exterior of the HMD 152 or the user and may communicate with the HMD 152 via a wired connection or a wireless connection (USB, WiFi, Bluetooth, or LAN).
  • the external projection provider may be located on a cloud network outside of the HMD 152.
  • the external projection provider 158 may be embedded in the HMD 152, unless explicitly stated otherwise.
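  • the sketch below is an illustrative (not authoritative) skeleton of the four components described above for FIG. 1 and the data flow between them; all class and method names are assumptions introduced for illustration.

```python
# Illustrative skeleton of the FIG. 1 components, under the roles described
# above; class and method names here are assumptions, not the disclosed API.
from dataclasses import dataclass, field

@dataclass
class Server:
    """Holds characteristic display parameters and the user profile."""
    display_params: dict = field(default_factory=dict)
    user_profile: dict = field(default_factory=dict)

class ProjectionUnit:
    """Adjusts the display device and projects rendered content."""
    def project(self, rendered_frame: str, direction: str) -> None:
        print(f"projecting toward {direction}: {rendered_frame}")

@dataclass
class ExternalProjectionProvider:
    """Monitors distances, selects and renders the probable-collision frame,
    and hands rendered content to the projection unit."""
    server: Server
    projection_unit: ProjectionUnit

    def on_sensor_data(self, distance_m: float,
                       first_threshold_m: float, second_threshold_m: float) -> None:
        # distance_m would come from the HMD's sensors in this sketch.
        if distance_m < first_threshold_m:
            frame = self.select_probable_collision_frame()
            rendered = f"rendered({frame})"          # stand-in for real rendering
            if distance_m < second_threshold_m:
                self.projection_unit.project(rendered, direction="toward external user")

    def select_probable_collision_frame(self) -> str:
        # Placeholder for the interactivity-based selection described later.
        return "probable-collision frame"

# Example: an external user measured at 2.0 m triggers selection and projection.
provider = ExternalProjectionProvider(Server(), ProjectionUnit())
provider.on_sensor_data(distance_m=2.0, first_threshold_m=4.0, second_threshold_m=3.0)
```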
  • projecting an indication of a probable-collision frame may include, e.g., selecting a surface upon which to project the indication of the probable-collision frame, adjusting a projection device to face the selected surface, rendering the probable-collision frame, and projecting the rendered probable-collision frame on the selected surface between the user and the external user.
  • projecting an indication of a probable-collision frame may include rendering a heat map as the probable-collision frame.
  • the indication of the probable-collision frame may be adaptively changed in response to VR content displayed on an HMD to an HMD user.
  • VR content may be displayed on an HMD, a mixed reality (MR) device, or a VR device.
  • an augmented reality (AR) device may likewise be used.
  • VR content is described herein with respect to some embodiments, AR content or MR content may be used as appropriate in the embodiments or in some other embodiments.
  • an HMD is described herein with respect to some embodiments and often with respect to VR content in particular, the HMD may be replaced with a head-mounted or other type of wearable display device, e.g., an MR device, an AR device, or a VR device, as appropriate.
  • rendering the probable-collision frame may include: mapping a point-of-view position of the external user to a virtual space, orienting the probable-collision frame to be facing the external user, and distorting a horizontal and vertical ratio of the probable-collision frame based on the point-of-view position of the external user, wherein the VR content is displayed to the user in a virtual space.
  • rendering the probable-collision frame also may include rendering one of the one or more virtual objects as the probable-collision frame.
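  • as a hedged sketch of the rendering step described above (mapping the external user's point-of-view, orienting the frame toward that user, and distorting its horizontal/vertical ratio), the code below applies a simple anamorphic approximation; the geometry model and names are assumptions rather than the disclosed method.

```python
# Minimal sketch, assuming a flat rectangular frame and a pinhole-style
# observer; the distortion model is an illustrative approximation.
import math

def render_for_viewpoint(frame_w, frame_h, ext_xyz, surface_normal=(0.0, 0.0, 1.0)):
    """Scale the frame's horizontal/vertical ratios so that, when projected on
    a surface with the given normal, it appears roughly undistorted from the
    external user's point of view at ext_xyz."""
    x, y, z = ext_xyz
    dist = math.sqrt(x * x + y * y + z * z) or 1.0
    view_dir = (x / dist, y / dist, z / dist)
    # Cosine between viewing direction and surface normal: the more oblique the
    # view, the more the frame is stretched along the viewing direction.
    cos_theta = abs(sum(v * n for v, n in zip(view_dir, surface_normal))) or 1e-3
    yaw = math.atan2(y, x)                 # orient the frame to face the viewer
    return {"width": frame_w, "height": frame_h / cos_theta, "yaw_rad": yaw}

print(render_for_viewpoint(1.0, 1.0, ext_xyz=(2.0, 1.0, 1.6)))
```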
  • FIG. 2 depicts a flow chart 200 of an example method for projecting collision-deterrents in virtual reality viewing environments, in accordance with some embodiments.
  • the example process may be partitioned into three high-level sub-processes (step one, step two, and step three).
  • if the calculated distance is less than the first predetermined distance, the process continues to step two. If the distance is greater than or equal to the first predetermined distance, the process may return to calculate 208 the distance between the HMD wearer and the external user.
  • step two generally selects a probable-collision frame and renders the probable-collision frame.
  • Step two includes receiving 212 pixel data from a virtual pixel storage via a network, e.g., using a network transmission protocol.
  • Step two also may include determining 214 (e.g., calculating, retrieving from local or remote storage, receiving, etc.) an interactivity of a virtual object by referring to the characteristic display parameters of the virtual object.
  • the virtual object's interactivity may be calculated based on pixel data of the virtual object (or for some embodiments, may be indicated by an external service).
  • Step two also may include determining 216 a probable-collision frame based on the interactivity.
  • Step two also may include comparing 226 the distance between the HMD user and the external user with a second distance (or second distance threshold). If the distance is less than a second distance, the process may continue to step three. If the distance is not less than the second distance (threshold), the process may return to compare 210 the distance with a first predetermined distance.
  • step three generally projects VR contents (e.g., rendered probable- collision frames) onto the real-world environment.
  • Step three may include adjusting 228 the projection device to face the selected medium (taking into account the external user's location and velocity) and projecting 230 the rendered probable-collision frame.
  • the projector unit may be adjusted to project onto a previously selected medium (or surface).
  • the projector unit may include a motor driver to adjust the direction of projection. After the projector is oriented, the rendered probable-collision frame may be projected toward the selected medium (or selected surface).
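  • a minimal sketch of the three-step flow of FIG. 2 appears below, assuming that distance measurement, interactivity scoring, rendering, and projection are provided by helper callables; the helper names are illustrative only.

```python
# A compact, non-authoritative sketch of the FIG. 2 flow; the helper callables
# are assumed to exist elsewhere and their names are illustrative.
def collision_deterrent_step(measure_distance, first_distance, second_distance,
                             get_virtual_objects, interactivity_of, render, project):
    """One pass of the flow; returns True if a frame was projected."""
    # Step one: measure and gate on the first predetermined distance.
    if measure_distance() >= first_distance:
        return False
    # Step two: score virtual objects/frames and select the probable-collision frame.
    frame = max(get_virtual_objects(), key=interactivity_of)   # highest interactivity wins
    rendered = render(frame)
    # Gate on the (variable) second distance before projecting.
    if measure_distance() >= second_distance:
        return False
    # Step three: orient the projector and project onto the real-world surface.
    project(rendered)
    return True

# Example with stub helpers:
print(collision_deterrent_step(lambda: 2.0, 4.0, 3.0,
                               lambda: ["ball", "rare_item"],
                               {"ball": 0.3, "rare_item": 0.8}.get,
                               lambda f: f"rendered({f})",
                               print))
```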
  • An external user who does not wear the HMD may experience the contents of the virtual space. Because the rendered VR content may be modified to suit the context, the privacy of the HMD wearer may be protected.
  • since a visual indication of the predicted movements of the HMD wearer is provided, the external user may avoid collisions with the HMD wearer. Also, since the collision-avoidance contents may be viewed from the viewpoint of the external user rather than the viewpoint of the wearer of the HMD, the satisfaction provided by the immersiveness of the virtual space may be enhanced.
  • the example scenarios disclosed herein according to some embodiments are based on an AR game applied to a VR environment. For some embodiments, a visual indication may be projected on a floor surface within proximity of the user.
  • FIG. 3 depicts virtual objects 302, 304 around an HMD user and a VR scene 300 for some embodiments.
  • the VR content and virtual objects 302, 304 from the AR game are mapped and displayed in the 360-degree space on the HMD.
  • the VR content 306 may be a video stream generated by a 360-degree camera or an interactive game.
  • FIG. 4 depicts an environment 400 of an HMD user (or wearer) 402 detecting an external user 404 for some embodiments.
  • the HMD (or wearable device) 406 recognizes and distinguishes external users using a sensor (e.g., a depth camera, a proximity sensor, or a fish-eye lens).
  • the sensor may be embedded in the HMD or may exist in an external device.
  • detection may use facial recognition 408 (as shown in FIG. 4), may use edge detection, and/or may communicate with a device worn by the external user 404.
  • an HMD sensor array may scan the surrounding environment. If the sensors detect an external user in the vicinity, the face of the external user may be analyzed. The HMD system may check whether the face of the external viewer is registered in the server or a database.
  • the HMD receives a connection from the wearable device worn by the external viewer.
  • the HMD device confirms from an authentication code of the wearable device whether the device is registered.
  • FIG. 5 depicts a geometric explanation 500 for calculating distances for some embodiments. Shown is a diagram for calculating a horizontal distance from the HMD 502 to an external object 504.
  • the HMD 502 may measure the distance between a point-of-view of the external viewer and the HMD.
  • (X, Y, Z) coordinates 506 of the external user's point-of-view may be obtained using a trigonometric function. Moving the origin by the height of the HMD (or the HMD user's height) allows the external user's point-of-view to be expressed in (X, Y, Z) coordinates 506 if the floor is defined as the origin.
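  • a hedged sketch of the FIG. 5 geometry follows: converting a measured range and viewing angles into (X, Y, Z) coordinates of the external user's point-of-view, then shifting the origin by the HMD height so the floor is the origin; the angle conventions are assumptions.

```python
# Hedged sketch of the FIG. 5 trigonometry; angle conventions and argument
# names are assumptions introduced for illustration.
import math

def external_pov_xyz(range_m, azimuth_rad, elevation_rad, hmd_height_m):
    """range_m: straight-line distance from the HMD to the external user's eyes;
    azimuth/elevation: direction of that user relative to the HMD's forward axis."""
    horizontal = range_m * math.cos(elevation_rad)      # distance along the floor plane
    x = horizontal * math.cos(azimuth_rad)
    y = horizontal * math.sin(azimuth_rad)
    z_rel = range_m * math.sin(elevation_rad)           # height relative to the HMD
    # Move the origin down by the HMD height so the floor is z = 0.
    return x, y, hmd_height_m + z_rel

print(external_pov_xyz(range_m=3.0, azimuth_rad=math.radians(20),
                       elevation_rad=math.radians(-10), hmd_height_m=1.6))
```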
  • FIG. 6 depicts an environment 600 of an HMD user 602 surrounded by virtual objects 604, 606 detecting an external user 608 for some embodiments.
  • the HMD measures the distance 610 to the external user of FIG. 6.
  • the distance 610 between the HMD wearer 602 and the external user 608 may be continuously measured by the HMD's optical distance measuring sensor for some embodiments.
  • the optical distance measuring sensor such as an infrared sensor, detects a direction in which the external user is located and measures a distance 610 to the external user's body from the HMD wearer.
  • FIG. 7 depicts an environment 700 of an HMD user 702 surrounded by virtual objects 704, 706 measuring an external user 708 at a first predetermined distance 710 for some embodiments. If the external user is at or within the first predetermined distance 710, a probable-collision frame is selected by the external projection provider. The projection provider is configured to render the probable-collision frame into the external virtual projection content within the first predetermined distance 710. The rendered external virtual projection content is designed to nullify a probable collision between the HMD wearer (or user) 702 and the external user 708. Therefore, the first predetermined distance 710 is a safety clearance distance for preparing (e.g., selecting and rendering) content that may be sent to the projector if a physical collision is determined to be imminent.
  • the first predetermined distance may be set to about 4 m in some embodiments, although other distances may be used.
  • the first predetermined distance varies depending on the type of VR content.
  • a karate action game which mainly uses kicking as user input may have a larger first predetermined distance than a VR trivia game in which the user is not expected to move much.
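  • the snippet below illustrates one way the first predetermined distance could vary by content type, per the karate-versus-trivia example above; apart from the 4 m default mentioned earlier, the specific values are assumptions.

```python
# Illustrative only: the per-content values are assumptions built around the
# example above (an action game warranting a larger clearance than trivia).
FIRST_PREDETERMINED_DISTANCE_M = {
    "karate_action": 5.0,   # large user motions (kicks) expected
    "trivia": 2.5,          # user not expected to move much
}

def first_distance_for(content_type, default_m=4.0):
    return FIRST_PREDETERMINED_DISTANCE_M.get(content_type, default_m)

print(first_distance_for("karate_action"), first_distance_for("puzzle"))
```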
  • FIG. 8 depicts a virtual frame 800 having a virtual object 804 at two different time instances (t1 and t2) for some embodiments.
  • a 360-degree virtual shell surrounding the HMD user 802 in a virtual world may be divided into a plurality of virtual frames.
  • the indicated virtual frame in FIG. 8 has information about pixel motion and an interactivity property 806 of the virtual object of the virtual frame at two different times.
  • FIG. 9 depicts a divided virtual frame 900 having multiple virtual objects 904, 906 with interactivity scores 908, 910 for some embodiments.
  • if a virtual object corresponding to a virtual frame is an item to be acquired in a game, the value of the interactivity is increased.
  • if the virtual object is to be avoided (e.g., a bomb), the HMD wearer 902 is expected to move in the opposite direction, so the interactivity is lowered.
  • a higher value of interactivity may imply higher desirability for interactivity, while a lower value may imply a lower desirability.
  • for interactivity values (or other values, e.g., characteristic display parameters), other value schemes may be used; e.g., the meanings of the values and what they imply may be reversed, or a different ordering scheme may be used than in the examples herein.
  • FIG. 10 depicts a selection of a probable-collision frame for some embodiments. If an external user arrives at the first predetermined distance 1006, a plurality of virtual objects present in the VR scene 1000 and their corresponding attribute information (e.g., characteristic display parameters) are received from the server. Weight scores are calculated based on characteristic display parameters and are in turn used to calculate respective interactivities of the virtual frames. The virtual frame having the highest interactivity is selected as the probable-collision frame. It is most likely that the external user 1004 and the HMD wearer 1002 would collide at that point.
  • FIG. 11 depicts a table 1100 of example characteristic display parameters 1104, corresponding weight scores, and calculated interactivities for four virtual objects 1102 for some embodiments.
  • FIG. 11 is an example table 1100 specifying parameter information (e.g., characteristic display parameters 1104) and weight scores (e.g., pixel size 1108, interest of virtual object 1110, degree of pixel change 1112, display time 1114, degree of interaction induction 1116, and geographic coordinate position 1118) of virtual objects appearing at a specific point in the virtual space.
  • the interactivity 1106 of each virtual object is calculated.
  • Pixel size (PS) 1108 is the size of the virtual object displayed within the virtual environment, measured in pixels. The larger the pixel size, the larger the weight score becomes, because a larger virtual object results in a larger range of motion when the user interacts with it.
  • Interest of virtual object (IVO) 1110 refers to the degree of interest the HMD user is likely to have in the displayed virtual object. For example, if a particular virtual object is a rare item, the user's rapid movement towards the virtual object is expected. Therefore, in case of high interest of virtual object, the weight score is increased. For example, an item (monster) that has a low frequency of appearance in a game and may earn massive experience points (XP) at the time of acquisition will be of great interest to the user. Therefore, the user's motion toward the item to obtain the item is expected.
  • Degree of pixel change (DPC) 1112 is a change in pixel position of a virtual object within a predetermined time in the virtual space. That is, it indicates the speed at which a virtual object moves in a 360-degree virtual environment.
  • Various motions of the HMD user are expected due to pixel change in the virtual space. For example, if a virtual object is zooming past at a high speed, the HMD user is likely to jump away out of surprise or to take a large lunge at high speed in an attempt to grab the virtual object. Therefore, the larger the degree of pixel change value, the higher the weight score will be.
  • Display time (DT) 1114 is the amount of time from when a virtual object appears in the virtual space until it disappears. Logically, movement for accessing or interacting with the virtual object may only occur during this limited display time. The shorter the limited display time, the greater the possibility of the user immediately approaching or moving toward the virtual object. For at least this reason, short display times receive higher weight scores. In the example table, the weight is given on the assumption that the shorter the time, the greater the user's motivation. However, if the limited time is short and the user abandons pursuit, a low weight may be given.
  • Degree of interaction induction (DII) 1116 represents a degree of inducing a user's motion or interaction with respect to a virtual object.
  • for predetermined interaction methods (e.g., catching, hitting, throwing), the degree of interaction induction varies depending on the predetermined interaction method. The larger the degree of interaction induction, the greater the probability of collision with an external user. If a virtual object that surprises the user appears (e.g., a ghost character), the user's movement may occur mainly in the direction opposite the virtual object.
  • the geographic coordinate position (GCP) (or location) 1118 is the polar coordinate at which the virtual object is located in the 360-degree virtual space. If a virtual object is closer to the external user, the weight of the geographic coordinate position is increased.
  • calculating the level of interactivity for a virtual object may comprise calculating an average of one or more characteristic display parameters for the virtual object.
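  • the sketch below illustrates the averaging described above: each characteristic display parameter is assumed to have already been mapped to a numeric weight score (as in the FIG. 11 table), the interactivity is their average, and the frame containing the highest-interactivity object is selected; the example values are made up.

```python
# Sketch of the interactivity calculation, assuming each characteristic display
# parameter has already been mapped to a numeric weight score; example values
# below are invented for illustration.
def interactivity(weight_scores):
    """Average of the available characteristic-display-parameter weight scores
    (PS, IVO, DPC, DT, DII, GCP)."""
    return sum(weight_scores.values()) / len(weight_scores)

virtual_objects = {
    "rare_item": {"PS": 0.6, "IVO": 0.9, "DPC": 0.7, "DT": 0.8, "DII": 0.9, "GCP": 0.5},
    "bomb":      {"PS": 0.4, "IVO": 0.1, "DPC": 0.6, "DT": 0.3, "DII": 0.2, "GCP": 0.7},
}
scores = {name: interactivity(w) for name, w in virtual_objects.items()}
# The virtual frame containing the highest-interactivity object is selected
# as the probable-collision frame.
probable_collision_object = max(scores, key=scores.get)
print(scores, probable_collision_object)
```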
  • FIGs. 20 and 21 illustrate embodiments in which an image is distorted during rendering and projected so that an external user may view the image without distortion (e.g., if projected on a surface that may be oblique with respect to the projector and/or the external viewer).
  • the distortion of the virtual object may be performed at the time of rendering but for other embodiments, distortion of the virtual object may be performed at the projector unit.
  • a separate interactivity value may be determined (e.g., calculated or estimated) for each virtual object. For some embodiments, determining (e.g., calculating) a level of interactivity for a virtual object within a virtual space may include determining a likelihood of the user interacting with the respective virtual object within the virtual space. For some embodiments, determining a level of interactivity for the virtual object within a virtual space may include calculating an interactivity value. In some embodiments, the interactivity value may include an estimated likelihood of a user moving toward the virtual object within the first virtual space. For some embodiments, the level of interactivity may be a proxy for a level of attractiveness of the virtual object to the user.
  • three concentric regions around the HMD user may be projected, which may be labeled as safe, warning, and danger to indicate a probability of collision with one or more external users.
  • in FIGs. 26A and 26B, two external users 2604, 2606 are shown, though for some embodiments, more than two external users may exist.
  • a vector radiating away from the HMD user in the x-y plane with a length equal to the second distance value may be divided into three regions.
  • the outer third of the second distance vector may designate a safe region.
  • the middle third of the second distance vector may designate a warning region.
  • the inner third of the second distance vector may designate a danger region.
  • the safe, warning, and danger regions may be labeled and colored with yellow, orange, and red projections.
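  • a minimal sketch of the safe/warning/danger classification described above follows, assuming the second-distance vector is split into equal thirds as described; the example distances are illustrative.

```python
# Sketch of the three concentric regions: outer third safe, middle third
# warning, inner third danger; equal thirds and example distances are assumed.
def region_for(distance_to_hmd_user, second_distance):
    if distance_to_hmd_user >= second_distance:
        return None                       # beyond the monitored range
    third = second_distance / 3.0
    if distance_to_hmd_user > 2 * third:
        return "safe"                     # outer third (e.g., projected in yellow)
    if distance_to_hmd_user > third:
        return "warning"                  # middle third (e.g., orange)
    return "danger"                       # inner third (e.g., red)

print([region_for(d, second_distance=3.0) for d in (2.8, 1.5, 0.5, 3.5)])
```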
  • FIG. 27 depicts an example time sequence diagram 2700 of an example method and apparatus for projecting collision-deterrents in virtual reality viewing environments for some embodiments.
  • the HMD 2702 displays 2712 VR contents.
  • the HMD 2702 also receives 2714 external user information from the external environment 2710.
  • the HMD 2702 detects 2716 the external user and sends 2718 sensor data for the external user to the external projection provider 2704.
  • the HMD 2702 sends 2718 sensor data to the external projection provider 2704 so that the external projection provider 2704 may calculate 2720 the distance between the HMD 2702 user and the external user. Following the distance calculation, the external projection provider 2704 compares 2720 the calculated distance with the first predetermined distance.
  • the external projection provider 2704 receives sensor information from the HMD 2702 and calculates the probability of various virtual frames colliding with the external user.
  • the probable-collision frame is extracted from among a plurality of virtual frames existing in the virtual space between the external user and the HMD wearer.
  • the HMD 2702 sends 2740 more sensor data to the external projection provider 2704, and the external projection provider 2704 uses this sensor data to calculate 2742 a distance between an HMD wearer and an external user.
  • the external projection provider 2704 compares 2744 this updated distance with the second distance. If the updated distance is less than the second distance, the external projection provider 2704 sends 2746 projection information (e.g., projection direction and rendered content) to the projection unit 2706.
  • the projection unit 2706 adjusts 2748 the orientation and settings of the projection device and projects 2750 the rendered probable-collision frame.
  • Projecting information about the movement of the HMD wearer interacting with a virtual object without projecting the virtual object itself may be performed in some embodiments. For example, based on the interaction attributes between the object and the HMD wearer and the motion prediction statistics, the HMD system may display information about the probability of motion of the HMD wearer, for example as a heatmap.
  • FIG. 30 depicts an environment 3000 for an HMD user 3002 selectively projecting virtual objects for some embodiments.
  • certain objects 3004 may be excluded from projection.
  • the HMD wearer 3002 wants to see only himself (or herself).
  • a web page displaying an ID and password or other private data should be displayed only to the HMD wearer 3002.
  • FIG. 31 depicts a multi-user VR projection system 3100 having a high-performance projector (or external projector) 3102 for some embodiments.
  • projection for multiple HMD wearers 3106, which may be viewed by external users 3112, is facilitated using a high-performance projector 3102.
  • Depicted in FIG. 31 is a large space 3104 where several HMD wearers (or users) 3106 may experience virtual reality games at the same time.
  • the high-performance projector 3102 may be installed near the ceiling to cover the whole room for some embodiments.
  • FIG. 32 depicts an environment 3200 for a projection that indicates to an external user 3202 movement-likelihood information 3206, 3208, 3210 of an HMD wearer 3204 for some embodiments.
  • FIG. 32 depicts an HMD wearer 3204 playing a VR game in a room having an external user 3202.
  • determinations are made for directions emanating from an HMD wearer 3204 regarding the likelihood that the HMD wearer will move in each direction.
  • determinations are made regarding the likelihood of the HMD wearer moving left, right, or forward.
  • a portion of the floor associated with each directional movement is shown in FIG. 32.
  • the projection on the floor may be colored, with red (shown with a first pattern of parallel lines in FIG. 32) used for an area associated with a direction an HMD wearer 3204 is likely to move and yellow (shown with a second pattern of parallel lines in FIG. 32) used for an area associated with a direction an HMD wearer 3204 is less likely to move.
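  • a small illustrative mapping from per-direction movement likelihoods to the red/yellow floor coloring described above is sketched below; the 0.5 cutoff is an assumption.

```python
# Illustrative mapping from per-direction movement likelihoods to floor colors,
# following the red/yellow convention above; the 0.5 cutoff is an assumption.
def floor_colors(direction_likelihoods, cutoff=0.5):
    return {direction: ("red" if p >= cutoff else "yellow")
            for direction, p in direction_likelihoods.items()}

print(floor_colors({"left": 0.7, "right": 0.2, "forward": 0.6}))
```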
  • FIG. 33 depicts an environment 3300 for an HMD wearer (or user) 3302 reacting to a moving virtual object 3304 while projecting a heat-map 3306 for some embodiments.
  • FIG. 33 depicts the HMD user 3204 of FIG. 32 reacting to a moving virtual object 3304.
  • the left-most circle of FIG. 33 shows a moving virtual object 3304 coming towards an HMD wearer 3302.
  • the middle circle shows the HMD wearer 3302 moving to his or her left (to the right as shown to the reader) as the moving virtual object 3304 comes toward the HMD wearer 3302.
  • the third circle shows the real-world relationship between an HMD wearer 3302 and an external user 3308 when the HMD wearer 3302 sees a moving virtual object 3304 moving past within the virtual world.
  • FIG. 34 depicts a projection of information about virtual walls in a VR scene 3400 for some embodiments.
  • FIG. 34 depicts a VR view (virtual world) 3402, an external user's third person view (real world - external viewer's view) 3404, and a bird's-eye-view (real world - top view) 3406 of a VR scene 3400 displaying a confined corridor 3408.
  • the HMD user 3410 will not move across the virtual wall object bounding the corridor 3408.
  • Path information for the corridor 3408 may be projected on the floor surface of the real world.
  • Information about the area where the HMD user 3410 will move may be highlighted in red (which is shown with a first pattern of parallel lines in FIG. 34), and other areas (e.g., all regions outside of the corridor walls 3414) may be marked in yellow (which is shown with a second pattern of parallel lines in FIG. 34).
  • a probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity and location of the virtual object may be determined 3606.
  • An indication of the probable-collision frame may be projected 3608 on a surface proximate to the user.
  • a method may include: measuring a distance between a head-mounted display (HMD) user that is viewing a virtual reality (VR) scene and an external user; responsive to the external user being measured within a first predetermined distance, receiving (i) a set of virtual objects present in the VR scene and (ii) respective attribute information associated with each virtual object in the set; calculating a respective interactivity score for each virtual object in the set based at least in part on the corresponding attribute information; selecting a probable-collision frame of the VR scene based on the respective interactivity scores; determining a physical motion area ratio of the selected probable-collision frame based on the interactivity score of the virtual object located within the selected probable-collision frame and an HMD user profile score; calculating a second distance by summing an adjustable distance that is a function of the physical motion area ratio and a critical distance; rendering the probable-collision frame based on the determined physical motion area ratio; and responsive to the external user being measured within the second distance, projecting the rendered probable-collision frame.
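  • a hedged sketch of the second-distance calculation in the method above follows: a fixed critical distance plus an adjustable distance that is a function of the physical motion area ratio; the linear scaling and the constants are assumptions, not values from this disclosure.

```python
# Hedged sketch: second distance = critical distance + adjustable distance,
# where the adjustable distance is assumed (for illustration) to scale
# linearly with the physical motion area ratio.
def second_distance(physical_motion_area_ratio, critical_distance_m=1.0,
                    max_adjustable_m=3.0):
    """physical_motion_area_ratio is assumed to lie in [0, 1]."""
    adjustable = max_adjustable_m * physical_motion_area_ratio
    return critical_distance_m + adjustable

# FIG. 19 compares the first and second distances when the ratio is at its
# maximum; the constants here are chosen arbitrarily for the sketch.
print(second_distance(0.0), second_distance(1.0))
```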
  • measuring the distance between the HMD user and the external user may include using depth sensors and/or image sensors embedded in the HMD.
  • measuring the distance between the HMD user and the external user may include using depth sensors and/or image sensors external to the HMD.
  • respective attribute information may include a set of characteristic display parameters, and the set may include a pixel size, an interest of the virtual object, a degree of pixel change, a degree of interaction induction, and a geographic coordinate position.
  • the HMD user profile score may be received from the server.
  • the HMD user profile score may be based on physical characteristics of the HMD user.
  • the physical characteristics may include gender, height, weight, leg length, and a motion habit history.
  • rendering the probable-collision frame based on the determined physical motion area ratio may include selecting a size to render the probable-collision frame based on the determined physical motion area ratio.
  • rendering the probable-collision frame may include: mapping a point-of-view position of the external user to the virtual space; orienting the probable-collision frame to be facing the external user; and distorting the horizontal and vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
  • rendering the probable-collision frame may include rendering the virtual object as the probable-collision frame.
  • rendering the probable-collision frame may include rendering a heat map as the probable-collision frame.
  • calculating the interactivity score may include referencing a game rule book on an as-needed basis.
  • an apparatus may include: a head-mounted display (HMD) configured to display virtual reality (VR) content and send sensor data to an external projection provider; a server configured to send characteristic display parameters and user profile information to the external projection provider; the external projection provider configured to monitor a distance between the HMD and an external user, select a probable-collision frame when the distance is below a threshold value, and render the probable-collision frame; and a projection unit configured to adjust a display device and project the rendered probable-collision frame.
  • a method may include: on a surface proximate to a user of a head-mounted display (HMD), projecting a display indicating a probable region of motion of the user; and adaptively changing the displayed probable region of motion in response to content presented on the head-mounted display (HMD).
  • the projection may be performed by the HMD.
  • the projection may be performed by a projector separate from the HMD.
  • the projection may be performed only in response to detection of a person within a threshold distance of the user.
  • the surface may be a floor.
  • a method may include: displaying, in a first virtual space, a first virtual reality (VR) content to a first user of a first head-mounted display (HMD) device; determining a first level of interactivity for a virtual object within the first virtual space; determining a first probable-collision frame representing a first region with a first probability of entry by the first user based on the first level of interactivity for the virtual object and a location of the virtual object within the first virtual space; and projecting, on a surface proximate to the first user, a first indication of the first probable-collision frame.
  • determining the first level of interactivity for the virtual object within the first virtual space may include calculating an interactivity value, wherein the interactivity value may include an estimated likelihood of the first user moving toward the virtual object within the first virtual space.
  • a method may further include: displaying, in a second virtual space, a second virtual reality (VR) content to a second user of a second head-mounted display (HMD) device; determining a second level of interactivity for a second virtual object within the second virtual space; determining a second probable-collision frame representing a second region with a second probability of entry by the second user based on the second level of interactivity for the second virtual object and a location of the second virtual object within the second virtual space; and projecting, on a surface proximate to the second user, a second indication of the second probable-collision frame.
  • projecting the first indication of the first probable-collision frame and projecting the second indication of the second probable-collision frame may be performed by a common projector, and the first virtual space may be connected to the second virtual space.
  • determining the first probable-collision frame may be further based on dimensional body data for the first user.
  • a method may further include determining an average of the dimensional body data for the first user, and determining the first probable-collision frame may be further based on the average of the dimensional body data for the first user.
  • determining the first probable-collision frame may include selecting a first probable-collision frame that includes the virtual object within the first virtual space having a largest determined level of interactivity.
  • a method may further include measuring a first distance between the first user and a first external user, wherein projecting the first indication of the first probable-collision frame is responsive to the first measured distance being less than a first threshold.
  • a method may further include measuring a second distance between the first user and a second external user, and projecting the first indication of the first probable-collision frame may be responsive to at least one of the first measured distance and the second measured distance being less than a second threshold.
  • projecting the first indication of the first probable-collision frame may include rendering the first indication of the first probable-collision frame based on a point-of-view position of the first external user.
  • projecting the first indication of the first probable-collision frame may include projecting a rendering of one of the one or more virtual objects.
  • a method may further include adaptively changing the first indication of the first probable-collision frame in response to the first VR content displayed on the first HMD device.
  • projecting the first indication of the first probable-collision frame may include projecting a rendering of a heat map that indicates a probability of collision with the first user.
  • projecting the first indication of the first probable-collision frame may include projecting a warning image between the first user and another user in a real-world space, wherein the other user may be within a threshold distance from the first user, and wherein the warning image may be configured to avoid a potential collision between the first user and the other user.
  • an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a level of interactivity for one or more virtual objects within the virtual space; determining a probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
  • a method may include: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
  • a method may further include: determining a level of interactivity for one or more virtual objects within the virtual space, wherein determining the level of interactivity may include determining an average of one or more characteristic display parameters for each of the one or more virtual objects within the virtual space, wherein the characteristic display parameters may include one or more parameters selected from the group consisting of pixel size, interest of the virtual object, degree of pixel change, display time, degree of interaction induction, and location.
  • a method may further include: determining a profile information value as an average of user profile dimensional body data for the user; and determining a physical motion area ratio value based on the level of interactivity and the profile information value, wherein determining the probable-collision frame may be based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space and the physical motion area ratio value.
  • determining the physical motion area ratio value may include determining an average of the profile information value and the level of interactivity for one of the one or more virtual objects within the virtual space.
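  • the sketch below illustrates the averaging stated above: a profile information value computed as an average of dimensional body data, then averaged with the interactivity of the virtual object to give the physical motion area ratio; the normalization to [0, 1] and the example values are assumptions.

```python
# Sketch of the physical-motion-area-ratio calculation described above;
# values are assumed to be normalized to [0, 1] and the examples are invented.
def profile_information_value(dimensional_body_data):
    """Average of the user's dimensional body data (e.g., height, leg length)."""
    return sum(dimensional_body_data.values()) / len(dimensional_body_data)

def physical_motion_area_ratio(profile_value, interactivity_value):
    """Average of the profile information value and the object's interactivity."""
    return (profile_value + interactivity_value) / 2.0

profile = profile_information_value({"height": 0.8, "leg_length": 0.7, "reach": 0.6})
print(physical_motion_area_ratio(profile, interactivity_value=0.9))
```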
  • projecting the indication of the probable-collision frame on the surface proximate to the user may include: selecting the surface proximate to the user, wherein the selected surface is located between the user and an external user outside of the virtual space within a threshold distance of the user; rendering the probable-collision frame; and projecting the rendered probable-collision frame on the selected surface.
  • rendering the probable-collision frame may include: mapping a point-of-view position of the external user to the virtual space; orienting the probable-collision frame to be facing the external user; and distorting a horizontal ratio and a vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
  • projecting the indication of the probable-collision frame may include rendering a heat map as the probable-collision frame.
  • an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
  • an apparatus may include the HMD device, and the HMD device may include the projector and the processor.
  • the projector may be external to and separate from the HMD device.
  • a wireless transmit/receive unit (WTRU) may be used as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments described herein.
  • Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
  • Any of the disclosed real-time 3D content capture server embodiments and real-time 3D content rendering client embodiments may be implemented using one or both of the systems depicted in FIGs. 37 and 38. All other embodiments discussed in this detailed description may be implemented using either or both of FIG. 37 and FIG. 38 as well.
  • various hardware and software elements required for the execution of the processes described in this disclosure, such as sensors, dedicated processing modules, user interfaces, important algorithms, etc., may be omitted from FIGs. 37 and 38 for the sake of visual simplicity.
  • FIG. 37 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments.
  • the WTRU depicted in FIG. 37 may be employed to execute any of the processes disclosed herein (e.g., the processes described in relation to FIGs. 2, 14, 24, and 27).
  • As shown in FIG. 37, the WTRU 3702 may include a processor 3718, a communication interface 3719 including a transceiver 3720, a transmit/receive element 3722, a speaker/microphone 3724, a keypad 3726, a display/touchpad 3728, a non-removable memory 3730, a removable memory 3732, a power source 3734, a global positioning system (GPS) chipset 3736, and sensors 3738.
  • the processor 3718 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 3718 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 3702 to operate in a wireless environment.
  • the processor 3718 may be coupled to the transceiver 3720, which may be coupled to the transmit/receive element 3722. While FIG. 37 depicts the processor 3718 and the transceiver 3720 as separate components, it will be appreciated that the processor 3718 and the transceiver 3720 may be integrated together in an electronic package or chip.
  • the transmit/receive element 3722 may be configured to transmit signals to, or receive signals from, a base station over the air interface 3716.
  • the transmit/receive element 3722 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 3722 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples.
  • the transmit/receive element 3722 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 3722 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 3702 may include any number of transmit/receive elements 3722. More specifically, the WTRU 3702 may employ MIMO technology. Thus, in one embodiment, the WTRU 3702 may include two or more transmit/receive elements 3722 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 3716.
  • the transceiver 3720 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 3722 and to demodulate the signals that are received by the transmit/receive element 3722.
  • the WTRU 3702 may have multi-mode capabilities.
  • the transceiver 3720 may include multiple transceivers for enabling the WTRU 3702 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
  • the processor 3718 of the WTRU 3702 may be coupled to, and may receive user input data from, the speaker/microphone 3724, the keypad 3726, and/or the display/touchpad 3728 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 3718 may also output user data to the speaker/microphone 3724, the keypad 3726, and/or the display/touchpad 3728.
  • the processor 3718 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 3730 and/or the removable memory 3732.
  • the non-removable memory 3730 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 3732 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 3718 may access information from, and store data in, memory that is not physically located on the WTRU 3702, such as on a server or a home computer (not shown).
  • the WTRU 3702 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., signals associated with particular subframes for both the UL (e.g., for transmission) and the downlink (e.g., for reception)) may be concurrent and/or simultaneous.
  • the full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or processor 3718).
  • Communication interface 3892 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces.
  • Processor 3894 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
  • Data storage 3896 may take the form of any non-transitory computer-readable medium or combination of such media; some examples include flash memory, read-only memory (ROM), and random-access memory (RAM), as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used.
  • data storage 3896 contains program instructions 3897 executable by processor 3894 for carrying out various combinations of the various network-entity functions described herein.
  • the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like.
  • the communications system 100 may also include a base station 114a and/or a base station 114b.
  • Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112.
  • the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
  • the base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc.
  • the base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum.
  • a cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors.
  • the cell associated with the base station 114a may be divided into three sectors.
  • the base station 114a may include three transceivers, i.e., one for each sector of the cell.
  • the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell.
  • beamforming may be used to transmit and/or receive signals in desired spatial directions.
  • the base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.).
  • the air interface 116 may be established using any suitable radio access technology (RAT).
  • the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like.
  • the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA).
  • WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+).
  • HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies.
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles.
  • the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., an eNB and a gNB).
  • the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
  • the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN).
  • the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell.
  • the base station 114b may have a direct connection to the Internet 110.
  • the base station 114b may not be required to access the Internet 110 via the CN 106/115.
  • the RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d.
  • the data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like.
  • the CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication.
  • the CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112.
  • the PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS).
  • the Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), the user datagram protocol (UDP), and/or the internet protocol (IP) in the TCP/IP internet protocol suite.
  • the networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers.
  • the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
  • Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links).
  • the WTRU 102c shown in FIG. 39 may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
  • FIG. 40 is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment.
  • the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the RAN 104 may also be in communication with the CN 106.
  • the RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment.
  • the eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116.
  • the eNode-Bs 160a, 160b, 160c may implement MIMO technology.
  • the eNode-B 160a for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
  • Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 40, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
  • the CN 106 shown in FIG. 40 may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements is depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
  • the MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node.
  • the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like.
  • the MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
  • the SGW 164 may be connected to each of the eNode Bs 160a, 160b, 160c in the RAN 104 via the S1 interface.
  • the SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c.
  • the SGW 164 may perform other functions, such as anchoring user planes during inter- eNode B handovers, triggering paging if DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
  • the SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
  • the CN 106 may facilitate communications with other networks.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
  • the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108.
  • the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
  • Although the WTRU is described in FIGs. 39-41 as a wireless terminal, in certain embodiments such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
  • a WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP.
  • the AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic in to and/or out of the BSS.
  • Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs.
  • Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations.
  • Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA.
  • the traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic.
  • the peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS).
  • the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS).
  • a WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other.
  • the IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
  • the AP may transmit a beacon on a fixed channel, such as a primary channel.
  • the primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling.
  • the primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP.
  • Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems.
  • The STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off.
  • One STA (e.g., only one station) may transmit at any given time in a given BSS.
  • High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
  • Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels.
  • the 40 MHz, and/or 80 MHz, channels may be formed by combining contiguous 20 MHz channels.
  • a 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration.
  • the data may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing, and time domain processing, may be done on each stream separately.
  • the streams may be mapped on to the two 80 MHz channels, and the data may be transmitted by a transmitting STA.
  • the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
  • Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah.
  • the channel operating bandwidths, and carriers, are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac.
  • 802.11af supports 5 MHz, 10 MHz, and 20 MHz bandwidths in the TV White Space (TVWS) spectrum.
  • 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum.
  • 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum.
  • the WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
  • Examples of computer-readable storage media include a read-only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
  • Some embodiments may be comprised of one or more generic or specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
  • some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic.
  • some embodiments of the present disclosure may combine one or more processing devices with one or more software components (e.g., program code, firmware, resident software, micro-code, etc.) stored in a tangible computer-readable memory device, which in combination form a specifically configured apparatus that performs the functions as described herein.
  • The software component portions of the modules may be written in any computer language and may be a portion of a monolithic code base or may be developed in more discrete code portions, such as is typical in object-oriented computer languages.
  • the modules may be distributed across a plurality of computer platforms, servers, terminals, and the like. A given module may even be implemented such that separate processor devices and/or computing hardware platforms perform the described functions.
  • an embodiment may be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein.
  • Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Some embodiments of methods and systems disclosed herein include displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.

Description

METHOD AND APPARATUS FOR PROJECTING COLLISION-DETERRENTS IN VIRTUAL REALITY
VIEWING ENVIRONMENTS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C.
§119(e) from, U.S. Provisional Patent Application Serial No. 62/490,402, entitled "Method and Apparatus for Projecting Collision-Deterrents in Virtual Reality Viewing Environments," filed April 26, 2017, the entirety of which is incorporated herein by reference.
BACKGROUND
[0002] In the case of VR content and certain immersive AR experiences, the associated display devices may be bulky and tethered to large processing units and may prevent a user from being aware of various hazards and events in the real-world environment. As processing power continues to evolve and these devices become untethered and users become more mobile, while still being isolated from the real- world physical reality around them, these systems become more dangerous to use independently. For at least this reason, current VR and immersive AR content consumption often relies on a human supervisor being present for safety purposes. However, this is a prohibitive requirement as the human supervisor may interfere with the VR user's experience by getting in the way.
SUMMARY
[0003] For some embodiments, a method may include: displaying, in a first virtual space, a first virtual reality (VR) content to a first user of a first head-mounted display (HMD) device; determining a first level of interactivity for a virtual object within the first virtual space; determining a first probable-collision frame representing a first region with a first probability of entry by the first user based on the first level of interactivity for the virtual object and a location of the virtual object within the first virtual space; and projecting, on a surface proximate to the first user, a first indication of the first probable-collision frame.
[0004] For some embodiments, determining the first level of interactivity for the virtual object within the first virtual space may include calculating an interactivity value, wherein the interactivity value may include an estimated likelihood of the first user moving toward the virtual object within the first virtual space. [0005] For some embodiments, a method may further include: displaying, in a second virtual space, a second virtual reality (VR) content to a second user of a second head-mounted display (HMD) device; determining a second level of interactivity for a second virtual object within the second virtual space;
determining a second probable-collision frame representing a second region with a second probability of entry by the second user based on the second level of interactivity for the second virtual object and a location of the second virtual object within the second virtual space; and projecting, on a second surface proximate to the second user, a second indication of the second probable-collision frame.
[0006] For some embodiments, projecting the first indication of the first probable-collision frame and projecting the second indication of the second probable-collision frame may be performed by a common projector, and the first virtual space may be connected to the second virtual space.
[0007] For some embodiments, determining the first probable-collision frame may be further based on dimensional body data for the first user.
[0008] For some embodiments, a method may further include determining an average of the dimensional body data for the first user, and determining the first probable-collision frame may be further based on the average of the dimensional body data for the first user.
[0009] For some embodiments, determining the first probable-collision frame may include selecting a first probable-collision frame that includes the virtual object within the first virtual space having a largest determined level of interactivity.
[0010] For some embodiments, a method may further include measuring a first distance between the first user and a first external user, wherein projecting the first indication of the first probable-collision frame is responsive to the first measured distance being less than a first threshold.
[0011] For some embodiments, a method may further include measuring a second distance between the first user and a second external user, and projecting the first indication of the first probable-collision frame may be responsive to at least one of the first measured distance and the second measured distance being less than a second threshold.
[0012] For some embodiments, projecting the first indication of the first probable-collision frame may include rendering the first indication of the first probable-collision frame based on a point-of-view position of the first external user.
[0013] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a rendering of one of the one or more virtual objects.
[0014] For some embodiments, a method may further include adaptively changing the first indication of the first probable-collision frame in response to the first VR content displayed on the first HMD device. [0015] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a rendering of a heat map that indicates a probability of collision with the first user.
[0016] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a warning image between the first user and another user in a real-world space, wherein the other user may be within a threshold distance from the first user, and wherein the warning image may be configured to avoid a potential collision between the first user and the other user.
[0017] For some embodiments, an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a level of interactivity for one or more virtual objects within the virtual space; determining a probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0018] For some embodiments, a method may include: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0019] For some embodiments, a method may further include: determining a level of interactivity for one or more virtual objects within the virtual space, wherein determining the level of interactivity may include determining an average of one or more characteristic display parameters for each of the one or more virtual objects within the virtual space, wherein the characteristic display parameters may include one or more parameters selected from the group consisting of pixel size, interest of the virtual object, degree of pixel change, display time, degree of interaction induction, and location.
[0020] For some embodiments, a method may further include: determining a profile information value as an average of user profile dimensional body data for the user; and determining a physical motion area ratio value based on the level of interactivity and the profile information value, wherein determining the probable-collision frame may be based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space and the physical motion area ratio value.
[0021] For some embodiments, determining the physical motion area ratio value may include determining an average of the profile information value and the level of interactivity for one of the one or more virtual objects within the virtual space. [0022] For some embodiments, projecting the indication of the probable-collision frame on the surface proximate to the user may include: selecting the surface proximate to the user, wherein the selected surface is located between the user and an external user outside of the virtual space within a threshold distance of the user; rendering the probable-collision frame; and projecting the rendered probable-collision frame on the selected surface.
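The averaging relationships of paragraphs [0019]-[0021] can be summarized compactly. The notation below is introduced here purely for illustration (none of the symbols appear in the disclosure): let $w_k(v)$ denote the characteristic display parameter scores of a virtual object $v$ (e.g., pixel size, interest, degree of pixel change, display time, degree of interaction induction, location), and let $b_m$ denote the user's dimensional body data values. Then

$$I(v) = \frac{1}{K}\sum_{k=1}^{K} w_k(v), \qquad P = \frac{1}{M}\sum_{m=1}^{M} b_m, \qquad R(v) = \frac{I(v) + P}{2},$$

where $I(v)$ is the level of interactivity of the virtual object, $P$ is the profile information value, and $R(v)$ is the physical motion area ratio value.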
[0023] For some embodiments, rendering the probable-collision frame may include: mapping a point-of-view position of the external user to the virtual space; orienting the probable-collision frame to be facing the external user; and distorting a horizontal ratio and a vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
[0024] For some embodiments, projecting the indication of the probable-collision frame may include rendering a heat map as the probable-collision frame.
[0025] For some embodiments, an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0026] For some embodiments, an apparatus may include the HMD device, and the HMD device may include the projector and the processor.
[0027] For some embodiments, the projector may be external to and separate from the HMD device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention and explain various principles and advantages of those embodiments.
[0029] FIG. 1 depicts a high-level component diagram of an apparatus for projecting collision-deterrents in virtual reality viewing environments for some embodiments.
[0030] FIG. 2 depicts a flow chart of a method for projecting collision-deterrents in virtual reality viewing environments for some embodiments.
[0031] FIG. 3 depicts virtual objects around an HMD user and a VR scene for some embodiments.
[0032] FIG. 4 depicts an HMD user detecting an external user for some embodiments. [0033] FIG. 5 depicts a geometric explanation for calculating distances for some embodiments.
[0034] FIG. 6 depicts an HMD user surrounded by virtual objects detecting an external user for some embodiments.
[0035] FIG. 7 depicts an HMD user surrounded by virtual objects measuring an external user at a first predetermined distance for some embodiments.
[0036] FIG. 8 depicts a divided virtual frame having a virtual object at two different time instances for some embodiments.
[0037] FIG. 9 depicts a divided virtual frame having multiple virtual objects with interactivity scores for some embodiments.
[0038] FIG. 10 depicts a selection of a probable-collision frame for some embodiments.
[0039] FIG. 11 depicts a table of example characteristic display parameters, corresponding weight scores, and calculated interactivities for four virtual objects for some embodiments.
[0040] FIG. 12 depicts a top-down view having an overlap area that is used to determine a geographic coordinate position (GCP) weight score for some embodiments.
[0041] FIG. 13 depicts a chart that specifies attributes of each virtual object in a VR game for some embodiments.
[0042] FIG. 14 depicts a flow chart of a method for selecting a probable-collision frame for some embodiments.
[0043] FIG. 15 depicts an example scenario with an external user at a second distance for some embodiments.
[0044] FIG. 16 depicts a selection of a surface for projecting content for some embodiments.
[0045] FIG. 17 depicts a table of example user profile parameters, corresponding weight scores, and calculated user profile score for four HMD users for some embodiments.
[0046] FIG. 18 depicts an HMD projecting virtual contents based on a physical motion area ratio for some embodiments.
[0047] FIG. 19 depicts a comparison of the first distance and the second distance if the physical motion area ratio has a maximum value for some embodiments.
[0048] FIG. 20 depicts a virtual object that is rendered to be projected without distortion for some embodiments.
[0049] FIG. 21 depicts a virtual object that is rendered to be projected without distortion at multiple viewing angles for some embodiments. [0050] FIG. 22 depicts an HMD projecting a heat map indicating a collision area for some embodiments.
[0051] FIG. 23 depicts a projection of a probable-collision frame that varies as an external user changes position for some embodiments.
[0052] FIG. 24 depicts a flow chart of a method for projecting a rendered probable-collision frame for some embodiments.
[0053] FIG. 25 depicts an HMD projecting a plurality of probable-collision frames based on respective physical motion area ratios for some embodiments.
[0054] FIGs. 26A and 26B depict a projection for preventing a collision with two external users for some embodiments.
[0055] FIG. 27 depicts a time sequence diagram of a method and apparatus for projecting collision-deterrents in virtual reality viewing environments for some embodiments.
[0056] FIG. 28 depicts an example of an HMD projection direction in view of virtual object motion for some embodiments.
[0057] FIG. 29 depicts a non-moving virtual object projected onto a floor surface for some embodiments.
[0058] FIG. 30 depicts an HMD user selectively projecting virtual objects for some embodiments.
[0059] FIG. 31 depicts a multi-user VR projection system having a high-performance projector for some embodiments.
[0060] FIG. 32 depicts a projection that indicates movement-likelihood information to an external user for some embodiments.
[0061] FIG. 33 depicts an HMD user reacting to a moving virtual object and a heat-map projection for some embodiments.
[0062] FIG. 34 depicts a projection of information about virtual walls in a VR scene for some embodiments.
[0063] FIG. 35 depicts an external user with high level permissions viewing the VR content for some embodiments.
[0064] FIG. 36 is a flowchart for some embodiments for displaying an indication of a probable-collision frame of a surface proximate to an HMD wearer and adaptively changing the indication in response to VR content displayed to the HMD wearer. [0065] FIG. 37 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as an HMD, an external projection provider, or as projection unit for some embodiments.
[0066] FIG. 38 depicts an exemplary network entity that may be employed as server for some embodiments.
[0067] FIG. 39 is a system diagram illustrating an example communications system for some embodiments.
[0068] FIG. 40 is a system diagram illustrating an example radio access network (RAN) and an example core network (CN) that may be used within the communications system illustrated in FIG. 39 for some embodiments.
[0069] FIG. 41 is a system diagram illustrating a further example RAN and a further example CN that may be used within the communications system illustrated in FIG. 39 for some embodiments.
[0070] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
[0071] The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to
understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
[0072] The entities, connections, arrangements, and the like that are depicted in— and described in connection with— the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements— that may in isolation and out of context be read as absolute and therefore limiting— may only properly be read as being constructively preceded by a clause such as "In at least one embodiment,...." For brevity and clarity of presentation, this implied leading clause is not repeated ad nauseum in the detailed description of the drawings.
DETAILED DESCRIPTION
[0073] In today's internet age, there is a trend towards consuming richer and more immersive digital content. How we access this content is changing at a rapid pace. Streaming digital data has become a standard means by which a user receives digital content. Digital media with greater levels of realism are encoded using high-resolution formats which demand large file sizes. Transporting this information requires a proportionally large allocation of communication resources. Visually rich virtual reality (VR) content and augmented reality (AR) content both require novel display devices for proper rendering.
[0074] In the case of VR content and certain immersive AR experiences, the associated display devices may be bulky and tethered to large processing units and may prevent a user from being aware of various hazards and events in the real-world environment. As processing power continues to evolve and these devices become untethered and users become more mobile, while still being isolated from the real- world physical reality around them, these systems become more dangerous to use independently. For at least this reason, current VR and immersive AR content consumption often relies on a human supervisor being present for safety purposes. However, this is a prohibitive requirement as the human supervisor may interfere with the VR user's experience by getting in the way.
[0075] VR systems make it possible to experience a virtual world as if it was the real world. While wearing an HMD capable of experiencing VR, the user is shielded from visual contact with the outside environment. Barring ancillary functionality, an external user (who is not wearing an HMD) is unaware of what content the HMD wearer is experiencing. However, contents of the HMD may be mirrored on an external monitor or projector, thereby allowing external users to view the VR content.
[0076] Meanwhile, the HMD wearer uses their hands and moves around their present real-world environment, and these movements may cause collision with nearby people and objects. In order to prevent such a collision, one device displays a partial image of the external environment within the VR environment.
[0077] A wearer of an HMD (who may be disconnected from the outside world) may desire to reduce collisions with an external viewer (who may not know what is happening in the VR environment). Although some systems modify HMD content and/or adjust the transparency to prevent the wearer from colliding with the external viewer, some methods may interfere with the overall immersiveness of the VR experience.
[0078] The present disclosure teaches a method and apparatus according to some embodiments for projecting contents of a virtual world onto a surface in the real world so as to aid an external user in avoiding a collision with a head-mounted display (HMD) user. The execution of the method is based on an HMD user's virtual context. The external user may understand what is happening in the virtual world in an effective way. Furthermore, the present method according to some embodiments may reduce a danger of collision between the HMD user and the external user. In some embodiments, by projecting relevant information associated with the HMD user's VR experience, the external viewer, not the HMD wearer, may take evasive action to avoid collisions. As such, it may decrease the likelihood that the HMD wearer will be disturbed during the VR experience.
[0079] In some embodiments, virtual objects are analyzed in order to determine which portions of a VR scene the HMD user is likely to move towards and away from. Methods described herein in accordance with some embodiments further consider specific HMD user characteristics to facilitate a highly personalized collision avoidance system.
[0080] Included in this disclosure are several methods in accordance with some embodiments for rendering virtual objects and intuitively expressing them. For example, in some embodiments, color mapping, which may process motion information between moving virtual objects and the HMD user, may be used to intuitively visualize degrees of probable motion. In another example in accordance with some embodiments, depending on, e.g., the rights of an external user requesting information, the HMD user may render and project the virtual objects themselves.
[0081] Exemplary systems and processes disclosed herein in accordance with some embodiments determine if a virtual reality (VR) user is within a threshold distance of an external user and responsively project a warning image in between the two users to help mitigate a potential collision, thereby helping the VR user avoid the hazard without breaking the immersiveness of the VR experience. Calculations are made based on relative trajectories, and in some cases expected trajectories, to determine the location of potential collisions. The context of the VR user's present VR scene is considered when determining expected trajectories. Information about virtual objects within the VR scene and user profile information are used as input for determining how to render and project the warning image. The warning image may be a heat map indicating areas the VR user is likely to move to, a properly scaled rendering of a virtual object, a stop sign, and the like.
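As one purely illustrative way to estimate where such a potential collision might occur, the relative floor-plane positions and velocities of the two users can be used to find the time of closest approach and the corresponding midpoint. The sketch below is an assumption about how such a calculation could be carried out; none of the function or parameter names come from the disclosure.

```python
# Illustrative sketch: estimate a potential collision location from relative trajectories.
# Positions and velocities are 2D floor-plane vectors (x, z), in meters and meters/second.

def predict_collision_point(p_vr, v_vr, p_ext, v_ext, horizon_s=3.0):
    """Return (time_of_closest_approach, midpoint) or None if no approach is expected."""
    rel_p = (p_ext[0] - p_vr[0], p_ext[1] - p_vr[1])
    rel_v = (v_ext[0] - v_vr[0], v_ext[1] - v_vr[1])
    speed_sq = rel_v[0] ** 2 + rel_v[1] ** 2
    if speed_sq < 1e-9:
        return None  # no relative motion
    # Time minimizing |rel_p + t * rel_v| (standard closest-approach formula).
    t_star = -(rel_p[0] * rel_v[0] + rel_p[1] * rel_v[1]) / speed_sq
    if t_star < 0.0 or t_star > horizon_s:
        return None  # users are separating, or the approach is beyond the look-ahead window
    vr_at_t = (p_vr[0] + v_vr[0] * t_star, p_vr[1] + v_vr[1] * t_star)
    ext_at_t = (p_ext[0] + v_ext[0] * t_star, p_ext[1] + v_ext[1] * t_star)
    midpoint = ((vr_at_t[0] + ext_at_t[0]) / 2.0, (vr_at_t[1] + ext_at_t[1]) / 2.0)
    return t_star, midpoint
```

The returned midpoint is one plausible spot at which to project the warning image (for example, a stop sign or a heat map).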
[0082] Moreover, any of the embodiments, variations, and permutations described in the preceding paragraphs and anywhere else in this disclosure may be implemented with respect to any embodiments, including with respect to any method embodiments and with respect to any system embodiments.
[0083] Before proceeding with this detailed description, it is noted that the entities, connections, arrangements, and the like that are depicted in— and described in connection with— the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements— that may in isolation and out of context be read as absolute and therefore limiting— may only properly be read as being constructively preceded by a clause such as "In at least one embodiment,...." And it is for reasons akin to brevity and clarity of presentation that this implied leading clause is not repeated ad nauseum in this detailed description.
[0084] FIG. 1 depicts a high-level component diagram of an apparatus 150 for projecting collision-deterrents in virtual reality viewing environments for some embodiments. In particular, FIG. 1 depicts (i) an HMD 152, (ii) a server 154, (iii) a projection unit 156, and (iv) an external projection provider 158, as well as a functional scope of each of the previously listed.
[0085] An HMD 152 may display VR contents to an HMD user and may measure a distance between an HMD user and an external user using sensors inside an HMD 152.
[0086] A server 154 may send characteristic display parameters and a user profile to the external projection provider. The server 154 may be a component residing in local memory of the HMD 152 and/or a component existing in cloud storage or a network outside the HMD 152. The server 154 may be embodied as the network entity 3890 of FIG. 38. For some embodiments, information about a present VR game and a current user may exist in a server 154.
[0087] A projection unit 156 may adjust a display device and project rendered VR contents (e.g., a rendered probable-collision frame) received from an external projection provider 158. The projector unit 156 may receive the rendered information and project the rendered information in real space. A projector unit 156 may rotate according to a selected projection direction. The projector unit 156 may be in the HMD 152 or may be an external device installed on (or onto) the HMD 152. For some embodiments, a projector unit 156 may be mounted on the HMD 152 as an external device, unless explicitly stated otherwise.
[0088] An external projection provider 158 may calculate distances between users and compare this distance against a first predetermined distance (or a threshold distance) as well as a variable second distance. The external projection provider 158 may monitor an external user's position and select a probable-collision frame. The external projection provider 158 may also render the probable-collision frame and select a surface for projecting the rendered probable-collision frame. The external projection provider 158 may send the rendered probable-collision frame to the projection unit. For some embodiments, images may be rendered for projecting to nullify (or prevent) a collision between users. The external projection provider 158 may be implemented as a hardware module and/or software module. In some embodiments, the external projection provider is embedded in the HMD 152.
[0089] In some embodiments, the external projection provider may be attached to an exterior of the HMD 152 or the user and may communicate with the HMD 152 via a wired connection or a wireless connection (USB/WiFi/Bluetooth/LAN). In some embodiments, the external projection provider may be located on a cloud network outside of the HMD 152. In some embodiments, the external projection provider 158 may be embedded in the HMD 152, unless explicitly stated otherwise. For some embodiments, projecting an indication of a probable-collision frame may include, e.g., selecting a surface upon which to project the indication of the probable-collision frame, adjusting a projection device to face the selected surface, rendering the probable-collision frame, and projecting the rendered probable-collision frame on the selected surface between the user and the external user. For some embodiments, projecting an indication of a probable-collision frame may include rendering a heat map as the probable-collision frame. For some embodiments, the indication of the probable-collision frame may be adaptively changed in response to VR content displayed on an HMD to an HMD user. For some embodiments, VR content may be displayed on an HMD, a mixed reality (MR) device, or a VR device. In some embodiments, an augmented reality (AR) device may likewise be used. Although VR content is described herein with respect to some embodiments, AR content or MR content may be used as appropriate in the embodiments or in some other embodiments. Although an HMD is described herein with respect to some embodiments and often with respect to VR content in particular, the HMD may be replaced with a head-mounted or other type of wearable display device, e.g., an MR device, an AR device, or a VR device, as appropriate.
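A heat map used as the indication could, for example, encode an assumed probability of entry that decays with distance from the center of the probable-collision frame. The sketch below is only one possible realization; the Gaussian falloff and the color ramp are assumptions rather than details taken from the disclosure.

```python
import math

def render_heat_map(frame_radius_m, grid_size=64):
    """Return a grid_size x grid_size list of RGBA tuples covering the square floor
    patch centered on the probable-collision frame (side length 2 * frame_radius_m)."""
    cell = (2.0 * frame_radius_m) / grid_size
    sigma = frame_radius_m / 2.0  # assumed spread of the entry-probability model
    rows = []
    for j in range(grid_size):
        row = []
        for i in range(grid_size):
            # Floor-plane offset of this cell from the frame center.
            dx = (i + 0.5) * cell - frame_radius_m
            dz = (j + 0.5) * cell - frame_radius_m
            dist = math.hypot(dx, dz)
            # Assumed model: probability of entry falls off as a Gaussian of distance.
            p = math.exp(-(dist ** 2) / (2.0 * sigma ** 2))
            # Opaque red where entry is likely, fading to transparent yellow elsewhere.
            row.append((255, int(255 * (1.0 - p)), 0, int(255 * p)))
        rows.append(row)
    return rows
```

The resulting image could then be handed to the projection unit 156 in place of (or alongside) a rendered virtual object.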
[0090] For some embodiments, rendering the probable-collision frame may include: mapping a point-of-view position of the external user to a virtual space, orienting the probable-collision frame to be facing the external user, and distorting a horizontal and vertical ratio of the probable-collision frame based on the point-of-view position of the external user, wherein the VR content is displayed to the user in a virtual space. For some embodiments, rendering the probable-collision frame also may include rendering one of the one or more virtual objects as the probable-collision frame.
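One simple way to realize the distortion of the horizontal and vertical ratios is to pre-stretch the frame by the inverse cosine of the external user's viewing angle relative to the projection surface, so that the image appears approximately undistorted from that user's point of view. The helper below is a hedged sketch of this anamorphic idea; the geometry (a planar surface with a known unit normal and a known eye position) is an assumption, not a constraint stated in the disclosure.

```python
import math

def distortion_ratios(surface_center, surface_normal, external_eye):
    """Return (horizontal_ratio, vertical_ratio) used to pre-stretch the rendered
    probable-collision frame for an external viewer.  Inputs are 3D (x, y, z)
    tuples in meters; surface_normal is assumed to be a unit vector."""
    # Unit vector from the surface center toward the external user's eye.
    view = tuple(e - c for e, c in zip(external_eye, surface_center))
    view_len = math.sqrt(sum(v * v for v in view)) or 1e-9
    view = tuple(v / view_len for v in view)
    # Cosine of the angle between the viewing direction and the surface normal.
    cos_view = abs(sum(v * n for v, n in zip(view, surface_normal)))
    cos_view = max(cos_view, 0.1)  # clamp to avoid extreme stretching at grazing angles
    # Assumed convention: only the frame axis foreshortened by the oblique view
    # (treated here as the "vertical" axis) is compensated; the other axis is unchanged.
    return 1.0, 1.0 / cos_view
```

A fuller implementation would typically use a homography relating the projector, the surface, and the external user's viewpoint, but the single-axis compensation above conveys the idea.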
[0091] FIG. 2 depicts a flow chart 200 of an example method for projecting collision-deterrents in virtual reality viewing environments, in accordance with some embodiments. In particular, the example process may be partitioned into three high-level sub-processes (step one, step two, and step three).
[0092] For some embodiments, step one generally compares a measured distance between users with a first predetermined distance. Step one may include displaying 202 VR contents on an HMD. Step one also may include monitoring for 204 and detecting 206 an external user using a sensor (e.g., a depth camera, a proximity sensor, and/or a fish-eye lens) and calculating 208 the distance between the HMD user and the external user. If an external user is not detected, the process may return to check for detection 204 of an external user. Step one also may include comparing 210 the distance between the HMD user and the external user with the first predetermined distance and verifying that the external user is located at or within the first predetermined distance. If the distance between the HMD user and the external user is less than the first predetermined distance, then the process continues to step two. If the distance is greater than or equal to the first predetermined distance, the process may return to calculate 208 the distance between the HMD wearer and the external user.
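A minimal sketch of step one is given below, under the assumption of two hypothetical sensor helpers (detect_external_user and distance_to) standing in for the depth-camera/proximity-sensor processing; neither the helpers nor the threshold value comes from the disclosure.

```python
FIRST_PREDETERMINED_DISTANCE_M = 3.0  # example value; the disclosure does not fix a threshold

def step_one(hmd_sensors):
    """Monitor (204/206), measure (208), and compare (210) until an external user
    is detected within the first predetermined distance."""
    while True:
        external_user = hmd_sensors.detect_external_user()   # hypothetical sensor helper
        if external_user is None:
            continue  # keep monitoring
        distance_m = hmd_sensors.distance_to(external_user)  # hypothetical range measurement
        if distance_m < FIRST_PREDETERMINED_DISTANCE_M:
            return external_user, distance_m  # proceed to step two
```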
[0093] For some embodiments, step two generally selects a probable-collision frame and renders the probable-collision frame. Step two may include receiving 212 pixel data from a virtual pixel storage via a network, e.g., using a network transmission protocol. Step two also may include determining 214 (e.g., calculating, retrieving from local or remote storage, receiving, etc.) an interactivity of a virtual object by referring to the characteristic display parameters of the virtual object. The virtual object's interactivity may be calculated based on pixel data of the virtual object (or, for some embodiments, may be indicated by an external service). Step two also may include determining 216 a probable-collision frame based on the interactivity. Step two also may include calculating 218 profile information by referencing a user profile (which may include user profile data). Step two also may include calculating 220 a physical motion area ratio (or physical motion area ratio value) by referencing the interactivity and the profile information. Step two also may include rendering 222 the probable-collision frame into the external virtual projection content based on the predicted physical motion area ratio (and, e.g., in some embodiments, an HMD user profile and the distance between the HMD user and the external user). A probable-collision frame may be rendered 222 to nullify (or prevent) a collision between users. Step two also may include calculating 224 the distance between the HMD wearer and the external user. Step two also may include comparing 226 the distance between the HMD user and the external user with a second distance (or second distance threshold). If the distance is less than the second distance, the process may continue to step three. If the distance is not less than the second distance (threshold), the process may return to compare 210 the distance with the first predetermined distance.
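The blocks of step two can be strung together as in the sketch below. The equal weighting of characteristic display parameters, the use of normalized 0-1 scores, and the derivation of the variable second distance by scaling the first predetermined distance with the physical motion area ratio are all illustrative assumptions; only the overall ordering of blocks 212-226 follows the description above.

```python
def interactivity(characteristic_display_parameters):
    """Block 214: average of the (assumed 0-1) weight scores for one virtual object."""
    scores = list(characteristic_display_parameters.values())
    return sum(scores) / len(scores)

def step_two(virtual_objects, user_profile, first_distance_m, measured_distance_m):
    # Block 216: take the probable-collision frame to be the frame containing the
    # virtual object with the largest interactivity.
    scored = {name: interactivity(params) for name, params in virtual_objects.items()}
    frame_object = max(scored, key=scored.get)

    # Block 218: profile information value as the average of dimensional body data.
    profile_value = sum(user_profile.values()) / len(user_profile)

    # Block 220: physical motion area ratio as the average of the two values above.
    motion_ratio = (scored[frame_object] + profile_value) / 2.0

    # Assumed relationship: the (variable) second distance grows with the ratio.
    second_distance_m = first_distance_m * motion_ratio

    # Block 226: continue to step three only if the external user is inside it.
    return frame_object, motion_ratio, measured_distance_m < second_distance_m
```

For example, an object whose parameter scores are {"pixel_size": 0.8, "interaction_induction": 0.9} has an interactivity of 0.85; combined with a profile information value of 0.5, the physical motion area ratio would be 0.675.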
[0094] For some embodiments, step three generally projects VR contents (e.g., rendered probable-collision frames) onto the real-world environment. Step three may include adjusting 228 the projection device to face the selected media (and the external user's location (and velocity)) and projecting 230 the rendered probable-collision frame. If the external user is positioned within the second distance, the rendered probable-collision frame (e.g., heat map or virtual object) may be displayed using the projector unit. The projector unit may be adjusted to project onto a previously selected medium (or surface). The projector unit may include a motor driver to adjust the direction of projection. After the projector is oriented, the rendered probable-collision frame may be projected toward the selected medium (or selected surface). The area of projection may be mainly the area between the HMD wearer and the external user. This space may be used to present information such as text or image data. In addition, if the projection region is limited to only the adjustable distance area, contextual information may be provided with a minimal amount of image resources. If a rendered image (or probable-collision frame) is projected, an external user may see the projected image and stop moving; however, additional motion may still occur and lead to a collision. Therefore, in some embodiments, the example process may further render and project while tracking the position and motion of the external user.
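As an illustration of the three sub-processes above, the following Python sketch outlines one possible control loop. The helper names (e.g., measure_distance, select_probable_collision_frame, project) are hypothetical stand-ins for the sensor, rendering, and projector interfaces described in this disclosure; this is a minimal sketch, not a definitive implementation of the claimed method.

```python
# Minimal sketch of the three-step loop of FIG. 2; the hmd, projection_provider,
# and projector objects are hypothetical interfaces assumed for illustration.

FIRST_PREDETERMINED_DISTANCE = 4.0  # meters (example value used in this disclosure)

def run_collision_deterrent_loop(hmd, projection_provider, projector):
    while hmd.is_displaying_vr_content():
        # Step one: detect an external user and compare the measured
        # distance with the first predetermined distance.
        external_user = hmd.detect_external_user()
        if external_user is None:
            continue
        distance = hmd.measure_distance(external_user)
        if distance >= FIRST_PREDETERMINED_DISTANCE:
            continue

        # Step two: select and render the probable-collision frame, and
        # derive the second distance from the physical motion area ratio.
        frame = projection_provider.select_probable_collision_frame()
        ratio = projection_provider.physical_motion_area_ratio(frame, hmd.user_profile)
        rendered = projection_provider.render(frame, ratio)
        second_distance = projection_provider.second_distance(ratio)

        # Step three: project onto a real-world surface once the external
        # user is within the second distance.
        if hmd.measure_distance(external_user) < second_distance:
            projector.adjust_orientation(toward=external_user)
            projector.project(rendered)
```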
[0095] An external user who does not wear the HMD may experience the contents of the virtual space. Because the rendered VR content may be modified to suit the context, the privacy of the HMD wearer may be protected. In addition, since a visual indication of the predicted movements of the HMD wearer is provided, the external user may avoid collisions with the HMD wearer. Also, since the collision-avoidance contents may be viewed from the viewpoint of the external user rather than the viewpoint of the wearer of the HMD, the satisfaction provided by the immersiveness of the virtual space may be enhanced. The example scenarios disclosed herein according to some embodiments are based on an AR game applied to a VR environment. For some embodiments, a visual indication may be projected on a floor surface within proximity of the user.
[0096] FIG. 3 depicts virtual objects 302, 304 around an HMD user and a VR scene 300 for some embodiments. The VR content and virtual objects 302, 304 from the AR game are mapped and displayed in the 360-degree space on the HMD. The VR content 306 may be a video stream generated by a 360-degree camera or an interactive game.
[0097] FIG. 4 depicts an environment 400 of an HMD user (or wearer) 402 detecting an external user 404 for some embodiments. In particular, the HMD (or wearable device) 406 recognizes and distinguishes external users using a sensor (e.g., a depth camera, a proximity sensor or a fish-eye lens) and
corresponding processing. The sensor may be embedded in the HMD or may exist in an external device. Those with skill in the art will understand that there are many techniques that may be employed to detect (or recognize) the external user 404. For example, detection may use facial recognition 408 (as shown in FIG. 4), may use edge detection, and/or may communicate with a device worn by the external user 404.
[0098] For some embodiments, while the HMD user 402 wears the HMD, and consumes VR content, an HMD sensor array may scan the surrounding environment. If the sensors detect an external user in the vicinity, the face of the external user may be analyzed. The HMD system may check whether the face of the external viewer is registered in the server or a database.
[0099] If the external user is located in the vicinity of the HMD device, the HMD receives a connection from the wearable device worn by the external viewer. The HMD device confirms from an authentication code of the wearable device whether the device is registered. At this time, a received signal strength indication (RSSI) may be used to determine how close the HMD is to the wearable device 406.

[0100] FIG. 5 depicts a geometric explanation 500 for calculating distances for some embodiments. Shown is a diagram for calculating a horizontal distance from the HMD 502 to an external object 504. The HMD 502 may measure the distance between a point-of-view of the external viewer and the HMD. (X, Y, Z) coordinates 506 of the external user's point-of-view (the origin is the HMD position) may be obtained using a trigonometric function. Moving the origin by the height of the HMD (or the HMD user's height) allows the external user's point-of-view to be expressed in (X, Y, Z) coordinates 506 if the floor is defined as the origin.
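As a simple numerical illustration of the trigonometric relationship described for FIG. 5 (a sketch under assumed inputs, not the claimed measurement method), the following snippet converts a measured range plus azimuth and elevation angles from a hypothetical HMD sensor into (X, Y, Z) coordinates of the external user's point of view, and then shifts the origin from the HMD to the floor using the HMD height.

```python
import math

def external_viewpoint_coordinates(range_m, azimuth_deg, elevation_deg, hmd_height_m):
    """Convert a measured range and angles (hypothetical sensor outputs)
    into (X, Y, Z) coordinates of the external user's point of view.

    The HMD is first treated as the origin; the Z value is then shifted by
    the HMD height so that the floor becomes the origin, as in FIG. 5.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = range_m * math.cos(el)          # horizontal distance to the viewer
    x = horizontal * math.cos(az)
    y = horizontal * math.sin(az)
    z_relative_to_hmd = range_m * math.sin(el)   # height relative to the HMD
    z_relative_to_floor = z_relative_to_hmd + hmd_height_m
    return x, y, z_relative_to_floor

# Example: a viewer 3.0 m away, 20 degrees to the right, slightly below the HMD.
print(external_viewpoint_coordinates(3.0, azimuth_deg=20.0, elevation_deg=-5.0, hmd_height_m=1.6))
```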
[0101] FIG. 6 depicts an environment 600 of an HMD user 602 surrounded by virtual objects 604, 606 detecting an external user 608 for some embodiments. The HMD then measures the distance 610 to the external user of FIG. 6. The distance 610 between the HMD wearer 602 and the external user 608 may be continuously measured by the HMD's optical distance measuring sensor for some embodiments. The optical distance measuring sensor, such as an infrared sensor, detects a direction in which the external user is located and measures a distance 610 to the external user's body from the HMD wearer.
[0102] FIG. 7 depicts an environment 700 of an HMD user 702 surrounded by virtual objects 704, 706 measuring an external user 708 at a first predetermined distance 710 for some embodiments. If the external user is at or within the first predetermined distance 710, a probable-collision frame is selected by the external projection provider. The projection provider is configured to render the probable-collision frame into the external virtual projection content within the first predetermined distance 710. The rendered external virtual projection content is designed to nullify a probable collision between the HMD wearer (or user) 702 and the external user 708. Therefore, the first predetermined distance 710 is a safety clearance distance for preparing (e.g., selecting and rendering) content that may be sent to the projector if a physical collision is determined to be imminent.
[0103] Many rationales may be employed for selecting the first predetermined distance. For example, a person is likely to move farthest if the person jumps in a certain direction. The jump motion is similar to the standing long jump. The world record for the standing long jump is 3.73 meters, and considering this figure, the movement radius of the HMD wearer is very unlikely to exceed 3.73 meters even if they are actively moving. Therefore, the minimum distance for ensuring safety between the HMD wearer and the external user may be set to about 4 m. That is, in some embodiments, the first predetermined distance may be set to 4 m, although other distances may be used.
[0104] In some embodiments, the first predetermined distance varies depending on the type of VR content. As an example, a karate action game which mainly uses kicking as user input may have a larger first predetermined distance than a VR trivia game in which the user is not expected to move much.

[0105] FIG. 8 depicts a virtual frame 800 having a virtual object 804 at two different time instances (t1 and t2) for some embodiments. A 360-degree virtual shell surrounding the HMD user 802 in a virtual world may be divided into a plurality of virtual frames. The virtual frame indicated in FIG. 8 has information about pixel motion and an interactivity property 806 of its virtual object at two different times.
[0106] FIG. 9 depicts a divided virtual frame 900 having multiple virtual objects 904, 906 with interactivity scores 908, 910 for some embodiments. For example, if a virtual object corresponding to a virtual frame is an item to be acquired in a game, the value of the interactivity is increased. However, if the virtual object is to be avoided (e.g., a bomb), the HMD wearer 902 is expected to move in the opposite direction, so the interactivity is lowered. Thus, in some embodiments, a higher value of interactivity may imply higher desirability for interactivity, while a lower value may imply a lower desirability. In other embodiments, however, other values may be used for interactivity (or for other quantities, e.g., characteristic display parameters), the meanings of the value(s) and what they imply may be reversed, or a different ordering scheme may be used than in the examples herein.
[0107] FIG. 10 depicts a selection of a probable-collision frame for some embodiments. If an external user arrives at the first predetermined distance 1006, a plurality of virtual objects present in the VR scene 1000 and their corresponding attribute information (e.g., characteristic display parameters) are received from the server. Weight scores are calculated based on characteristic display parameters and are in turn used to calculate respective interactivities of the virtual frames. The virtual frame having the highest interactivity is selected as the probable-collision frame. It is most likely that the external user 1004 and the HMD wearer 1002 would collide at that point.
[0108] FIG. 11 depicts a table 1100 of example characteristic display parameters 1104, corresponding weight scores, and calculated interactivities for four virtual objects 1102 for some embodiments. FIG. 11 is an example table 1100 specifying parameter information (e.g., characteristic display parameters 1104) and weight scores (e.g., pixel size 1108, interest of virtual object 1110, degree of pixel change 1112, display time 1114, degree of interaction induction 1116, and geographic coordinate position 1118) of virtual objects appearing at a specific point in the virtual space. The interactivity 1106 of each virtual object is calculated.
[0109] Pixel size (PS) 1108 is the size of the virtual object displayed within the virtual environment, measured in pixels. The larger the pixel size, the larger the weight score becomes. This is because a larger virtual object results in a larger range of motion in which the user interacts with the virtual object.

[0110] Interest of virtual object (IVO) 1110 refers to the degree of interest the HMD user is likely to have in the displayed virtual object. For example, if a particular virtual object is a rare item, the user's rapid movement towards the virtual object is expected. Therefore, in case of high interest of virtual object, the weight score is increased. For example, an item (monster) that has a low frequency of appearance in a game and may earn massive experience points (XP) at the time of acquisition will be of great interest to the user. Therefore, the user's motion toward the item to obtain the item is expected.
[0111] Degree of pixel change (DPC) 1112 is a change in pixel position of a virtual object within a predetermined time in the virtual space. That is, it indicates the speed at which a virtual object moves in a 360-degree virtual environment. Various motions of the HMD user are expected due to pixel changes in the virtual space. For example, if a virtual object is zooming past at a high speed, the HMD user is likely to jump away out of surprise or to take a large lunge at a high speed in an attempt to grab the virtual object. Therefore, the larger the degree of pixel change value, the higher the weight score will be.
[0112] Display time (DT) 1114 is the amount of time from when a virtual object appears in the virtual space until it disappears. Logically, movement for accessing or interacting with the virtual object may only occur during the limited display time. The shorter the limited display time, the greater the possibility of the user immediately approaching or moving toward the virtual object. For at least this reason, short display times receive higher weight scores. In the example table, the weight is given on the assumption that the shorter the time, the greater the user's motivation. However, if the limited time is so short that the user abandons pursuit, a low weight may be given.
[0113] Degree of interaction induction (DII) 1116 represents a degree of inducing a user's motion or interaction with respect to a virtual object. In some games, predetermined interaction methods (catching, hitting, throwing, etc.) are required of the user and may vary depending on the attributes of the virtual object. The degree of interaction induction varies depending on the predetermined interaction method. The larger the degree of interaction induction, the greater the probability of collision with an external user. If a virtual object (e.g., a Ghost Character) that surprises the user appears, the user movement may occur mainly in the opposite direction of the virtual object.
[0114] The geographic coordinate position (GCP) (or location) 1118 is the polar coordinate at which the virtual object is located in the 360-degree virtual space. If a virtual object is closer to the external user, the weight of the geographic coordinate position is increased.
[0115] The weight scores of the characteristic display parameters are obtained and an average is calculated, generating an interactivity score that reflects the degree of motion of the wearer of the HMD. Eq. 1 shows an example formula for calculating interactivity:

Interactivity = (PS Score + IVO Score + DPC Score + DT Score + DII Score + GCP Score) / 6    Eq. 1
[0116] For some embodiments, calculating the level of interactivity for a virtual object may comprise calculating an average of one or more characteristic display parameters for the virtual object.
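A short Python sketch of Eq. 1, assuming the six weight scores have already been derived from the characteristic display parameters; the specific score values below are illustrative only and are merely chosen so that the average lands near the 71.7 of the example object "A" in FIG. 11.

```python
def interactivity(weight_scores):
    """Average the characteristic-display-parameter weight scores (Eq. 1)."""
    return sum(weight_scores.values()) / len(weight_scores)

# Illustrative weight scores for a hypothetical virtual object.
scores = {"PS": 80, "IVO": 90, "DPC": 70, "DT": 60, "DII": 80, "GCP": 50}
print(interactivity(scores))  # approximately 71.67
```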
[0117] For the table in FIG. 11, the interactivity of virtual object "A" may be calculated as shown in Eqs. 2 and 3:

Interactivity of Virtual Object "A" = (PS(400,400) + IVO(90) + DPC(300) + DT(20) + DII(Swing) + GCP(XA, YB, ZC)) / 6    Eq. 2

Interactivity of Virtual Object "A" = (PS Score + IVO Score + DPC Score + DT Score + DII Score + GCP Score) / 6 = 71.7    Eq. 3
[0118] The interactivity of virtual objects "B," "C," and "D" may be calculated with calculations similar to Eq. 2 and Eq. 3. Among the four virtual objects in the table, the interactivity of the virtual object "A" is the highest. Therefore, for this example, the virtual frame including the virtual object "A" is set as the probable-collision frame. For some embodiments, determining the probable-collision frame may include selecting the probable-collision frame that includes a virtual object with the largest (or highest) calculated interactivity.
[0119] The characteristic display parameters for the virtual objects are based on the rules of the virtual content. For example, a characteristic display parameter may be constructed by referring to a table or a chart that specifies the attributes of each character of a game. Tables and charts may be in code in the game or in separate database files. Table and chart data for rules of the virtual content may be received from a server or from local storage of the HMD.
[0120] Some examples of characteristic display parameters 1104 and example weight scores 1108, 1110, 1112, 1114, 1116, 1118 to be used in determining an example interactivity 1106 are presented in FIG. 11 in accordance with some embodiments, but other example parameters (which, for some embodiments, may be parameters other than characteristic display parameters 1104) and associated weight scores (and assumptions underlying other example parameters and associated weight scores) as well as relative weights may be used in these and other embodiments.
[0121] FIG. 12 depicts an example top view 1200 for some embodiments having an overlap area 1202 that is used to determine a geographic coordinate position (GCP) weight score for some
embodiments. More specifically, the overlap region 1202 represents the intersection of the two users' respective motion boundaries 1204, 1206. The overlap 1202 is identified as the region where there is a possibility of collision (or, for some embodiments, where the possibility of collision is above a threshold). The weight of the virtual frame 1208 in the virtual environment corresponding to the overlapping area 1202 in the actual space is increased. In this example, the motion boundary 1204 of the HMD wearer is the radius of the first predetermined distance, and the motion boundary 1206 of the external user is a radius determined by considering the arm length of the external user. The arm length of the external user may be measured or estimated by measuring a height of the external user with the HMD sensor (depth camera).
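One way to test for the overlap region of FIG. 12 is a simple circle-intersection check between the two motion boundaries. The sketch below, with hypothetical positions, radii, and boost value, only illustrates the geometric idea of increasing the GCP weight when the boundaries intersect.

```python
import math

def motion_boundaries_overlap(hmd_pos, hmd_radius, user_pos, user_radius):
    """Return True if the two circular motion boundaries intersect."""
    dx = hmd_pos[0] - user_pos[0]
    dy = hmd_pos[1] - user_pos[1]
    return math.hypot(dx, dy) < (hmd_radius + user_radius)

def gcp_weight(base_weight, overlaps, boost=20):
    """Increase the geographic-coordinate-position weight when the virtual
    frame corresponds to the overlap region (boost value is illustrative)."""
    return base_weight + boost if overlaps else base_weight

# HMD wearer boundary = first predetermined distance (4 m); external user's
# boundary approximated here by an assumed arm length of 0.7 m.
overlap = motion_boundaries_overlap((0.0, 0.0), 4.0, (4.5, 0.0), 0.7)
print(gcp_weight(50, overlap))
```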
[0122] FIG. 13 depicts a chart 1300 that specifies attributes of each virtual object 1304 in a VR game for some embodiments. In particular, FIG. 13 depicts a chart 1300 that specifies a rarity 1302 (which corresponds with the IVO characteristic display parameter 1110) for each character in the game.
[0123] FIG. 14 depicts a flow chart 1400 of an example method for selecting a probable-collision frame for some embodiments. A 360-degree virtual space is divided 1402 into a plurality of virtual frames. A virtual objects' characteristic display parameter list is obtained 1404 from a server (which may include, for some embodiments, checking whether there are virtual objects in the virtual space and receiving characteristic display parameters of present virtual objects from the server). For some embodiments, each parameter for calculating interactivity may be predefined in a table. Weight scores for the virtual objects may be collected 1406 by referring to a parameter table (which may use the characteristic display parameters). The interactivity of each virtual object may be calculated 1408 using a formula. Different weighting methods may be applied depending on characteristic display parameters. A virtual object with the highest interactivity may be obtained 1410. The location of the obtained virtual object is checked and a virtual frame that includes the virtual object is chosen (or selected) 1412 as a probable-collision virtual frame.
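The selection flow of FIG. 14 can be sketched as a small routine that takes the object with the highest interactivity and returns the virtual frame containing it. The data structures and the interactivity values for objects "B," "C," and "D" below are hypothetical; only the 71.7 for object "A" comes from the example in FIG. 11.

```python
def select_probable_collision_frame(frame_objects, interactivity_of):
    """frame_objects: mapping of virtual-frame id -> list of virtual-object ids.
    interactivity_of: mapping of virtual-object id -> interactivity score."""
    # Find the virtual object with the highest interactivity, then return
    # the virtual frame that contains it (FIG. 14, steps 1410-1412).
    best_object = max(interactivity_of, key=interactivity_of.get)
    for frame_id, objects in frame_objects.items():
        if best_object in objects:
            return frame_id, best_object
    return None, best_object

# Illustrative interactivity scores similar in spirit to FIG. 11.
scores = {"A": 71.7, "B": 55.0, "C": 48.3, "D": 40.0}
frames = {"frame_1": ["A", "B"], "frame_2": ["C"], "frame_3": ["D"]}
print(select_probable_collision_frame(frames, scores))  # ('frame_1', 'A')
```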
[0124] FIG. 15 depicts an example scenario 1500 with an external user 1502 at a second distance 1504 for some embodiments. A first predetermined distance 1514 may be calculated for some
embodiments as a distance between an HMD wearer 1512 and an external user 1502. The second distance 1504 is the sum of a critical distance 1508 and an adjustable distance 1510. If the external user 1502 is detected to be within the first predetermined distance 1514, the projection unit projects the rendered probable-collision frame 1506. The second distance 1504 (which is the sum of the critical distance 1508 and the adjustable distance 1510) is calculated by referencing physique information of the HMD wearer 1512 as well as attributes of the virtual object.
[0125] The critical distance defines a radius of a region in which collisions occur with a very high probability regardless of the attributes of the virtual object or the virtual frame. The critical distance 1508 is determined based on physical profiles such as the arm length of the HMD user 1512. In one embodiment, only the arm length information is considered (or used for calculations) because the HMD user 1512 is assumed to be in a standing position. For some embodiments, the critical distance may be a threshold distance that indicates a higher probability of a collision between the HMD user and an external user or viewer.
[0126] The adjustable distance 1510 is determined at least in part by calculating a physical motion area ratio. The physical motion area ratio is the average of the interactivity value and user profile value. An interactivity value is discussed in relation to FIG. 11, and a user profile value is discussed in relation to FIG. 17. Because the size of the physical motion area ratio increases or decreases according to a user profile value and the interactivity value of the probable-collision frame, the adjustable distance is dependent on the attributes of the selected probable-collision frame. Example distances and terminology for distances as well as what the distances (and combinations of distances) represent are merely examples in accordance with some embodiments and other distances may be, e.g., used, defined, and/or characterized.
[0127] FIG. 16 depicts a scenario 1600 for a selection of a surface for projecting content for some embodiments. The HMD 1602 is configured to measure the distance from the HMD 1602 to the wall 1604 and the floor 1606 using a camera (depth camera) or an IR sensor, and to project virtual content on the selected projection surface (or plane). If multiple projectable surfaces are detected, the selected projection surface (e.g., final projection surface) may be determined by, e.g., (i) comparing the distance between the projection surface and the projector and (ii) referencing the projectable distance of the projector. For some embodiments, the selected projection surface may be selected based on one or more of the following example determinations: determining whether a surface perpendicular to the ground is within range of a projector, determining the distance between the ground and a projector, determining whether a surface perpendicular to the ground is a flat surface, and determining whether the ground is a flat surface.
[0128] Alternatively, virtual contents may be projected to a selected position where the contents may be, e.g., most easily viewed by the external viewer. For example, if a wall positioned to the left of the HMD wearer is selected as a potential projection plane, and if the external viewer is also located on the left side of the HMD wearer and is looking at the HMD wearer, the left wall is not the final projection surface.
Instead, the floor surface between the HMD wearer and external viewer may be selected as the projection surface. In some embodiments, the rendered VR content is projected in the direction of approach of the external user, thereby causing a collision-mitigating interference between the external user and the wearer of the HMD.
[0129] FIG. 17 depicts a table 1700 of example user profile parameters 1702, corresponding weight scores 1704, and calculated user profile score 1706 for four HMD users for some embodiments. If the external user is positioned within the second distance, the rendered probable-collision frame may be projected onto the real world. The rendered probable-collision frame may depict the virtual object or an abstract representation of motion information. In some embodiments, the probable-collision frame is rendered based on a physical motion area ratio that is calculated by referring to the interactivity of the probable-collision frame and the HMD user's profile information.
[0130] The user's profile information 1702 may include quantitative body information (or user profile dimensional body data) such as gender, height, weight, leg length, and the user's movement habits. The weight is applied by comparing the user's body information with the average value of the body information. The weighted scores for the user profile are summed and added to the interactivity of the probable-collision frame.
[0131] For gender 1708, men are given a larger static weight score than women to account for greater average mobility. For height 1710, weight 1712, and leg length 1714, these weight scores are given based on an average value of each measurement. In the case of user 1 and user 2, the median score was given because each has the average physical condition of a male or female, respectively. For motion habit history 1716, this weight score is assigned in view of the magnitude of the movement.
[0132] According to some embodiments of a weighting method, a score is given to each attribute value and an average is obtained. Eq. 4 shows an example formula for calculating profile information:

Profile Information = (Gender Score + Height Score + Weight Score + Leg Length Score + Motion Habit History Score) / 5    Eq. 4
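A minimal Python sketch of Eq. 4, assuming the five user-profile weight scores have already been assigned. The score values below are hypothetical and merely chosen to average to the 68 used for user 1 in this example.

```python
def profile_information(weight_scores):
    """Average the user-profile weight scores (Eq. 4)."""
    return sum(weight_scores.values()) / len(weight_scores)

# Hypothetical weight scores for a user; they average to 68, the profile
# information value used for user 1 in this example.
user_1_scores = {
    "gender": 70,
    "height": 65,
    "weight": 65,
    "leg_length": 70,
    "motion_habit_history": 70,
}
print(profile_information(user_1_scores))  # 68.0
```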
[0133] For some embodiments, calculating profile information for a user may include calculating an average of one or more user profile parameters for the user. For user 1, Eqs. 5 and 6 show an example calculation of profile information, which may be added to the interactivity of virtual objects for some embodiments:

Profile Information of User 1 = (Gender Score(Male) + Height Score(170) + Weight Score(68) + Leg Length Score + Motion Habit History Score(Swing)) / 5    Eq. 5

Profile Information of User 1 = (Gender Score + Height Score + Weight Score + Leg Length Score + Motion Habit History Score) / 5 = 68    Eq. 6

[0134] The interactivity value of the selected probable-collision frame (for virtual object "A") is 71.7 (shown in FIGs. 10 and 11), and the profile information value of user 1 is 68 (Eq. 6). As shown in Eqs. 7 and 8, the physical motion area ratio for some embodiments may be the average of an interactivity value (for a virtual object) and a profile information value:

Physical Motion Area Ratio = (Interactivity + Profile Information) / 2    Eq. 7

Physical Motion Area Ratio = (71.7 + 68) / 2 = 69.85    Eq. 8
[0135] FIG. 17 shows a table 1700 of user profile parameters 1702 and a profile information value 1706 for HMD users 1718. Some examples of user profile parameters 1702 and example weight scores 1708, 1710, 1712, 1714, 1716 to be used in determining an example profile information value 1706 (and, for example, if averaged with an example interactivity score, an example physical motion area ratio) are presented in FIG. 17 in accordance with some embodiments, but other example user profile parameters (which, for some embodiments, may be parameters other than user body information) and associated weight scores (and assumptions underlying other example parameters and associated weight scores) may be used. For some embodiments, other physical motion area ratio formulae and determinations may be used.
[0136] FIG. 18 depicts a scenario 1800 for an HMD projecting virtual contents based on a physical motion area ratio 1806 for some embodiments. An example physical motion area ratio 1806 is the average of the interactivity 1804 and the profile information value 1802 and may be stored as reference information for rendering of the probable-collision frame. The area wherein the rendered image is projected is proportional to the physical motion area ratio 1806.
[0137] According to an example, the HMD user's arm length may be assumed to be 0.69 meters and may serve as the critical distance. If the example first predetermined distance 1808 is set to four meters, the physical motion area ratio corresponding to 69.85 may be used to calculate an adjustable distance as shown in example Eqs. 9 and 10:
Adjustable Distance = Physical Motion Area Ratio * (First Predetermined Distance - Critical Distance) / 100    Eq. 9

Adjustable Distance = 69.85 * (4 meters - 0.69 meters) / 100 = 2.312 meters    Eq. 10
[0138] The second distance 1810 is set to 3.002 meters according to example Eqs. 11 and 12:
Second Distance = Critical Distance + Adjustable Distance Eq. 11
Second Distance = 0.69 meters + 2.312 meters = 3.002 meters Eq. 12
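The chain of calculations in Eqs. 7 through 12 can be reproduced with a few lines of Python. The numbers below are the same example values used in the text (interactivity 71.7, profile information 68, first predetermined distance 4 m, critical distance 0.69 m); the helper names are illustrative, not part of the disclosed apparatus.

```python
def physical_motion_area_ratio(interactivity, profile_information):
    # Eq. 7: average of the interactivity and profile-information values.
    return (interactivity + profile_information) / 2

def adjustable_distance(ratio, first_predetermined_distance, critical_distance):
    # Eq. 9: scale the remaining clearance by the physical motion area ratio.
    return ratio * (first_predetermined_distance - critical_distance) / 100

def second_distance(critical_distance, adj_distance):
    # Eq. 11: critical distance plus adjustable distance.
    return critical_distance + adj_distance

ratio = physical_motion_area_ratio(71.7, 68)       # 69.85 (Eq. 8)
adj = adjustable_distance(ratio, 4.0, 0.69)        # about 2.312 meters (Eq. 10)
print(ratio, adj, second_distance(0.69, adj))      # second distance about 3.002 meters (Eq. 12)
```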
[0139] For some embodiments, while the external user is, e.g., 3.002 meters or less from the center of the HMD wearer, a projector unit projects the rendered probable-collision frame.
[0140] The particular distance values used in the examples disclosed herein are merely examples and other distance values may be used. Likewise, it should be understood that the particular distance formulations and relationships are merely examples and other distance formulations and relationships may be used. [0141] FIG. 19 depicts a comparison 1900 for some embodiments of the first distance 1902 and the second distance 1904 if the physical motion area ratio has a maximum value. If the physical motion area ratio is equal to 100, the first predetermined distance 1902 may be set equal to the second distance 1904. In calculating an adjustable distance 1908, Eq. 9 shows the critical distance 1906 being subtracted from the first predetermined distance 1902 so that the first predetermined distance 1902 and the second distance 1904 are equal if the physical motion area ratio has an extreme (maximum) value of 100.
[0142] FIGs. 20 and 21 illustrate embodiments in which an image is distorted during rendering and projected so that an external user may view the image without distortion (e.g., if projected on a surface that may be oblique with respect to the projector and/or the external viewer). For some embodiments, the distortion of the virtual object may be performed at the time of rendering but for other embodiments, distortion of the virtual object may be performed at the projector unit.
[0143] FIG. 20 depicts an environment 2000 for a virtual object 2002 that is rendered with distortion and projected to appear without distortion if viewed for some embodiments. The virtual object 2002 in the selected probable-collision frame is rendered and projected. The virtual object 2002 may be rendered and projected so that the virtual object 2002 is not distorted and is properly oriented if viewed by an external user 2004.
[0144] FIG. 21 depicts a virtual object 2102 that is rendered with distortion to be projected at multiple viewing angles for some embodiments. FIG. 21 shows that rendering a virtual 3D object 2102 and projecting it onto a 2D surface may appear realistic only from a particular viewpoint. The following example process relates to a method 2100 for enabling VR content to be viewed at a viewpoint of an external viewer 2104. A gaze position of the external viewer 2104 is mapped to the virtual world (or space) 2106. The HMD projects a view 2108 for looking at the floor of the VR environment as seen from the virtual gaze position. The view 2108 may be of the real world from an external viewer's view. The real world - external viewer's view 2108 may show the projected image rotated in the direction of the external viewer 2104. The projected image may have a distorted horizontal and vertical ratio of the image, as shown in the real world - ancillary view 2110. In addition, by distorting the horizontal and vertical ratio of the image, the real world (or environment) - ancillary view 2110 may match the virtual environment (or world) view 2106. The view of the real world may be seen from a top view 2112.
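As a rough illustration of the horizontal/vertical-ratio distortion described for FIG. 21, the following snippet rotates a floor-projected image toward the external viewer and stretches it along the viewing direction to compensate for foreshortening at the viewer's elevation angle. This is a simplified sketch under assumed geometry (flat floor, pinhole viewer, no projector-side keystone correction), not the claimed rendering pipeline.

```python
import math

def distortion_for_viewer(viewer_xy, viewer_eye_height, projection_center_xy):
    """Return (rotation in degrees, stretch factor along the viewing direction).

    The image is rotated to face the viewer and stretched by 1/sin(elevation)
    so that its apparent aspect ratio is preserved from the viewer's position.
    """
    dx = projection_center_xy[0] - viewer_xy[0]
    dy = projection_center_xy[1] - viewer_xy[1]
    rotation_deg = math.degrees(math.atan2(dy, dx))   # face the image toward the viewer
    ground_distance = math.hypot(dx, dy)
    elevation = math.atan2(viewer_eye_height, ground_distance)
    stretch = 1.0 / max(math.sin(elevation), 1e-6)    # compensate foreshortening
    return rotation_deg, stretch

# Viewer standing 2.5 m away with eyes at 1.6 m; image centered 1 m in front of the HMD user.
print(distortion_for_viewer((2.5, 0.0), 1.6, (1.0, 0.0)))
```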
[0145] FIG. 22 depicts an environment 2200 for an HMD 2202 projecting a heat map 2204 indicating a collision area for some embodiments. A hazard area is rendered as a border / plane so that the position of the second distance 2206 (the boundary indicating the possibility of collision) may be more clearly observed. In addition, a gradation effect may indicate a degree of danger and may be applied to form a heat map 2204.
[0146] For some embodiments, the projected interface may be displayed in red, which generally indicates dangerous conditions, and yellow, which means that it is less dangerous but not completely safe. These colors in the heat map 2204 are represented as a series of lines with increasing thickness to indicate a gradual change from red to yellow. As the external user 2208 moves towards the HMD wearer, the yellow border gradually changes to a red border. Regions close to the HMD user (e.g., within the critical distance) are red because a collision may occur with a high probability there. For some embodiments, a heat map may indicate a probability of collision with a user. Areas associated with higher probabilities of a collision with the user may be colored in red (or another color), while areas associated with lower probabilities of a collision with the user may be colored in yellow (or another color).
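A minimal sketch of the heat-map gradation of FIG. 22, assuming the danger color depends only on how far a floor point is from the HMD wearer relative to the critical and second distances; the color names and blend scheme are illustrative.

```python
import math

def heat_map_color(point_xy, hmd_xy, critical_distance, second_distance):
    """Map a floor point to a danger color based on its distance to the HMD wearer."""
    d = math.hypot(point_xy[0] - hmd_xy[0], point_xy[1] - hmd_xy[1])
    if d <= critical_distance:
        return "red"          # collision very likely within the critical distance
    if d <= second_distance:
        # Blend from red (near) to yellow (far) across the adjustable distance.
        t = (d - critical_distance) / (second_distance - critical_distance)
        return f"gradient(red->yellow, {t:.2f})"
    return "none"             # outside the projected hazard area

print(heat_map_color((1.5, 0.0), (0.0, 0.0), 0.69, 3.002))
```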
[0147] FIG. 23 depicts a set of environments 2300 with a projection 2302 of a probable-collision frame that varies as an external user 2304 changes position for some embodiments. A physical motion area ratio is calculated for three changes of a probable-collision frame. While the external user 2304 is moving within the second distance, the probable-collision frame having the highest interactivity may change because of the change of the position of the external user. Therefore, if the external user 2304 changes position, the external projection provider may determine a virtual object most relevant to a potential collision (e.g., a virtual object with a newly updated highest interactivity) and set the probable-collision frame to the virtual frame containing the virtual object. The user profile may be recalculated to reset, render, and project the physical motion area. At this time, relative motion of the external user may be detected through a motion sensor, such as a depth camera mounted on the HMD.
[0148] For some embodiments, a level of interactivity (e.g., an interactivity value) is a measure of a likelihood that a user will make a motion toward (or, e.g., away from, or otherwise interact with or attempt to avoid) a virtual object, e.g., displayed to the user. This level of interactivity (e.g., interactivity value, e.g., likelihood measure) may be, e.g., determined on an absolute basis with respect to a particular virtual object, or, e.g., determined with respect to other virtual objects in a user's view (or, e.g., that might be about to be presented to a user) that might contend for the user's attention. Higher interactivity levels or values may indicate a higher likelihood that a user may interact with the virtual object, and lower interactivity levels or values may indicate a lower likelihood that a user may interact with the virtual object.
For some embodiments, a separate interactivity value may be determined (e.g., calculated or estimated) for each virtual object. For some embodiments, determining (e.g., calculating) a level of interactivity for a virtual object within a virtual space may include determining a likelihood of the user interacting with the respective virtual object within the virtual space. For some embodiments, determining a level of interactivity for the virtual object within a virtual space may include calculating an interactivity value. In some embodiments, the interactivity value may include an estimated likelihood of a user moving toward the virtual object within the first virtual space. For some embodiments, the level of interactivity may be a proxy for a level of attractiveness of the virtual object to the user.
[0149] FIG. 24 is a flowchart 2400 for example processes for some embodiments for rendering and projecting virtual objects if an external user moves while within a second distance of an HMD wearer. For reference, FIG. 23 is an example of projecting a virtual object if additional movement or position changes occur while the external user is within the second distance. Although the probable-collision frame may be determined based on an external user being less than a first predetermined distance away from the HMD wearer, the external user may generate additional movement after being within a second distance of the HMD wearer.
[0150] FIG. 24 is similar to FIG. 2 with the addition of a few more example processes for some embodiments. For some embodiments, VR contents are displayed 2402 on an HMD. An external user may be monitored 2404 and detected 2406 and a distance between an external user and an HMD wearer may be calculated 2408. The distance between an HMD user and an external user may be determined 2410, and if the distance is less than a first predetermined distance, pixel data may be received 2412 from a virtual pixel storage via a network. Interactivity for a characteristic display parameter of a virtual object may be calculated 2414 based on pixel data. A probable-collision frame selected from at least one virtual frame is determined 2416 based on interactivity for some embodiments. Profile information for a user profile is calculated 2418. A physical motion area ratio for a set of profile information and an interactivity are calculated 2420. A probable-collision frame is rendered 2422 to nullify (or avoid) a collision between an external user and an HMD wearer. The distance between an HMD wearer and an external user is calculated 2424.
[0151] If an external user is determined 2426 to be less than a second distance away from the HMD wearer, the distance between the external user and the HMD wearer is saved 2428 as the previous distance. Otherwise, control returns to determine 2410 if the distance is less than the first predetermined distance. The projection device is adjusted 2430 to face the selected medium (or surface). The rendered probable-collision frame is projected 2432. A new distance between the HMD wearer and an external user is calculated 2434. The new distance is compared 2436 with the previous distance, and if the new distance is different from the previous distance, the process cycles back to repeat receiving 2412 pixel data from storage via a network. Otherwise, the process ends.

[0152] FIG. 25 depicts an environment 2500 for an HMD 2502 projecting a plurality of probable-collision frames 2504 based on respective physical motion area ratios 2506 for some embodiments. In some embodiments, information about a plurality of probable-collision frames is rendered and projected on a medium (or surface) surrounding the HMD user. Each probable-collision frame 2504 may be displayed visually differently, in a manner, e.g., commensurate with its corresponding physical motion area ratio. In at least this way, in some embodiments, an external user 2508 may be notified of the possibility (or probability) of collision with multiple virtual objects (which may occur simultaneously). In some
embodiments, an audible alert is provided toward an external user of a possible collision.
[0153] FIGs. 26A and 26B depict an environment 2600 for some embodiments of a projection 2602 for preventing a collision with two external users 2604, 2606. For some embodiments, the second distance 2608 (such as shown in FIG. 26A) may be calculated in forward, backward, leftward, and rightward directions from an HMD user's perspective. The second distance 2610 (such as shown in FIG. 26B) may be calculated as a series of values that are calculated based on the adjustable distance calculations for a series of virtual frames.
[0154] For some embodiments, three concentric regions around the HMD user may be projected, which may be labeled as safe, warning, and danger to indicate a probability of collision with one or more external users. For FIGs. 26A and 26B, two external users 2604, 2606 are shown, though for some embodiments, more than two external users may exist. A vector radiating away from the HMD user in the x- y plane with a length equal to the second distance value may be divided into three regions. The outer third of the second distance vector may designate a safe region. The middle third of the second distance vector may designate a warning region. The inner third of the second distance vector may designate a danger region. The safe, warning, and danger regions may be labeled and colored with yellow, orange, and red projections.
[0155] For some embodiments, the safe, warning, and danger regions may be non-uniform shapes with the second distance being calculated to have different values in different directions radiating away from the HMD user. See, e.g., FIG. 26B. The three regions may be labeled with text, such as "Safe," "Warning," and "Danger," or with letters or numbers, such as A, B, C or 1, 2, 3. Additionally, the regions may be designated with a pattern or grayscale gradation. For some embodiments, the second distance may be calculated for a ring of virtual frames that encircle the HMD user.
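The three-region scheme described above can be sketched as a simple classifier that divides the second-distance vector in a given direction into thirds. This is a hedged illustration; the per-direction second-distance values and the region colors are assumed inputs.

```python
def region_label(distance_from_hmd, second_distance_in_this_direction):
    """Classify a position along a direction into danger/warning/safe thirds."""
    if distance_from_hmd > second_distance_in_this_direction:
        return "outside projected regions"
    third = second_distance_in_this_direction / 3
    if distance_from_hmd <= third:
        return "danger"   # inner third, e.g., projected in red
    if distance_from_hmd <= 2 * third:
        return "warning"  # middle third, e.g., projected in orange
    return "safe"         # outer third, e.g., projected in yellow

# Example: second distance of 3.0 m toward an approaching external user.
for d in (0.5, 1.5, 2.8, 3.5):
    print(d, region_label(d, 3.0))
```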
[0156] FIG. 27 depicts an example time sequence diagram 2700 of an example method and apparatus for projecting collision-deterrents in virtual reality viewing environments for some embodiments. The HMD 2702 displays 2712 VR contents. The HMD 2702 also receives 2714 external user information from the external environment 2710. The HMD 2702 detects 2716 the external user and sends 2718 sensor data for the external user to the external projection provider 2704. The HMD 2702 sends 2718 sensor data to the external projection provider 2704 so that the external projection provider 2704 may calculate 2720 the distance between the HMD 2702 user and the external user. Following the distance calculation, the external projection provider 2704 compares 2720 the calculated distance with the first predetermined distance.
[0157] If the calculated distance is less than or equal to the first predetermined distance, the external projection provider 2704 receives 2724 pixel data 2722 and characteristic display parameters 2726 from a server 2708. The external projection provider 2704 calculates 2728 interactivity scores based on the characteristic display parameters 2726 and determines 2730 a probable-collision frame based on the interactivity scores.
[0158] In some embodiments, the external projection provider 2704 receives sensor information from the HMD 2702 and calculates the probability of various virtual frames colliding with the external user. For some embodiments, the probable-collision frame is extracted from among a plurality of virtual frames existing in the virtual space between the external user and the HMD wearer.
[0159] The external projection provider 2704 receives 2732 a user profile from a server 2708 and calculates 2734 profile information. The external projection provider 2704 calculates 2736 a physical motion area ratio using both the interactivity score and the profile information. After this, the external projection provider 2704 renders 2738 the probable-collision frame to nullify (or avoid) a potential collision between users.
[0160] The HMD 2702 sends 2740 more sensor data to the external projection provider 2704, and the external projection provider 2704 uses this sensor data to calculate 2742 a distance between an HMD wearer and an external user. The external projection provider 2704 compares 2744 this updated distance with the second distance. If the updated distance is less than the second distance, the external projection provider 2704 sends 2746 projection information (e.g., projection direction and rendered content) to the projection unit 2706. The projection unit 2706 adjusts 2748 the orientation and settings of the projection device and projects 2750 the rendered probable-collision frame.
[0161] FIG. 28 depicts an environment 2800 for an example projection direction in response to detecting incoming objects for some embodiments. In some embodiments, if an object is moving toward the HMD wearer, a projection direction may be selected to be facing the oncoming object. For example, if an object 2804 comes from the front of the HMD wearer 2802, the projection unit may face the forward direction. For example, if an object 2806 comes from the right side of the HMD wearer 2802, the projection unit may face the right. Of course, the projector may be tilted up and down to project onto the ceiling and floor between the HMD user (or HMD viewer) 2802 and the moving object.
[0162] FIG. 29 depicts an environment 2900 for a non-moving virtual object 2902 projected onto a floor surface for some embodiments. With stationary virtual objects, a top view 2904 of the content may be projected onto the floor surface. The fixed virtual object 2902 cannot directly interfere with the HMD wearer 2906, but the fixed virtual object has an indirect effect. For example, the HMD wearer 2906 walking through a virtual hallway or along a virtual wall will move in a confined manner. Therefore, information on the space to which the HMD wearer 2906 will and will not move is generated.
[0163] For some embodiments, only the top view 2904, which shows a cross-sectional area of the virtual object (e.g., a wall or road), is projected onto the floor around the HMD user. Additionally, richer content may be projected by visually distinguishing between areas where user motion occurs and areas where no user motion occurs. Furthermore, extra weight may be given to objects in the direction in which the user moves. For example, if an HMD wearer moves to the front, the direction of projection is shifted a little further toward the front side.
[0164] Projecting information about the movement of the HMD wearer interacting with a virtual object without projecting the virtual object itself may be performed in some embodiments. For example, based on the interaction attributes between the object and the HMD wearer and the motion prediction statistics, the HMD system may display information about the probability of motion of the HMD wearer, for example as a heatmap.
[0165] FIG. 30 depicts an environment 3000 for an HMD user 3002 selectively projecting virtual objects for some embodiments. Depending on selected privacy settings of the HMD user 3002, certain objects 3004 may be excluded from projection. Depending on inherent privacy tags of virtual objects, certain objects 3004 may be excluded from projection. Among the various objects existing in the virtual space, there may be things that the HMD wearer 3002 wants only himself (or herself) to see. For example, in some embodiments, a web page displaying an ID and password or other private data should be displayed only to the HMD wearer 3002. However, it may be appropriate to display and share information about photographs taken with friends to external viewers. For some embodiments, if an authority of the external viewer 3008 is superior to the HMD wearer 3002, all the objects 3004, 3006 in the virtual environment ("Virtual World") 3010 may be exposed to the outside ("Real World") 3012 regardless of the attribute of the object or HMD user preferences.
[0166] FIG. 31 depicts a multi-user VR projection system 3100 having a high-performance projector (or external projector) 3102 for some embodiments. In some embodiments, instead of, or in addition to, projecting the probable-collision frame using a projector mounted on the HMD, projection for multiple HMD wearers 3106, which may be viewed by external users 3112, is facilitated using a high-performance projector 3102. Depicted in FIG. 31 is a large space 3104 where several HMD wearers (or users) 3106 may experience virtual reality games at the same time. The high-performance projector 3102 may be installed near the ceiling to cover the whole room for some embodiments.
[0167] After an HMD measures information about the surrounding users, the corresponding sensor data 3114 is transmitted to, e.g., a cloud server 3108. For some embodiments, the cloud server 3108 may include an external projection provider that calculates and renders probable-collision frames. Projection information 3110 processed in the cloud server 3108 is transmitted to the high-performance projector 3102 which in turn projects the rendered probable-collision frames associated with each HMD wearer 3106.
[0168] FIG. 32 depicts an environment 3200 for a projection that indicates to an external user 3202 movement-likelihood information 3206, 3208, 3210 of an HMD wearer 3204 for some embodiments. In particular, FIG. 32 depicts an HMD wearer 3204 playing a VR game in a room having an external user 3202. For some embodiments, determinations are made for directions emanating from an HMD wearer 3204 regarding likelihood an HMD wearer will move in each direction. For some embodiments, determinations are made regarding likelihood of an HMD wearer to move left, right, and forward. A portion of the floor associated with each directional movement is shown in FIG. 32. For some embodiments, the projection on the floor may be colored, with red (shown with a first pattern of parallel lines in FIG. 32) used for an area associated with a direction an HMD wearer 3204 is likely to move and yellow (shown with a second pattern of parallel lines in FIG. 32) used for an area associated with a direction an HMD wearer 3204 is less likely to move.
[0169] FIG. 33 depicts an environment 3300 for an HMD wearer (or user) 3302 reacting to a moving virtual object 3304 while projecting a heat-map 3306 for some embodiments. In particular, FIG. 33 depicts the HMD user 3204 of FIG. 32 reacting to a moving virtual object 3304. The left-most circle of FIG. 33 shows a moving virtual object 3304 coming towards an HMD wearer 3302. The middle circle shows the HMD wearer 3302 moving to his or her left (to the right as shown to the reader) as the moving virtual object 3304 comes toward the HMD wearer 3302. The third circle shows the real-world relationship between an HMD wearer 3302 and an external user 3308 when the HMD wearer 3302 sees a moving virtual object 3304 moving past within the virtual world.
[0170] FIGs. 32 and 33 depict a single scenario from a plurality of viewing angles. In particular, FIGs. 32 and 33 depict an HMD user 3204, 3302 playing a VR game, if suddenly a moving object 3304 zooms past the right side of the HMD user 3302. An attribute indicating whether or not the object 3304 is good for achieving a given task in the game may be obtained. The HMD wearer 3204, 3302 may move towards the object 3304, towards the direction of incidence or avoid the object 3304 by moving in the opposite direction. Information about the area where the HMD wearer 3204, 3302 will move may be highlighted in red (shown with a first pattern of parallel lines in FIG. 32), and other areas may be marked in yellow (shown with a second pattern of parallel lines in FIG. 32). Thus, an external viewer 3202, 3308 may discern information about the potential movement of the HMD user 3204, 3302 and avoid collisions.
[0171] FIG. 34 depicts a projection of information about virtual walls in a VR scene 3400 for some embodiments. In particular, FIG. 34 depicts a VR view (virtual world) 3402, an external user's third person view (real world - external viewer's view) 3404, and a bird's-eye-view (real world - top view) 3406 of a VR scene 3400 displaying a confined corridor 3408. For this example, the HMD user 3410 will not move across the virtual wall object bounding the corridor 3408. Path information for the corridor 3408 may be projected on the floor surface of the real world. Information about the area where the HMD user 3410 will move (e.g., the corridor pathway 3412) may be highlighted in red (which is shown with a first pattern of parallel lines in FIG. 34), and other areas (e.g., all regions outside of the corridor walls 3414) may be marked in yellow (which is shown with a second pattern of parallel lines in FIG. 34).
[0172] FIG. 35 depicts an environment 3500 for an external user 3502 with high level permissions viewing the VR content for some embodiments. An external user 3502 who may be verified by the HMD system and who has higher authority than the HMD wearer 3504, may select the full contents of the VR scene to be projected to the outside world without any processing or modifying. For example, if an external user (or viewer) 3502 (e.g., a parent of a child) enters the vicinity of the HMD wearer (or user) 3504 (e.g., a child), the external viewer 3502 may request to view all contents and may be made aware of a situation in which the HMD wearer (or user) 3504 is viewing material that is not suitable for their age.
[0173] FIG. 36 shows a flowchart 3600 for some embodiments for projecting an indication of a probable-collision frame. For some embodiments, virtual reality (VR) content may be displayed 3602, in a virtual space, to a user of a head-mounted display (HMD) device. A level of interactivity for a virtual object within the virtual space may be determined 3604. In some embodiments, determining, e.g., a value, may include calculating (e.g., locally) the value. In some embodiments, determining, e.g., a value, may include, e.g., receiving the value, or retrieving the value (e.g., a previously calculated or stored value) from, e.g., a local or remote memory or storage. A probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity and location of the virtual object may be determined 3606. An indication of the probable-collision frame may be projected 3608 on a surface proximate to the user.

[0174] For some embodiments, a method may include: measuring a distance between a head-mounted display (HMD) user that is viewing a virtual reality (VR) scene and an external user; responsive to the external user being measured within a first predetermined distance, receiving (i) a set of virtual objects present in the VR scene and (ii) respective attribute information associated with each virtual object in the set; calculating a respective interactivity score for each virtual object in the set based at least in part on the corresponding attribute information; selecting a probable-collision frame of the VR scene based on the respective interactivity scores; determining a physical motion area ratio of the selected probable-collision frame based on the interactivity score of the virtual object located within the selected probable-collision frame and an HMD user profile score; calculating a second distance by summing an adjustable distance that is a function of the physical motion area ratio and a critical distance; rendering the probable-collision frame based on the determined physical motion area ratio; and responsive to the external user being measured within the second distance, projecting the rendered probable-collision frame between the HMD user and the external user.
[0175] For some embodiments, measuring the distance between the HMD user and the external user may include using depth sensors and/or image sensors embedded in the HMD.
[0176] For some embodiments, measuring the distance between the HMD user and the external user may include using depth sensors and/or image sensors external to the HMD.
[0177] For some embodiments, the first predetermined distance may include a safety clearance distance for preparing and rendering content that may be utilized to deter a physical collision with an external user.
[0178] For some embodiments, the first predetermined distance may be four meters.
[0179] For some embodiments, receiving the set of virtual objects present in the VR scene and respective attribute information associated with each virtual object in the set may include receiving from a server.
[0180] For some embodiments, the respective attribute information may include a set of characteristic display parameters, the set including a pixel size, an interest of the virtual object, a degree of pixel change, a degree of interaction induction, and a geographic coordinate position.
[0181] For some embodiments, calculating a respective interactivity score for each virtual object in the set based at least in part on the corresponding attribute information may include converting each characteristic display parameter into a respective weight score and averaging the weight scores.
[0182] For some embodiments, selecting the probable-collision frame of the VR scene based on the respective interactivity scores may include selecting a frame of the VR scene that includes a virtual object having a largest calculated interactivity.

[0183] For some embodiments, determining the physical motion area ratio of the selected probable-collision frame may include averaging the interactivity score and the HMD user profile score.
[0184] For some embodiments, the HMD user profile score may be received from the server.
[0185] For some embodiments, the HMD user profile score may be based on physical characteristics of the HMD user.
[0186] For some embodiments, the physical characteristics may include gender, height, weight, leg length, and a motion habit history.
[0187] For some embodiments, calculating a user profile score may include converting each physical characteristic into a respective weight score and averaging the weight scores.
[0188] For some embodiments, the critical distance may be a length of an arm of the HMD user.
[0189] For some embodiments, the adjustable distance may be the physical motion area ratio multiplied by a difference between the critical distance and the first predetermined distance.
[0190] For some embodiments, rendering the probable-collision frame based on the determined physical motion area ratio may include selecting a size to render the probable-collision frame based on the determined physical motion area ratio.
[0191] For some embodiments, rendering the probable-collision frame may include: mapping a point-of-view position of the external user to the virtual space; orienting the probable-collision frame to be facing the external user; and distorting the horizontal and vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
[0192] For some embodiments, rendering the probable-collision frame may include rendering the virtual object as the probable-collision frame.
[0193] For some embodiments, rendering the probable-collision frame may include rendering a motion-likelihood image as the probable-collision frame.
[0194] For some embodiments, rendering the probable-collision frame may include rendering a border having the second distance as the probable-collision frame.
[0195] For some embodiments, rendering the probable-collision frame may include rendering a heat map as the probable-collision frame.
[0196] For some embodiments, calculating the interactivity score may include referencing a game rule book on an as-needed basis.
[0197] For some embodiments, an apparatus may include: a head-mounted display (HMD) configured to display virtual reality (VR) content and send sensor data to an external projection provider; a server configured to send characteristic display parameters and user profile information to the external projection provider; the external projection provider configured to monitor a distance between the HMD and an external user, select a probable-collision frame when the distance is below a threshold value, and render the probable-collision frame; and a projection unit configured to adjust a display device and project the rendered probable-collision frame.
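The division of labor in this apparatus can be sketched as a single monitoring step of the external projection provider. The callables passed in stand for the sensing, frame-selection, rendering, and projection roles described above; their names are placeholders, not a defined API:

```python
def projection_provider_step(measure_distance_m, select_probable_collision_frame,
                             render_frame, project, threshold_m=4.0):
    """Run one monitoring cycle; returns True if a deterrent was projected."""
    if measure_distance_m() < threshold_m:          # external user is too close
        frame = select_probable_collision_frame()   # e.g., highest interactivity score
        project(render_frame(frame))                # projection unit adjusts and projects
        return True
    return False
```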
[0198] For some embodiments, a method may include: on a surface proximate to a user of a head-mounted display (HMD), projecting a display indicating a probable region of motion of the user; and adaptively changing the displayed probable region of motion in response to content presented on the head-mounted display (HMD).
[0199] For some embodiments, the projection may be performed by the HMD.
[0200] For some embodiments, the projection may be performed by a projector separate from the HMD.
[0201] For some embodiments, the projection may be performed only in response to detection of a person within a threshold distance of the user.
[0202] For some embodiments, the surface may be a floor.
[0203] For some embodiments, a method may include: displaying, in a first virtual space, a first virtual reality (VR) content to a first user of a first head-mounted display (HMD) device; determining a first level of interactivity for a virtual object within the first virtual space; determining a first probable-collision frame representing a first region with a first probability of entry by the first user based on the first level of interactivity for the virtual object and a location of the virtual object within the first virtual space; and projecting, on a surface proximate to the first user, a first indication of the first probable-collision frame.
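A hedged sketch of how the probable-collision frame could be derived from the interactivity level and the object's location follows; representing the region as a circle on the floor between the user and the object, with its radius scaled by interactivity, is an assumption for illustration only:

```python
def probable_collision_frame(user_pos, object_pos, interactivity,
                             min_radius_m=0.5, max_radius_m=2.0):
    """Return (center_x, center_y, radius) of a circular floor region the
    user is likely to enter while reaching toward the virtual object."""
    cx = (user_pos[0] + object_pos[0]) / 2.0
    cy = (user_pos[1] + object_pos[1]) / 2.0
    radius = min_radius_m + interactivity * (max_radius_m - min_radius_m)
    return (cx, cy, radius)
```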
[0204] For some embodiments, determining the first level of interactivity for the virtual object within the first virtual space may include calculating an interactivity value, wherein the interactivity value may include an estimated likelihood of the first user moving toward the virtual object within the first virtual space.
[0205] For some embodiments, a method may further include: displaying, in a second virtual space, a second virtual reality (VR) content to a second user of a second head-mounted display (HMD) device; determining a second level of interactivity for a second virtual object within the second virtual space;
determining a second probable-collision frame representing a second region with a second probability of entry by the second user based on the second level of interactivity for the second virtual object and a location of the second virtual object within the second virtual space; and projecting, on a second surface proximate to the second user, a second indication of the second probable-collision frame.
[0206] For some embodiments, projecting the first indication of the first probable-collision frame and projecting the second indication of the second probable-collision frame may be performed by a common projector, and the first virtual space may be connected to the second virtual space.
[0207] For some embodiments, determining the first probable-collision frame may be further based on dimensional body data for the first user.
[0208] For some embodiments, a method may further include determining an average of the dimensional body data for the first user, and determining the first probable-collision frame may be further based on the average of the dimensional body data for the first user.
[0209] For some embodiments, determining the first probable-collision frame may include selecting a first probable-collision frame that includes the virtual object within the first virtual space having a largest determined level of interactivity.
[0210] For some embodiments, a method may further include measuring a first distance between the first user and a first external user, wherein projecting the first indication of the first probable-collision frame is responsive to the first measured distance being less than a first threshold.
[0211] For some embodiments, a method may further include measuring a second distance between the first user and a second external user, and projecting the first indication of the first probable-collision frame may be responsive to at least one of the first measured distance and the second measured distance being less than a second threshold.
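A short sketch of this trigger condition, assuming user positions are available as 2D floor coordinates:

```python
import math

def should_project(user_pos, external_positions, threshold_m):
    """Project the indication if any external user is within the threshold distance."""
    return any(math.hypot(p[0] - user_pos[0], p[1] - user_pos[1]) < threshold_m
               for p in external_positions)
```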
[0212] For some embodiments, projecting the first indication of the first probable-collision frame may include rendering the first indication of the first probable-collision frame based on a point-of-view position of the first external user.
[0213] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a rendering of one of the one or more virtual objects.
[0214] For some embodiments, a method may further include adaptively changing the first indication of the first probable-collision frame in response to the first VR content displayed on the first HMD device.
[0215] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a rendering of a heat map that indicates a probability of collision with the first user.
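A heat map of this kind could be rasterized as in the sketch below; the Gaussian falloff around the user's position is an assumed way of encoding collision probability:

```python
import numpy as np

def collision_heat_map(width_px, height_px, user_px, sigma_px=60.0):
    """Return a (height_px, width_px) array in (0, 1]; hotter values indicate
    a higher assumed probability of collision with the HMD user."""
    ys, xs = np.mgrid[0:height_px, 0:width_px]
    d2 = (xs - user_px[0]) ** 2 + (ys - user_px[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma_px ** 2))
```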
[0216] For some embodiments, projecting the first indication of the first probable-collision frame may include projecting a warning image between the first user and another user in a real world space, wherein the other user may be within a threshold distance from the first user, and wherein the warning image may be configured to avoid a potential collision between the first user and the other user.
[0217] For some embodiments, an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a level of interactivity for one or more virtual objects within the virtual space; determining a probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0218] For some embodiments, a method may include: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0219] For some embodiments, a method may further include: determining a level of interactivity for one or more virtual objects within the virtual space, wherein determining the level of interactivity may include determining an average of one or more characteristic display parameters for each of the one or more virtual objects within the virtual space, wherein the characteristic display parameters may include one or more parameters selected from the group consisting of pixel size, interest of the virtual object, degree of pixel change, display time, degree of interaction induction, and location.
[0220] For some embodiments, a method may further include: determining a profile information value as an average of user profile dimensional body data for the user; and determining a physical motion area ratio value based on the level of interactivity and the profile information value, wherein determining the probable-collision frame may be based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space and the physical motion area ratio value.
[0221] For some embodiments, determining the physical motion area ratio value may include determining an average of the profile information value and the level of interactivity for one of the one or more virtual objects within the virtual space.
[0222] For some embodiments, projecting the indication of the probable-collision frame on the surface proximate to the user may include: selecting the surface proximate to the user, wherein the selected surface is located between the user and an external user outside of the virtual space within a threshold distance of the user; rendering the probable-collision frame; and projecting the rendered probable-collision frame on the selected surface.
[0223] For some embodiments, rendering the probable-collision frame may include: mapping a point-of-view position of the external user to the virtual space; orienting the probable-collision frame to be facing the external user; and distorting a horizontal ratio and a vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
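Selecting the projection surface between the two users could look like the following sketch; placing the indication at the midpoint, clamped to stay within a fixed offset of the HMD user, is an assumption:

```python
def select_projection_point(user_pos, external_pos, max_offset_m=1.0):
    """Pick a floor point between the HMD user and the external user."""
    mx = (user_pos[0] + external_pos[0]) / 2.0
    my = (user_pos[1] + external_pos[1]) / 2.0
    dx, dy = mx - user_pos[0], my - user_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > max_offset_m:                 # keep the cue near the HMD user
        scale = max_offset_m / dist
        mx, my = user_pos[0] + dx * scale, user_pos[1] + dy * scale
    return (mx, my)
```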
[0224] For some embodiments, projecting the indication of the probable-collision frame may include rendering a heat map as the probable-collision frame.
[0225] For some embodiments, an apparatus may include: a projector; a processor; and a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of: displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device; determining a probable-collision frame representing a region with a probability of entry by the user; and projecting, on a surface proximate to the user, an indication of the probable-collision frame.
[0226] For some embodiments, an apparatus may include the HMD device, and the HMD device may include the projector and the processor.
[0227] For some embodiments, the projector may be external to and separate from the HMD device.
EXAMPLE NETWORKS FOR IMPLEMENTATION OF THE EMBODIMENTS
[0228] A wireless transmit/receive unit (WTRU) may be used as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in some embodiments described herein.
[0229] Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity. Any of the disclosed real-time 3D content capture server embodiments and real-time 3D content rendering client embodiments may be implemented using one or both of the systems depicted in FIGs. 37 and 38. All other embodiments discussed in this detailed description may be implemented using either or both of FIG. 37 and FIG. 38 as well. Furthermore, various hardware and software elements required for the execution of the processes described in this disclosure, such as sensors, dedicated processing modules, user interfaces, important algorithms, etc., may be omitted from FIGs. 37 and 38 for the sake of visual simplicity.
[0230] FIG. 37 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments. FIG. 37 may be employed to execute any of the processes disclosed herein (e.g., the processes described in relation to FIGs. 2, 14, 24, and 27). As shown in FIG. 37, the WTRU 3702 may include a processor 3718, a communication interface 3719 including a transceiver 3720, a transmit/receive element 3722, a speaker/microphone 3724, a keypad 3726, a display/touchpad 3728, a non-removable memory 3730, a removable memory 3732, a power source 3734, a global positioning system (GPS) chipset 3736, and sensors 3738. It will be appreciated that the WTRU 3702 may include any subcombination of the foregoing elements while remaining consistent with an embodiment.
[0231] The processor 3718 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 3718 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 3702 to operate in a wireless environment. The processor 3718 may be coupled to the transceiver 3720, which may be coupled to the transmit/receive element 3722. While FIG. 37 depicts the processor 3718 and the transceiver 3720 as separate components, it will be appreciated that the processor 3718 and the transceiver 3720 may be integrated together in an electronic package or chip.
[0232] The transmit/receive element 3722 may be configured to transmit signals to, or receive signals from, a base station over the air interface 3716. For example, in one embodiment, the transmit/receive element 3722 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 3722 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 3722 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 3722 may be configured to transmit and/or receive any combination of wireless signals.
[0233] In addition, although the transmit/receive element 3722 is depicted in FIG. 37 as a single element, the WTRU 3702 may include any number of transmit/receive elements 3722. More specifically, the WTRU 3702 may employ MIMO technology. Thus, in one embodiment, the WTRU 3702 may include two or more transmit/receive elements 3722 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 3716.
[0234] The transceiver 3720 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 3722 and to demodulate the signals that are received by the transmit/receive element 3722. As noted above, the WTRU 3702 may have multi-mode capabilities. Thus, the transceiver 3720 may include multiple transceivers for enabling the WTRU 3702 to communicate via multiple RATs, such as UTRA and IEEE 802.11 , as examples.
[0235] The processor 3718 of the WTRU 3702 may be coupled to, and may receive user input data from, the speaker/microphone 3724, the keypad 3726, and/or the display/touchpad 3728 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 3718 may also output user data to the speaker/microphone 3724, the keypad 3726, and/or the display/touchpad 3728. In addition, the processor 3718 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 3730 and/or the removable memory 3732. The nonremovable memory 3730 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 3732 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 3718 may access information from, and store data in, memory that is not physically located on the WTRU 3702, such as on a server or a home computer (not shown).
[0236] The processor 3718 may receive power from the power source 3734 and may be configured to distribute and/or control the power to the other components in the WTRU 3702. The power source 3734 may be any suitable device for powering the WTRU 3702. As examples, the power source 3734 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
[0237] The processor 3718 may also be coupled to the GPS chipset 3736, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 3702. In addition to, or in lieu of, the information from the GPS chipset 3736, the WTRU 3702 may receive location information over the air interface 3716 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 3702 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0238] The processor 3718 may further be coupled to other peripherals 3738, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 3738 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a projector, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like. The peripherals 3738 may include one or more sensors, and the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor, a geolocation sensor, an altimeter, a light sensor, a touch sensor, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.
[0239] The WTRU 3702 may include a full duplex radio for which transmission and reception of some or all of the signals (e.g., associated with particular subframes for both the UL (e.g., for transmission) and downlink (e.g., for reception)) may be concurrent and/or simultaneous. The full duplex radio may include an interference management unit to reduce and/or substantially eliminate self-interference via either hardware (e.g., a choke) or signal processing via a processor (e.g., a separate processor (not shown) or via processor 3718). In an embodiment, the WTRU 3702 may include a half-duplex radio for which transmission and reception of some or all of the signals is limited to particular subframes for either the UL (e.g., for transmission) or the downlink (e.g., for reception).
[0240] FIG. 38 depicts an exemplary network entity that may be employed as a server in some embodiments. FIG. 38 may be employed to execute any of the relevant processes disclosed herein. As depicted in FIG. 38, network entity 3890 includes a communication interface 3892, a processor 3894, and non-transitory data storage 3896, all of which are communicatively linked by a bus, network, or other communication path 3898.
[0241] Communication interface 3892 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication,
communication interface 3892 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 3892 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 3892 may be equipped at a scale and with a configuration appropriate for acting on the network side, as opposed to the client side, of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 3892 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
[0242] Processor 3894 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
[0243] Data storage 3896 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used. As depicted in FIG. 38, data storage 3896 contains program instructions 3897 executable by processor 3894 for carrying out various combinations of the various network-entity functions described herein.
[0244] FIG. 39 is a diagram illustrating an example communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as voice, data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple wireless users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.
[0245] As shown in FIG. 39, the communications system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a RAN 104/113, a CN 106/115, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d, any of which may be referred to as a "station" and/or a "STA", may be configured to transmit and/or receive wireless signals and may include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a subscription-based unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, a hotspot or Mi-Fi device, an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. Any of the WTRUs 102a, 102b, 102c and 102d may be interchangeably referred to as a UE.
[0246] The communications systems 100 may also include a base station 114a and/or a base station
114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the CN 106/115, the Internet 110, and/or the other networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a gNB, a NR NodeB, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
[0247] The base station 114a may be part of the RAN 104/113, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals on one or more carrier frequencies, which may be referred to as a cell (not shown). These frequencies may be in licensed spectrum, unlicensed spectrum, or a combination of licensed and unlicensed spectrum. A cell may provide coverage for a wireless service to a specific geographical area that may be relatively fixed or that may change over time. The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In an embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and may utilize multiple transceivers for each sector of the cell. For example, beamforming may be used to transmit and/or receive signals in desired spatial directions.
[0248] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, centimeter wave, micrometer wave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).
[0249] More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104/113 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink (DL) Packet Access (HSDPA) and/or High-Speed UL Packet Access (HSUPA).
[0250] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A) and/or LTE-Advanced Pro (LTE-A Pro).
[0251] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as NR Radio Access, which may establish the air interface 116 using New Radio (NR).
[0252] In an embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement multiple radio access technologies. For example, the base station 114a and the WTRUs 102a, 102b, 102c may implement LTE radio access and NR radio access together, for instance using dual connectivity (DC) principles. Thus, the air interface utilized by WTRUs 102a, 102b, 102c may be characterized by multiple types of radio access technologies and/or transmissions sent to/from multiple types of base stations (e.g., a eNB and a gNB).
[0253] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.11 (i.e., Wireless Fidelity (WiFi)), IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
[0254] The base station 114b in FIG. 39 may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, an industrial facility, an air corridor (e.g., for use by drones), a roadway, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, LTE-A Pro, NR etc.) to establish a picocell or femtocell. As shown in FIG. 39, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the CN 106/115.
[0255] The RAN 104/113 may be in communication with the CN 106/115, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. The data may have varying quality of service (QoS) requirements, such as differing throughput requirements, latency requirements, error tolerance requirements, reliability requirements, data throughput requirements, mobility requirements, and the like. The CN 106/115 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 39, it will be appreciated that the RAN 104/113 and/or the CN 106/115 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104/113 or a different RAT. For example, in addition to being connected to the RAN 104/113, which may be utilizing a NR radio technology, the CN 106/115 may also be in communication with another RAN (not shown) employing a GSM, UMTS, CDMA 2000, WiMAX, E-UTRA, or WiFi radio technology.
[0256] The CN 106/115 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or the other networks 112. The PSTN 108 may include circuit- switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common
communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and/or the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired and/or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another CN connected to one or more RANs, which may employ the same RAT as the RAN 104/113 or a different RAT.
[0257] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities (e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links). For example, the WTRU 102c shown in FIG. 39 may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.
[0258] FIG. 40 is a system diagram illustrating the RAN 104 and the CN 106 according to an embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the CN 106.
[0259] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a.
[0260] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, and the like. As shown in FIG. 40, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.
[0261] The CN 106 shown in FIG. 40 may include a mobility management entity (MME) 162, a serving gateway (SGW) 164, and a packet data network (PDN) gateway (or PGW) 166. While each of the foregoing elements are depicted as part of the CN 106, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0262] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM and/or WCDMA.
[0263] The SGW 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The SGW 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The SGW 164 may perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging if DL data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
[0264] The SGW 164 may be connected to the PGW 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
[0265] The CN 106 may facilitate communications with other networks. For example, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the CN 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 106 and the PSTN 108. In addition, the CN 106 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers.
[0266] Although the WTRU is described in FIGs. 39-41 as a wireless terminal, in certain
representative embodiments, such a terminal may use (e.g., temporarily or permanently) wired communication interfaces with the communication network.
[0267] In representative embodiments, the other network 112 may be a WLAN.
[0268] A WLAN in Infrastructure Basic Service Set (BSS) mode may have an Access Point (AP) for the BSS and one or more stations (STAs) associated with the AP. The AP may have an access or an interface to a Distribution System (DS) or another type of wired/wireless network that carries traffic into and/or out of the BSS. Traffic to STAs that originates from outside the BSS may arrive through the AP and may be delivered to the STAs. Traffic originating from STAs to destinations outside the BSS may be sent to the AP to be delivered to respective destinations. Traffic between STAs within the BSS may be sent through the AP, for example, where the source STA may send traffic to the AP and the AP may deliver the traffic to the destination STA. The traffic between STAs within a BSS may be considered and/or referred to as peer-to-peer traffic. The peer-to-peer traffic may be sent between (e.g., directly between) the source and destination STAs with a direct link setup (DLS). In certain representative embodiments, the DLS may use an 802.11e DLS or an 802.11z tunneled DLS (TDLS). A WLAN using an Independent BSS (IBSS) mode may not have an AP, and the STAs (e.g., all of the STAs) within or using the IBSS may communicate directly with each other. The IBSS mode of communication may sometimes be referred to herein as an "ad-hoc" mode of communication.
[0269] If using the 802.11ac infrastructure mode of operation or a similar mode of operation, the AP may transmit a beacon on a fixed channel, such as a primary channel. The primary channel may be a fixed width (e.g., 20 MHz wide bandwidth) or a dynamically set width via signaling. The primary channel may be the operating channel of the BSS and may be used by the STAs to establish a connection with the AP. In certain representative embodiments, Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) may be implemented, for example, in 802.11 systems. For CSMA/CA, the STAs (e.g., every STA), including the AP, may sense the primary channel. If the primary channel is sensed/detected and/or determined to be busy by a particular STA, the particular STA may back off. One STA (e.g., only one station) may transmit at any given time in a given BSS.
[0270] High Throughput (HT) STAs may use a 40 MHz wide channel for communication, for example, via a combination of the primary 20 MHz channel with an adjacent or nonadjacent 20 MHz channel to form a 40 MHz wide channel.
[0271] Very High Throughput (VHT) STAs may support 20 MHz, 40 MHz, 80 MHz, and/or 160 MHz wide channels. The 40 MHz and/or 80 MHz channels may be formed by combining contiguous 20 MHz channels. A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams. Inverse Fast Fourier Transform (IFFT) processing and time domain processing may be done on each stream separately. The streams may be mapped onto the two 80 MHz channels, and the data may be transmitted by a transmitting STA. At the receiver of the receiving STA, the above described operation for the 80+80 configuration may be reversed, and the combined data may be sent to the Medium Access Control (MAC).
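For orientation only, the 80+80 transmit flow described above can be caricatured in a few lines; the round-robin segment split, the naive subcarrier mapping, and the bare IFFT are simplifying assumptions and not a standards-accurate 802.11ac chain:

```python
import numpy as np

def segment_parse(symbols):
    """Divide encoded symbols into two streams, one per 80 MHz segment."""
    return symbols[0::2], symbols[1::2]

def to_time_domain(freq_symbols, nfft=256):
    """Place symbols on an FFT grid and run the IFFT for one segment."""
    grid = np.zeros(nfft, dtype=complex)
    grid[:len(freq_symbols)] = freq_symbols
    return np.fft.ifft(grid)

encoded = np.exp(1j * (np.pi / 2) * np.random.randint(0, 4, 468))  # QPSK-like symbols
seg_a, seg_b = segment_parse(encoded)
chan_a, chan_b = to_time_domain(seg_a), to_time_domain(seg_b)  # one per 80 MHz channel
```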
[0272] Sub 1 GHz modes of operation are supported by 802.11af and 802.11ah. The channel operating bandwidths and carriers are reduced in 802.11af and 802.11ah relative to those used in 802.11n and 802.11ac. 802.11af supports 5 MHz, 10 MHz and 20 MHz bandwidths in the TV White Space (TVWS) spectrum, and 802.11ah supports 1 MHz, 2 MHz, 4 MHz, 8 MHz, and 16 MHz bandwidths using non-TVWS spectrum. According to a representative embodiment, 802.11ah may support Meter Type Control/Machine-Type Communications, such as MTC devices in a macro coverage area. MTC devices may have certain capabilities, for example, limited capabilities including support for (e.g., only support for) certain and/or limited bandwidths. The MTC devices may include a battery with a battery life above a threshold (e.g., to maintain a very long battery life).
[0273] WLAN systems, which may support multiple channels and channel bandwidths, such as 802.11n, 802.11ac, 802.11af, and 802.11ah, include a channel which may be designated as the primary channel. The primary channel may have a bandwidth equal to the largest common operating bandwidth supported by all STAs in the BSS. The bandwidth of the primary channel may be set and/or limited by the STA, from among all STAs operating in the BSS, which supports the smallest bandwidth operating mode. In the example of 802.11ah, the primary channel may be 1 MHz wide for STAs (e.g., MTC type devices) that support (e.g., only support) a 1 MHz mode, even if the AP and other STAs in the BSS support 2 MHz, 4 MHz, 8 MHz, 16 MHz, and/or other channel bandwidth operating modes. Carrier sensing and/or Network Allocation Vector (NAV) settings may depend on the status of the primary channel. If the primary channel is busy, for example, due to a STA (which supports only a 1 MHz operating mode) transmitting to the AP, the entire available frequency band may be considered busy even though a majority of the frequency band remains idle and available.
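Two of the behaviors above, the primary channel being limited by the most constrained STA and a busy primary channel blocking the whole band, can be expressed compactly; the set-intersection formulation is an illustrative assumption:

```python
def primary_channel_width_mhz(sta_supported_widths):
    """Largest operating bandwidth (MHz) supported by every STA in the BSS."""
    common = set.intersection(*(set(w) for w in sta_supported_widths))
    return max(common)

def band_considered_busy(primary_channel_busy):
    # If the primary channel is busy, the whole band is treated as busy,
    # even when wider secondary channels are idle.
    return primary_channel_busy

width = primary_channel_width_mhz([[1, 2, 4], [1, 2, 4, 8, 16], [1]])  # -> 1
```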
[0274] In the United States, the available frequency bands, which may be used by 802.11ah, are from 902 MHz to 928 MHz. In Korea, the available frequency bands are from 917.5 MHz to 923.5 MHz. In Japan, the available frequency bands are from 916.5 MHz to 927.5 MHz. The total bandwidth available for 802.11ah is 6 MHz to 26 MHz depending on the country code.
[0275] FIG. 41 is a system diagram illustrating the RAN 113 and the CN 115 according to an embodiment. As noted above, the RAN 113 may employ an NR radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 113 may also be in communication with the CN 115.
[0276] The RAN 113 may include gNBs 180a, 180b, 180c, though it will be appreciated that the RAN 113 may include any number of gNBs while remaining consistent with an embodiment. The gNBs 180a, 180b, 180c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the gNBs 180a, 180b, 180c may implement MIMO technology. For example, gNBs 180a, 180b may utilize beamforming to transmit signals to and/or receive signals from the WTRUs 102a, 102b, 102c. Thus, the gNB 180a, for example, may use multiple antennas to transmit wireless signals to, and/or receive wireless signals from, the WTRU 102a. In an embodiment, the gNBs 180a, 180b, 180c may implement carrier aggregation technology. For example, the gNB 180a may transmit multiple component carriers to the WTRU 102a (not shown). A subset of these component carriers may be on unlicensed spectrum while the remaining component carriers may be on licensed spectrum. In an embodiment, the gNBs 180a, 180b, 180c may implement Coordinated Multi-Point (CoMP) technology. For example, WTRU 102a may receive coordinated transmissions from gNB 180a and gNB 180b (and/or gNB 180c).
[0277] The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using transmissions associated with a scalable numerology. For example, the OFDM symbol spacing and/or OFDM subcarrier spacing may vary for different transmissions, different cells, and/or different portions of the wireless transmission spectrum. The WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using subframe or transmission time intervals (TTIs) of various or scalable lengths (e.g., containing varying number of OFDM symbols and/or lasting varying lengths of absolute time).
[0278] The gNBs 180a, 180b, 180c may be configured to communicate with the WTRUs 102a, 102b, 102c in a standalone configuration and/or a non-standalone configuration. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c without also accessing other RANs (e.g., such as eNode-Bs 160a, 160b, 160c). In the standalone configuration, WTRUs 102a, 102b, 102c may utilize one or more of gNBs 180a, 180b, 180c as a mobility anchor point. In the standalone configuration, WTRUs 102a, 102b, 102c may communicate with gNBs 180a, 180b, 180c using signals in an unlicensed band. In a non-standalone configuration, WTRUs 102a, 102b, 102c may communicate with/connect to gNBs 180a, 180b, 180c while also communicating with/connecting to another RAN such as eNode-Bs 160a, 160b, 160c. For example, WTRUs 102a, 102b, 102c may implement DC principles to communicate with one or more gNBs 180a, 180b, 180c and one or more eNode-Bs 160a, 160b, 160c substantially simultaneously. In the non-standalone configuration, eNode-Bs 160a, 160b, 160c may serve as a mobility anchor for WTRUs 102a, 102b, 102c and gNBs 180a, 180b, 180c may provide additional coverage and/or throughput for servicing WTRUs 102a, 102b, 102c.
[0279] Each of the gNBs 180a, 180b, 180c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E- UTRA, routing of user plane data towards User Plane Function (UPF) 184a, 184b, routing of control plane information towards Access and Mobility Management Function (AMF) 182a, 182b and the like. As shown in FIG. 41 , the gNBs 180a, 180b, 180c may communicate with one another over an Xn interface.
[0280] The CN 115 shown in FIG. 41 may include at least one AMF 182a, 182b, at least one UPF 184a,184b, at least one Session Management Function (SMF) 183a, 183b, and possibly a Data Network (DN) 185a, 185b. While each of the foregoing elements are depicted as part of the CN 115, it will be appreciated that any of these elements may be owned and/or operated by an entity other than the CN operator.
[0281] The AMF 182a, 182b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N2 interface and may serve as a control node. For example, the AMF 182a, 182b may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, support for network slicing (e.g., handling of different PDU sessions with different requirements), selecting a particular SMF 183a, 183b, management of the registration area, termination of NAS signaling, mobility management, and the like. Network slicing may be used by the AMF 182a, 182b in order to customize CN support for WTRUs 102a, 102b, 102c based on the types of services being utilized by WTRUs 102a, 102b, 102c. For example, different network slices may be established for different use cases such as services relying on ultra-reliable low latency (URLLC) access, services relying on enhanced massive mobile broadband (eMBB) access, services for machine type communication (MTC) access, and/or the like. The AMF 182a, 182b may provide a control plane function for switching between the RAN 113 and other RANs (not shown) that employ other radio technologies, such as LTE, LTE-A, LTE-A Pro, and/or non-3GPP access technologies such as WiFi.
[0282] The SMF 183a, 183b may be connected to an AMF 182a, 182b in the CN 115 via an N11 interface. The SMF 183a, 183b may also be connected to a UPF 184a, 184b in the CN 115 via an N4 interface. The SMF 183a, 183b may select and control the UPF 184a, 184b and configure the routing of traffic through the UPF 184a, 184b. The SMF 183a, 183b may perform other functions, such as managing and allocating UE IP addresses, managing PDU sessions, controlling policy enforcement and QoS, providing downlink data notifications, and the like. A PDU session type may be IP-based, non-IP based, Ethernet-based, and the like.
[0283] The UPF 184a, 184b may be connected to one or more of the gNBs 180a, 180b, 180c in the RAN 113 via an N3 interface, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The UPF 184a, 184b may perform other functions, such as routing and forwarding packets, enforcing user plane policies, supporting multi-homed PDU sessions, handling user plane QoS, buffering downlink packets, providing mobility anchoring, and the like.
[0284] The CN 115 may facilitate communications with other networks. For example, the CN 115 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the CN 115 and the PSTN 108. In addition, the CN 115 may provide the WTRUs 102a, 102b, 102c with access to the other networks 112, which may include other wired and/or wireless networks that are owned and/or operated by other service providers. In one embodiment, the WTRUs 102a, 102b, 102c may be connected to a local Data Network (DN) 185a, 185b through the UPF 184a, 184b via the N3 interface to the UPF 184a, 184b and an N6 interface between the UPF 184a, 184b and the DN 185a, 185b.
[0285] In view of FIGs. 37-41 , and the corresponding description of FIGs. 37-41 , one or more, or all, of the functions described herein with regard to one or more of: WTRU 102a-d, Base Station 114a-b, eNode-B 160a-c, MME 162, SGW 164, PGW 166, gNB 180a-c, AMF 182a-b, UPF 184a-b, SMF 183a-b, DN 185a-b, and/or any other device(s) described herein, may be performed by one or more emulation devices (not shown). The emulation devices may be one or more devices configured to emulate one or more, or all, of the functions described herein. For example, the emulation devices may be used to test other devices and/or to simulate network and/or WTRU functions.
[0286] The emulation devices may be designed to implement one or more tests of other devices in a lab environment and/or in an operator network environment. For example, the one or more emulation devices may perform the one or more, or all, functions while being fully or partially implemented and/or deployed as part of a wired and/or wireless communication network in order to test other devices within the communication network. The one or more emulation devices may perform the one or more, or all, functions while being temporarily implemented/deployed as part of a wired and/or wireless communication network. The emulation device may be directly coupled to another device for purposes of testing and/or may perform testing using over-the-air wireless communications.
[0287] The one or more emulation devices may perform the one or more, including all, functions while not being implemented/deployed as part of a wired and/or wireless communication network. For example, the emulation devices may be utilized in a testing scenario in a testing laboratory and/or a non-deployed (e.g., testing) wired and/or wireless communication network in order to implement testing of one or more components. The one or more emulation devices may be test equipment. Direct RF coupling and/or wireless communications via RF circuitry (e.g., which may include one or more antennas) may be used by the emulation devices to transmit and/or receive data.
[0288] Note that various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as commonly referred to as RAM or ROM.
[0289] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element may be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
[0290] In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes may be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings.
[0291] The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
[0292] Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has", "having," "includes", "including," "contains", "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . . a", or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially",
"essentially", "approximately", "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1 % and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way but may also be configured in ways that are not listed.
[0293] It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used.
[0294] Accordingly, some embodiments of the present disclosure, or portions thereof, may combine one or more processing devices with one or more software components (e.g., program code, firmware, resident software, micro-code, etc.) stored in a tangible computer-readable memory device, which in combination form a specifically configured apparatus that performs the functions as described herein. These combinations that form specially programmed devices may be generally referred to herein as "modules." The software component portions of the modules may be written in any computer language and may be a portion of a monolithic code base, or may be developed in more discrete code portions, such as is typical in object-oriented computer languages. In addition, the modules may be distributed across a plurality of computer platforms, servers, terminals, and the like. A given module may even be implemented such that separate processor devices and/or computing hardware platforms perform the described functions.
[0295] Moreover, an embodiment may be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
[0296] The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as separately claimed subject matter.

Claims

What is Claimed:
1. A method comprising:
displaying, in a first virtual space, a first virtual reality (VR) content to a first user of a first head- mounted display (HMD) device;
determining a first level of interactivity for a virtual object within the first virtual space;
determining a first probable-collision frame representing a first region with a first probability of entry by the first user based on the first level of interactivity for the virtual object and a location of the virtual object within the first virtual space; and
projecting, on a surface proximate to the first user, a first indication of the first probable-collision frame.
2. The method of claim 1, wherein determining the first level of interactivity for the virtual object within the first virtual space comprises calculating an interactivity value, wherein the interactivity value comprises an estimated likelihood of the first user moving toward the virtual object within the first virtual space.
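By way of non-limiting illustration only, and not as part of the claimed subject matter, the interactivity value recited in claim 2 could be estimated roughly as sketched below. The choice of gaze alignment and proximity as proxies for the likelihood of the user moving toward the object, and names such as estimate_interactivity, are assumptions introduced here for clarity.

```python
import math

def estimate_interactivity(user_pos, gaze_dir, object_pos, max_range=5.0):
    """Return a value in [0, 1] approximating how likely the user is to move
    toward the virtual object (illustrative proxy, not the claimed method)."""
    # Vector from the user to the object and its length.
    dx, dy, dz = (object_pos[i] - user_pos[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0:
        return 1.0
    # Cosine of the angle between the gaze direction and the object direction.
    gx, gy, gz = gaze_dir
    gnorm = math.sqrt(gx * gx + gy * gy + gz * gz) or 1.0
    alignment = max(0.0, (gx * dx + gy * dy + gz * dz) / (gnorm * dist))
    # Nearer, better-aligned objects are assumed more likely to draw the user in.
    proximity = max(0.0, 1.0 - dist / max_range)
    return 0.5 * alignment + 0.5 * proximity
```

Other proxies (e.g., the characteristic display parameters discussed in claim 17 below) could be substituted without changing the overall structure of the estimate.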
3. The method of claim 1, further comprising:
displaying, in a second virtual space, a second virtual reality (VR) content to a second user of a second head-mounted display (HMD) device;
determining a second level of interactivity for a second virtual object within the second virtual space;
determining a second probable-collision frame representing a second region with a second probability of entry by the second user based on the second level of interactivity for the second virtual object and a location of the second virtual object within the second virtual space; and
projecting, on a second surface proximate to the second user, a second indication of the second probable-collision frame.
4. The method of claim 3,
wherein projecting the first indication of the first probable-collision frame and projecting the second indication of the second probable-collision frame are performed by a common projector, and wherein the first virtual space is connected to the second virtual space.
5. The method of any of claims 1 to 3, wherein determining the first probable-collision frame is further based on dimensional body data for the first user.
6. The method of claim 5, further comprising determining an average of the dimensional body data for the first user, wherein determining the first probable-collision frame is further based on the average of the dimensional body data for the first user.
7. The method of any of claims 1 to 4, wherein determining the first probable-collision frame comprises selecting a first probable-collision frame that includes the virtual object within the first virtual space having a largest determined level of interactivity.
8. The method of any of claims 1 to 4, further comprising measuring a first distance between the first user and a first external user, wherein projecting the first indication of the first probable-collision frame is responsive to the first measured distance being less than a first threshold.
9. The method of claim 8, further comprising measuring a second distance between the first user and a second external user, wherein projecting the first indication of the first probable-collision frame is responsive to at least one of the first measured distance and the second measured distance being less than a second threshold.
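As a non-limiting sketch of the distance gating recited in claims 8 and 9, one possible implementation simply checks whether any measured distance to an external user falls below the applicable threshold before projecting the indication; the function name should_project is hypothetical.

```python
def should_project(distances_to_external_users, threshold):
    """Project the probable-collision frame indication only when at least one
    external user is measured closer than the threshold (claims 8-9 style gating)."""
    return any(d < threshold for d in distances_to_external_users)

# Example: external users measured at 3.2 m and 1.4 m, threshold of 2.0 m -> True.
print(should_project([3.2, 1.4], 2.0))
```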
10. The method of any of claims 1 to 4, wherein projecting the first indication of the first probable-collision frame includes rendering the first indication of the first probable-collision frame based on a point-of-view position of the first external user.
11. The method of any of claims 1 to 4, wherein projecting the first indication of the first probable-collision frame includes projecting a rendering of one of the one or more virtual objects.
12. The method of any of claims 1 to 4, further comprising adaptively changing the first indication of the first probable-collision frame in response to the first VR content displayed on the first HMD device.
13. The method of any of claims 1 to 4, wherein projecting the first indication of the first probable-collision frame includes projecting a rendering of a heat map that indicates a probability of collision with the first user.
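The heat map of claim 13 could, under one simple set of assumptions, be produced by mapping a grid of per-cell collision probabilities to colors before projection; the red-to-blue ramp and the name probability_grid_to_rgb below are illustrative only, not the claimed rendering.

```python
def probability_grid_to_rgb(grid):
    """Map a 2-D grid of collision probabilities in [0, 1] to RGB triples,
    with high probability shown as red and low probability as blue."""
    image = []
    for row in grid:
        image_row = []
        for p in row:
            p = min(1.0, max(0.0, p))               # clamp to [0, 1]
            image_row.append((int(255 * p),          # red grows with collision risk
                              0,
                              int(255 * (1.0 - p)))) # blue shrinks with collision risk
        image.append(image_row)
    return image

# Example: a 2x2 probability grid around the user's probable-collision frame.
print(probability_grid_to_rgb([[0.1, 0.8], [0.4, 1.0]]))
```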
14. The method of any of claims 1 to 4,
wherein projecting the first indication of the first probable-collision frame includes projecting a warning image between the first user and another user in a real-world space,
wherein the other user is within a threshold distance from the first user, and wherein the warning image is configured to avoid a potential collision between the first user and the other user.
15. An apparatus comprising:
a projector;
a processor; and
a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of:
displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device;
determining a level of interactivity for one or more virtual objects within the virtual space;
determining a probable-collision frame representing a region with a probability of entry by the user based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space; and
projecting, on a surface proximate to the user, an indication of the probable-collision frame.
16. A method comprising:
displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device;
determining a probable-collision frame representing a region with a probability of entry by the user; and
projecting, on a surface proximate to the user, an indication of the probable-collision frame.
17. The method of claim 16, further comprising:
determining a level of interactivity for one or more virtual objects within the virtual space, wherein determining the level of interactivity comprises determining an average of one or more characteristic display parameters for each of the one or more virtual objects within the virtual space,
wherein the characteristic display parameters include one or more parameters selected from the group consisting of pixel size, interest of the virtual object, degree of pixel change, display time, degree of interaction induction, and location.
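A minimal sketch of the averaging recited in claim 17 follows, under the assumption (introduced here, not in the claim) that each characteristic display parameter has already been normalized to the range [0, 1]; the function and parameter names are hypothetical.

```python
def level_of_interactivity(display_params):
    """Average the normalized characteristic display parameters of a virtual
    object (e.g., pixel size, degree of pixel change, display time, degree of
    interaction induction), in the style of claim 17."""
    values = list(display_params.values())
    return sum(values) / len(values) if values else 0.0

# Example with hypothetical normalized parameters for one virtual object.
params = {"pixel_size": 0.6, "degree_of_pixel_change": 0.8,
          "display_time": 0.5, "interaction_induction": 0.9}
print(level_of_interactivity(params))  # -> 0.7
```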
18. The method of claim 17, further comprising:
determining a profile information value as an average of user profile dimensional body data for the user; and
determining a physical motion area ratio value based on the level of interactivity and the profile information value,
wherein determining the probable-collision frame is based on the level of interactivity for the one or more virtual objects and a location of the one or more virtual objects within the virtual space and the physical motion area ratio value.
19. The method of claim 18, wherein determining the physical motion area ratio value comprises
determining an average of the profile information value and the level of interactivity for one of the one or more virtual objects within the virtual space.
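Claims 18 and 19 recite the physical motion area ratio value as an average of the profile information value and the level of interactivity, the profile information value itself being an average of the user's dimensional body data. A minimal sketch is given below, assuming (for illustration only) that all inputs are expressed on a common normalized scale; the function names are introduced here and are not part of the claims.

```python
def profile_information_value(body_data):
    """Average of the user's dimensional body data (assumed pre-normalized)."""
    return sum(body_data) / len(body_data)

def physical_motion_area_ratio(profile_value, interactivity):
    """Average of the profile information value and the level of interactivity
    for one virtual object, in the style of claim 19."""
    return (profile_value + interactivity) / 2.0

# Example: normalized reach and height of 0.6 and 0.8, interactivity of 0.7.
profile = profile_information_value([0.6, 0.8])   # -> 0.7
print(physical_motion_area_ratio(profile, 0.7))   # -> 0.7
```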
20. The method of any of claims 16 to 19,
wherein projecting the indication of the probable-collision frame on the surface proximate to the user includes:
selecting the surface proximate to the user, wherein the selected surface is located between the user and an external user outside of the virtual space within a threshold distance of the user;
rendering the probable-collision frame; and
projecting the rendered probable-collision frame on the selected surface.
21. The method of claim 20, wherein rendering the probable-collision frame comprises:
mapping a point-of-view position of the external user to the virtual space;
orienting the probable-collision frame to be facing the external user; and
distorting a horizontal ratio and a vertical ratio of the probable-collision frame based on the point-of-view position of the external user.
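One possible, deliberately simplified realization of the rendering steps of claim 21 is sketched below: the frame is rotated to face the external user, and its vertical extent is pre-stretched by a cosine foreshortening factor derived from the viewer's elevation. In this simplification, re-orienting the frame toward the viewer is taken to remove the horizontal obliqueness, so only the vertical ratio is compensated; a fuller implementation would distort both ratios as the claim recites. The compensation scheme and all names used are assumptions introduced here, not the claimed rendering pipeline.

```python
import math

def render_frame_for_viewer(frame_center, viewer_pos, width, height, min_cos=0.2):
    """Orient the probable-collision frame toward the external viewer and
    pre-distort its vertical ratio so it appears roughly proportional from the
    viewer's point-of-view position (illustrative sketch only)."""
    dx = viewer_pos[0] - frame_center[0]
    dy = viewer_pos[1] - frame_center[1]   # vertical offset of the viewer's eyes
    dz = viewer_pos[2] - frame_center[2]
    # Face the viewer: yaw (rotation about the vertical axis) toward the viewer.
    yaw = math.degrees(math.atan2(dx, dz))
    # Cosine of the viewer's elevation angle, used as a foreshortening proxy.
    horizontal_dist = math.sqrt(dx * dx + dz * dz) or 1e-6
    total_dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-6
    cos_elevation = max(min_cos, horizontal_dist / total_dist)  # bound the stretch
    # Stretch the vertical extent to counter the elevation foreshortening.
    return {"yaw_degrees": yaw, "width": width, "height": height / cos_elevation}
```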
22. The method of any of claims 16 to 19, wherein projecting the indication of the probable-collision frame comprises rendering a heat map as the probable-collision frame.
23. An apparatus comprising:
a projector;
a processor; and
a non-transitory computer-readable medium storing instructions that are operative, if executed on the processor, to perform the processes of:
displaying, in a virtual space, a virtual reality (VR) content to a user of a head-mounted display (HMD) device;
determining a probable-collision frame representing a region with a probability of entry by the user; and
projecting, on a surface proximate to the user, an indication of the probable-collision frame.
24. The apparatus of claim 23, wherein the apparatus comprises the HMD device, and the HMD device comprises the projector and the processor.
25. The apparatus of claim 23, wherein the projector is external to and separate from the HMD device.
PCT/US2018/028451 2017-04-26 2018-04-19 Method and apparatus for projecting collision-deterrents in virtual reality viewing environments WO2018200315A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762490402P 2017-04-26 2017-04-26
US62/490,402 2017-04-26

Publications (1)

Publication Number Publication Date
WO2018200315A1 true WO2018200315A1 (en) 2018-11-01

Family

ID=62167932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/028451 WO2018200315A1 (en) 2017-04-26 2018-04-19 Method and apparatus for projecting collision-deterrents in virtual reality viewing environments

Country Status (1)

Country Link
WO (1) WO2018200315A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120194554A1 (en) * 2011-01-28 2012-08-02 Akihiko Kaino Information processing device, alarm method, and program
US20150024368A1 (en) * 2013-07-18 2015-01-22 Intelligent Decisions, Inc. Systems and methods for virtual environment conflict nullification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BACHMANN ERIC R ET AL: "Collision prediction and prevention in a simultaneous two-user immersive virtual environment", 2013 IEEE VIRTUAL REALITY (VR), IEEE, 18 March 2013 (2013-03-18), pages 89 - 90, XP032479432, ISSN: 1087-8270, ISBN: 978-1-4673-4795-2, [retrieved on 20130627], DOI: 10.1109/VR.2013.6549377 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10664988B2 (en) * 2018-06-28 2020-05-26 Intel Corporation Methods and apparatus to avoid collisions in shared physical spaces using universal mapping of virtual environments
US11244471B2 (en) 2018-06-28 2022-02-08 Intel Corporation Methods and apparatus to avoid collisions in shared physical spaces using universal mapping of virtual environments
US10832484B1 (en) 2019-05-09 2020-11-10 International Business Machines Corporation Virtual reality risk detection
TWI747186B (en) * 2020-03-05 2021-11-21 國立臺北科技大學 Methods and systems of augmented reality processing, computer program product and computer-readable recording medium
WO2021233568A1 (en) * 2020-05-20 2021-11-25 Gixel GmbH Augmented reality glasses with external projection area
WO2021234015A1 (en) * 2020-05-20 2021-11-25 Gixel GmbH Spectacles display system for displaying a virtual image in a field of vision of a user
US11481931B2 (en) 2020-07-07 2022-10-25 Qualcomm Incorporated Virtual private space for extended reality
CN114758105A (en) * 2022-04-27 2022-07-15 歌尔股份有限公司 Collision prompt method, collision prevention device and computer readable storage medium
WO2023242431A1 (en) * 2022-06-17 2023-12-21 Interdigital Ce Patent Holdings, Sas Degree-of-freedom control in xr experiences
CN115543093A (en) * 2022-11-24 2022-12-30 浙江安吉吾知科技有限公司 Anti-collision system based on VR technology interaction entity movement
CN117055739A (en) * 2023-10-11 2023-11-14 深圳优立全息科技有限公司 Holographic equipment interaction method, device, equipment and storage medium
CN117055739B (en) * 2023-10-11 2024-01-26 深圳优立全息科技有限公司 Holographic equipment interaction method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2018200315A1 (en) Method and apparatus for projecting collision-deterrents in virtual reality viewing environments
US11442535B2 (en) Systems and methods for region of interest estimation for virtual reality
JP7333802B2 (en) Metrics and messages to improve your 360-degree adaptive streaming experience
US11360553B2 (en) Systems and methods employing predictive overfilling for virtual reality
US20220180842A1 (en) System and method for prioritizing ar information based on persistence of real-life objects in the user's view
US20180276891A1 (en) System and method for providing an in-context notification of a real-world event within a virtual reality experience
US20200322632A1 (en) Face discontinuity filtering for 360-degree video coding
US20230337269A1 (en) Methods, architectures, apparatuses and systems for extended reality-assisted radio resource management
WO2018200337A1 (en) System and method for simulating light transport between virtual and real objects in mixed reality
WO2019089382A1 (en) 360-degree video coding using face-based geometry padding
US11223848B2 (en) Weighted to spherically uniform PSNR for 360-degree video quality evaluation using cubemap-based projections
US20230377273A1 (en) Method for mirroring 3d objects to light field displays
US20240155417A1 (en) Integrated sensing coordination with a sensing operation management function
WO2023242431A1 (en) Degree-of-freedom control in xr experiences
WO2023081197A1 (en) Methods and apparatus for supporting collaborative extended reality (xr)
WO2023283240A1 (en) Method and procedures for adaptive high granularity sensing using multi-sta coordination

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18725061

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18725061

Country of ref document: EP

Kind code of ref document: A1