US12236524B2 - Methods and apparatus for the creation and evaluation of virtual reality interactions - Google Patents
- Publication number
- US12236524B2 (application No. US17/968,300)
- Authority
- US
- United States
- Prior art keywords
- virtual object
- virtual
- data
- interaction
- interaction data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
Definitions
- the present disclosure relates to virtual reality interactions.
- Images of virtual environments are typically generated for display to a user, and developers can create features within the virtual environment which are responsive to a user input, thereby providing the user with sensory feedback for the virtual environment.
- a typical example of this is when a user provides an input to interact with virtual objects in a virtual environment, and data associated with the virtual objects is processed to provide a visual or audio update with respect to the virtual objects which is perceivable by the user.
- visual and audio sensations can thus provide the user with sensory feedback that the user has interacted with the virtual object, thereby improving immersion.
- Example embodiments include at least a system, a method, a computer program and a machine-readable, non-transitory storage medium which stores such a computer program.
- FIG. 4 is a schematic flowchart of a data processing method.
- FIG. 1 schematically illustrates the overall system architecture of a computer game processing apparatus such as the Sony® PlayStation 4® entertainment device.
- a system unit 10 is provided, with various peripheral devices connectable to the system unit.
- Connected to the bus 40 are data storage components such as a hard disk drive 37, and a Blu-ray® drive 36 operable to access data on compatible optical discs 36A. Additionally, the RAM unit 22 may communicate with the bus 40.
- auxiliary processor 38 is also connected to the bus 40 .
- the auxiliary processor 38 may be provided to run or support the operating system.
- the system unit 10 communicates with peripheral devices as appropriate via an audio/visual input port 31 , an Ethernet® port 32 , a Bluetooth® wireless link 33 , a Wi-Fi® wireless link 34 , or one or more universal serial bus (USB) ports 35 .
- Audio and video may be output via an AV output 39 , such as an HDMI port.
- the peripheral devices may include a monoscopic or stereoscopic video camera 41 such as the PlayStation Eye®; wand-style videogame controllers 42 such as the PlayStation Move® and conventional handheld videogame controllers 43 such as the DualShock 4®; portable entertainment devices 44 such as the PlayStation Portable® and PlayStation Vita®; a keyboard 45 and/or a mouse 46 ; a media controller 47 , for example in the form of a remote control; and a headset 48 .
- Other peripheral devices may similarly be considered, such as a printer or a 3D printer (not shown).
- the GPU 20B, optionally in conjunction with the CPU 20A, processes data and generates video images (image data) and optionally audio for output via the AV output 39.
- the audio may be generated in conjunction with the GPU, or instead by an audio processor (not shown).
- the video and optionally the audio may be presented to a television 51 .
- the video may be stereoscopic.
- the audio may be presented to a home cinema system 52 in one of a number of formats such as stereo, 5.1 surround sound or 7.1 surround sound.
- Video and audio may likewise be presented to a head mounted display unit 53 worn by a user 60 .
- the entertainment device defaults to an operating system such as a variant of FreeBSD 9.0.
- the operating system may run on the CPU 20A, the auxiliary processor 38, or a mixture of the two.
- the operating system provides the user with a graphical user interface such as the PlayStation Dynamic Menu. The menu allows the user to access operating system features and to select games and optionally other content.
- FIG. 1 therefore provides an example of a data processing apparatus suitable for processing data associated with a virtual environment to generate images of the virtual environment for display to a user.
- an HMD 20 is shown as an example of an output device for displaying images of a virtual environment to a user 10 .
- the user 10 is wearing the HMD 20 (as an example of a generic head-mountable apparatus—other examples including audio headphones or a head-mountable light source) on the user's head 30 .
- the HMD comprises a frame 40 , in this example formed of a rear strap and a top strap, and a display portion 50 .
- the HMD of FIG. 2 may comprise further features, which are not shown in FIG. 2 for clarity of this initial explanation.
- the HMD of FIG. 2 completely (or at least substantially completely) obscures the user's view of the surrounding environment. All that the user can see is the pair of images displayed within the HMD, as supplied by an external processing device such as the data processing apparatus of FIG. 1 . In some cases, images may instead (or additionally) be generated by a processor or obtained from memory located at the HMD itself.
- the HMD has associated headphone audio transducers or earpieces 60 which fit into the user's left and right ears 70 .
- the earpieces 60 replay an audio signal provided from an external source, which may be the same as the video signal source which provides the video signal for display to the user's eyes.
- this HMD may be considered as a so-called “full immersion” HMD. Note however that in some cases the HMD is not a full immersion HMD, and may provide at least some facility for the user to see and/or hear the user's surroundings.
- a camera, for example a camera mounted on the HMD, may be used for this purpose.
- a Bluetooth® antenna may provide communication facilities or may simply be arranged as a directional antenna to allow a detection of the direction of a nearby Bluetooth® transmitter.
- a video signal is provided for display by the HMD.
- This could be provided by an external video signal source 80 such as a video games machine or data processing apparatus (such as a personal computer or the PS5®), in which case the signals could be transmitted to the HMD by a wired or a wireless connection.
- examples of suitable wireless connections include Bluetooth® connections, and examples of suitable wired connections include High Definition Multimedia Interface (HDMI®) and DisplayPort®.
- Audio signals for the earpieces 60 can be carried by the same connection.
- any control signals passed between the HMD and the video (audio) signal source may be carried by the same connection.
- a power supply (including one or more batteries and/or being connectable to a mains power outlet) may be provided.
- the power supply and the video signal source 80 may be separate units or may be embodied as the same physical unit. There may be separate cables for power and video (and indeed for audio) signal supply, or these may be combined for carriage on a single cable (for example, using separate conductors, as in a USB cable, or in a similar way to a “power over Ethernet” arrangement in which data is carried as a balanced signal and power as direct current, over the same collection of physical wires).
- the video and/or audio signal may in some examples be carried by an optical fibre cable. In other examples, at least part of the functionality associated with generating image and/or audio signals for presentation to the user may be carried out by circuitry and/or processing forming part of the HMD itself.
- FIG. 2 provides an example of a head-mountable display comprising a frame to be mounted onto an observer's head, the frame defining one or two eye display positions which, in use, are positioned in front of a respective eye of the observer and a display element mounted with respect to each of the eye display positions, the display element providing a virtual image of a video display of a video signal from a video signal source to that eye of the observer.
- FIG. 2 shows just one example of an HMD.
- an HMD could use a frame more similar to that associated with conventional eyeglasses, namely a substantially horizontal leg extending back from the display portion to the top rear of the user's ear, possibly curling down behind the ear.
- the user's view of the external environment may not in fact be entirely obscured; the displayed images could be arranged so as to be superposed (from the user's point of view) over the external environment.
- the HMD is an example of an output device for displaying images of a virtual environment to the user 10 .
- the HMD can optionally provide an audio output in dependence upon one or more audio signals.
- the HMD may optionally comprise one or more haptic interfaces for providing a haptic output to the user 10 wearing the HMD according to one or more haptic signals.
- one or more haptic interfaces may be provided for providing a headset rumble.
- the user 10 may be provided with one or more haptic interactions via a peripheral device, such as a handheld controller.
- images of a virtual environment can be displayed to the user 10 and the user can interact with the virtual environment visually, audibly and haptically.
- Interaction data specifying one or more interactions available for a virtual object can thus be created and associated with a virtual object (e.g. associated with the graphics data for a virtual object) for use in triggering one or more interactions for that virtual object in response to a user input.
- the interaction data may take the form of, for example, one or more scripts.
- the interaction data for an object may specify an audio event for the object and specify a condition (e.g. a specific user input or a condition within the virtual environment such as proximity of two virtual objects) for which that audio event is to be triggered.
- interaction data can be created for virtual objects to provide a range of interaction mechanisms such as visual interactions, audible interactions and/or haptic interactions.
- virtual environments often include a number of virtual objects for which no interaction data has been provided during development of the content, and such objects can result in a significant loss of immersion for a user when the user attempts to interact therewith.
- Some in-game virtual environments include a potentially large number of virtual objects for which no user interaction is available and a user's realisation of the user's inability to interact with certain virtual objects can cause an abrupt loss of immersion for the user.
- the operations to be discussed below relate to techniques for improving virtual environments by allowing interaction data associated with a respective virtual object to be shared with other virtual objects in the virtual environment to permit use of the shared interaction data for providing interactions for other virtual objects.
- FIG. 3 schematically illustrates a data processing apparatus 300 for sharing interaction data for a first virtual object with another virtual object, thereby permitting use of the interaction data for, and thus user interaction with, that other virtual object.
- the data processing apparatus 300 comprises: receiving circuitry 310 to receive graphics data for at least a portion of a virtual environment, the graphics data including a plurality of virtual objects; a machine learning model 320 trained to output data indicative of an object classification for one or more of the virtual objects in dependence upon the graphics data; assignment circuitry 330 to assign one or more of the virtual objects to one or more virtual object groups in dependence upon the object classification for one or more of the virtual objects, wherein the virtual objects assigned to a same virtual object group have a same object classification; and control circuitry 340 to share first interaction data associated with a first virtual object assigned to a virtual object group with a second virtual object assigned to the virtual object group to permit use of the first interaction data for the second virtual object in response to a user interaction with the second virtual object in the virtual environment.
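The four-part apparatus described above can be sketched in software terms as follows. This is a minimal illustrative sketch, not the patented implementation: all class and function names are hypothetical, the "machine learning model 320" is stood in for by an arbitrary callable, and the circuitry elements are modelled as methods.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    # Hypothetical representation of a virtual object and its interaction data
    object_id: str
    interaction_data: dict = field(default_factory=dict)

class DataProcessingApparatus:
    def __init__(self, classifier):
        # Stands in for machine learning model 320: maps an object to a label
        self.classifier = classifier
        self.groups = {}  # object classification -> list of VirtualObject

    def receive(self, virtual_objects):
        # Receiving circuitry 310: accept graphics data for a scene
        self.objects = list(virtual_objects)

    def assign(self):
        # Assignment circuitry 330: group objects by predicted classification
        for obj in self.objects:
            label = self.classifier(obj)
            self.groups.setdefault(label, []).append(obj)

    def share(self):
        # Control circuitry 340: pool interaction data within each group
        for members in self.groups.values():
            pooled = {}
            for obj in members:
                pooled.update(obj.interaction_data)
            for obj in members:
                obj.interaction_data = dict(pooled)

phone_a = VirtualObject("phone_a", {"audio": "ringtone.wav"})
phone_b = VirtualObject("phone_b")  # no interaction data authored
car = VirtualObject("car_1", {"haptic": "engine_rumble"})

# Toy classifier: classification is the part of the id before the underscore
apparatus = DataProcessingApparatus(lambda o: o.object_id.split("_")[0])
apparatus.receive([phone_a, phone_b, car])
apparatus.assign()
apparatus.share()
print(phone_b.interaction_data)  # the phone group's audio event is now shared
```

After `share()`, `phone_b` can trigger the audio event originally authored only for `phone_a`, while the car (alone in its group) is unchanged.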
- the receiving circuitry 310 is configured to receive graphics data for at least a portion of a virtual environment, in which the graphics data includes a plurality of virtual objects associated with the virtual environment.
- the graphics data for a virtual object defines one or more visual characteristics for the virtual object.
- the graphics data may comprise mesh data, point cloud data and/or texture data for a respective virtual object.
- the graphics data for a virtual object may be predefined graphics data (e.g. created by a developer using a virtual reality content creation tool) which is suitable for being processed by a processing unit (e.g. GPU 20B) to generate image data for one or more image frames.
- the graphics data may comprise one or more images of the virtual environment that have been generated for display by an output device (such as the HMD, such as that shown in FIG. 2 , and/or a display device such as a TV or monitor).
- the receiving circuitry 310 is configured to receive graphics data comprising image data generated by the apparatus in FIG. 1 .
- the graphics data comprises any suitable data that visually characterises a virtual object, and may comprise one or more from the list consisting of: mesh data; point cloud data; texture data; and image data.
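The list above could be modelled as a simple container. This is a hypothetical sketch only; the patent does not prescribe any data format, and the field types here are placeholders.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GraphicsData:
    # Each field corresponds to one kind of data from the list above;
    # concrete types are illustrative placeholders.
    mesh: Optional[list] = None           # mesh data, e.g. vertex/face lists
    point_cloud: Optional[list] = None    # point cloud data
    texture: Optional[bytes] = None       # texture data
    images: Optional[List[bytes]] = None  # generated image frames

    def kinds(self):
        """Return which of the listed data kinds are present."""
        return [name for name in ("mesh", "point_cloud", "texture", "images")
                if getattr(self, name) is not None]

gd = GraphicsData(mesh=[(0, 0, 0), (1, 0, 0), (0, 1, 0)])
print(gd.kinds())  # ['mesh']
```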
- the data processing apparatus 300 may for example be provided as part of a device such as that shown in FIG. 1 , or another processing device such as a cloud gaming server or the HMD 20 .
- the receiving circuitry 310 may receive first mesh data associated with a first virtual object in a virtual environment and second mesh data associated with a second virtual object in the virtual environment.
- one or more images of the virtual environment generated for display by an output device can be received, in which one or more of the images include a visual representation of the first and second virtual objects.
- references herein to images received by the receiving circuitry 310 refer to either stereoscopic images for which left images and right images are generated for display to the respective eyes or a single image that is displayed to both eyes.
- the images of the virtual environment can be received and the images include a plurality of virtual objects having respective different positions within the virtual environment.
- a first virtual object, such as a virtual car, may be included in a first image, and another virtual object, such as another virtual car and/or a virtual object of a different type, may also be present.
- the received images include a plurality of virtual objects.
- the machine learning model 320 is trained to receive at least some of the graphics data for the virtual environment received by the receiving circuitry 310 and to output data indicative of an object classification for one or more of the virtual objects in dependence upon at least some of the graphics data.
- the machine learning model 320 provides data indicative of a classification for at least some of the virtual objects represented in the graphics data.
- the machine learning model 320 may output an integer which is mapped to a respective object classification for which the model has been trained, and the number of object classifications for which the machine learning model 320 is trained is not particularly limited. The machine learning techniques will be discussed in more detail later.
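The integer-to-classification mapping described above might look like the following sketch. The real model 320 could be, e.g., a convolutional network (per the G06N3/0464 classification), but here a trivial stub stands in; the label map and all names are hypothetical.

```python
# Hypothetical map from the model's integer output to a classification label.
LABELS = {0: "car", 1: "phone", 2: "chair"}

def classify(graphics_features, model):
    """Run the model and map its integer output to an object classification."""
    class_index = model(graphics_features)
    return LABELS.get(class_index, "unknown")

# Stub model: anything with four "wheels" features is treated as a car.
stub_model = lambda feats: 0 if feats.get("wheels") == 4 else 1

print(classify({"wheels": 4}, stub_model))  # car
print(classify({"wheels": 0}, stub_model))  # phone
```

A model trained on more classifications simply enlarges the label map; the number of classifications is, as the text notes, not particularly limited.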
- the assignment circuitry 330 is configured to assign one or more of the virtual objects in the graphics data (e.g. in one or more received images) to one or more virtual object groups in dependence upon the object classification for one or more of the virtual objects, wherein the virtual objects assigned to a same virtual object group each have a same object classification.
- Virtual objects having a same object classification (as predicted by the machine learning model 320 ) are assigned to a same group (virtual object group) by the assignment circuitry 330 .
- By assigning a virtual object to a virtual object group, one or more instances of interaction data associated with the virtual object are shared and thus made available for use by one or more other virtual objects assigned to the same virtual object group.
- the number of respective virtual object groups is not particularly limited and may be dependent on the number of different object classifications predicted by the machine learning model 320, or in some cases a predetermined number of virtual object groups may be established in advance.
- the data processing apparatus 300 establishes one or more virtual object groups, in which the virtual objects assigned to a same virtual object group each have a same object classification.
- the interaction data available for a given virtual object assigned to the virtual object group can be shared with and thus made available for use by any other virtual object assigned to that same virtual object group.
- the interaction data for the respective virtual objects assigned to a same virtual object group can be made accessible for use by any of the virtual objects assigned to the virtual object group so that certain virtual objects for which a developer has not created interaction data can be made interactive, and/or certain virtual objects which are already interactive can be enhanced with further interactions from other virtual objects assigned to the same group.
- a virtual object group may be established in advance for a given object classification and a reference virtual object may be assigned in advance to the virtual object group, in which the reference virtual object has associated interaction data.
- the assignment circuitry 330 can assign one or more virtual objects to the same virtual object group as the reference virtual object, and interaction data associated with the reference virtual object can be shared with one or more of the virtual objects assigned to the virtual object group by the assignment circuitry 330 .
- a user may manually create a reference virtual object and assign the reference virtual object to the virtual object group to act as a reference for sharing interaction data within the virtual object group.
- the assignment circuitry 330 can be configured to assign one or more virtual objects to a virtual object group in dependence upon the object classification for one or more of the virtual objects, and the control circuitry 340 can share the interaction data associated with the reference virtual object with one or more of the other virtual objects assigned to the virtual object group.
- the assignment circuitry 330 may assign a plurality of virtual objects to a same virtual object group, in which interaction data has been associated with a single virtual object assigned to the virtual object group.
- the control circuitry 340 can share the interaction data associated with the individual virtual object with each remaining virtual object assigned to the virtual object group. For example, in the case of a video game, a virtual phone (or other type of virtual object) in one scene may have been created and associated with interaction data, whereas other scenes may include virtual phones for which interaction data has not been provided.
- the data processing apparatus 300 can assign virtual objects to virtual object groups as discussed above so that, for example, the interaction data for an individual virtual phone in one scene can be shared with other virtual phones assigned to the same group and thus use of the interaction data can be permitted for another virtual phone in response to a user interaction with that virtual phone in the virtual environment. Consequently, a number of virtual objects which are available for user interaction can be increased thus providing a more interactive and immersive virtual environment.
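The "reference virtual object" variant described above, where a group is established in advance and seeded with a reference object's interaction data, can be sketched as follows. All names and the group layout are hypothetical.

```python
# Pre-established groups, each seeded by a reference virtual object's
# interaction data (e.g. a developer-authored virtual phone).
reference_groups = {
    "phone": {"reference_interactions": {"audio": "ring.wav",
                                         "animation": "vibrate"}},
}

def assign_with_reference(obj_name, classification, groups):
    """Assign an object to a pre-established group, if one matches its
    classification, and return the interaction data shared with it."""
    group = groups.get(classification)
    if group is None:
        return {}  # no pre-established group for this classification
    return dict(group["reference_interactions"])

# A phone in another scene, authored without interaction data, now
# inherits the reference phone's interactions.
shared = assign_with_reference("phone_in_scene_7", "phone", reference_groups)
print(shared)  # {'audio': 'ring.wav', 'animation': 'vibrate'}
```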
- a virtual object group may be established, by the assignment circuitry 330 , for each respective object classification indicated by the data output by the machine learning model 320 for the graphics data received for the virtual environment.
- the machine learning model 320 may receive at least some of the images (and/or polygonal meshes) of the virtual environment and output data indicative of N respective object classifications (where N is an integer that is greater than or equal to 1).
- the assignment circuitry 330 can thus establish N respective virtual object groups so that a virtual object group is established for each of the object classifications (e.g. object types).
- the assignment circuitry 330 can assign a virtual object represented in the images (and/or polygonal meshes) of the virtual environment to a respective virtual object group, and in the case of two or more virtual objects assigned to a same group, sharing of one or more instances of interaction data is permitted for the virtual objects assigned to that group.
- the number of virtual object groups can be freely established based on the number of respective object classifications predicted by the machine learning model 320 .
- the assignment circuitry 330 can be configured to define, in advance, a predetermined number of virtual object groups each corresponding to a different predetermined object classification and the assignment circuitry 330 assigns a virtual object to a predetermined virtual object group according to whether the object classification for the virtual object matches a predetermined object classification for the virtual object group.
- rather than the assignment circuitry 330 establishing a virtual object group for each respective object classification indicated by the data output by the machine learning model 320, one or more predetermined virtual object groups, each corresponding to a different predetermined object classification, can be defined in advance.
- the assignment circuitry 330 can be configured to define a first virtual object group corresponding to a first predetermined object classification and a second virtual object group corresponding to a second predetermined object classification, in which the predetermined object classifications are specified in advance (e.g. by a user). In this way, a virtual object group can be defined for a particular object classification. A user may select, in advance, one or more object classifications for which the assignment circuitry 330 is to establish a virtual object group.
- a first virtual object group corresponding to a virtual car classification and a second virtual object group corresponding to a virtual phone classification may be defined by the assignment circuitry 330 so that any virtual object predicted by the machine learning model 320 to have a car classification or a phone classification can be assigned to the appropriate group. It will be appreciated that any suitable object classification can be specified in advance.
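The predetermined-groups variant, where an object is assigned only if its predicted classification matches a group specified in advance, might be sketched like this (names hypothetical):

```python
# Groups fixed in advance, e.g. chosen by a user: one for cars, one for phones.
PREDEFINED_GROUPS = {"car": [], "phone": []}

def assign_if_match(obj_name, predicted_label, groups):
    """Assign an object to a predefined group only when its classification
    matches one; otherwise leave it unassigned."""
    if predicted_label in groups:
        groups[predicted_label].append(obj_name)
        return True
    return False  # no predefined group for this classification

assign_if_match("taxi", "car", PREDEFINED_GROUPS)
assign_if_match("lamp", "lamp", PREDEFINED_GROUPS)  # no "lamp" group: skipped
print(PREDEFINED_GROUPS)  # {'car': ['taxi'], 'phone': []}
```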
- the number of respective virtual object groups is not particularly limited. In the simplest case there may be a single virtual object group corresponding to a respective object classification. This may be because the machine learning model 320 has been trained for a single object classification or because the assignment circuitry 330 is configured to define, in advance, a single predetermined virtual object group.
- one or more instances of interaction data associated with that virtual object are shared and thus made available for use by one or more other virtual objects assigned to that same virtual object group.
- the one or more instances of interaction data represent data that has been created for, and associated with, a virtual object during a development stage for use by a data processor, such as a game engine, for applying one or more interactions for that virtual object.
- the interaction data associated with a given virtual object may comprise one or more from the list consisting of: animation data; audio data; and haptic data.
- the interaction data associated with a virtual object has been created in advance for the first virtual object.
- the interaction data associated with a virtual object represents predefined data created in advance for the virtual object, for example using a content creation tool. Examples of such content creation tools include the Unity editor, which provides various tools for creating data for virtual objects (e.g. game objects) for scenes.
- the interaction data associated with a virtual object may have been manually created by a developer specifically for that virtual object by manually specifying one or more properties for one or more interactions.
- the interaction data associated with a virtual object may have been selected by a developer from a database comprising instances of predefined interaction data for different virtual objects.
- the data processing apparatus 300 may comprise a storage medium to store (and allow retrieval by a game engine, or other processing unit, of) game data, in which the game data comprises one or more instances of interaction data associated with one or more virtual objects. Alternatively or in addition, such game data may be retrieved from a server.
- the interaction data associated with a given virtual object thus defines one or more properties for one or more user interactions.
- a data processing unit such as a game engine, for performing processing functions for progressing a game can be configured to generate an interaction signal for a virtual object using one or more instances of interaction data associated with the virtual object.
- the data processing unit can generate at least one of an audio signal, haptic signal and animation signal in dependence upon the interaction data.
- a virtual object such as a virtual phone may have one or more associated instances of interaction data for input to a data processor, such as a game engine, to provide an update with respect to the virtual object in the virtual environment in response to a user input.
- the interaction data associated with the virtual phone may comprise audio data for use by a game engine to generate one or more audio signals for the virtual phone to thereby provide an audio output to the user.
- the interaction data associated with the virtual phone may comprise haptic data for use by the game engine to generate one or more haptic signals for the virtual phone to thereby provide a haptic interaction to the user via one or more haptic interfaces (e.g. a handheld controller such as the Sony PS5 DualSense®).
- the interaction data associated with the virtual phone may comprise animation data for use by the game engine to generate one or more visual animations to be applied to the virtual phone to animate the virtual phone.
- interaction data associated with the virtual object is processable by a processing unit (such as a game engine) to generate one or more interaction signals for the virtual object.
- the interaction data associated with a virtual object thus enables user interaction with that virtual object.
- the assignment circuitry 330 assigns one or more of the virtual objects of the virtual environment to one or more of the virtual object groups, so that virtual objects assigned to a same virtual object group each have a same object classification.
- the control circuitry 340 is configured to share first interaction data associated with a first virtual object assigned to a virtual object group with a second virtual object assigned to the virtual object group to permit use of the first interaction data for the second virtual object in response to a user interaction with the second virtual object in the virtual environment. Therefore, interaction data that has been associated with a given virtual object by a developer can be made available for use by other virtual objects in the same group; interaction with a virtual object is enabled on the basis of the interaction data shared for the virtual object group to which the virtual object has been assigned.
- interaction data that has been associated with a first virtual telephone in the virtual environment can be shared within the virtual object group, and hence use of that interaction data can be permitted for another virtual telephone assigned to the same virtual object group with which no interaction data has been associated.
- the data processing apparatus 300 can be configured to receive graphics data from a processing apparatus such as that shown in FIG. 1 during an on-going game session, and to assign respective virtual objects to one or more virtual object groups accordingly so that interaction data can be shared.
- the data processing apparatus 300 can be configured to receive graphics data by accessing a virtual object database used by a content developer during a development stage of a video game (or other type of content) so as to obtain graphics data for at least some of the virtual objects in the virtual environment (or at least some of the virtual objects in a given scene for a video game) and assign at least some of the virtual objects to one or more virtual object groups to establish one or more such groups for allowing sharing of interaction data.
- control circuitry 340 is configured to share the first interaction data associated with the first virtual object with each virtual object assigned to the virtual object group.
- the assignment circuitry 330 is configured to assign at least two virtual objects having a same object classification, as predicted by the machine learning model 320 , to a same virtual object group and interaction data associated with a respective virtual object can be shared with any of the other virtual objects that have been assigned to the same group. As such, interaction data can be shared amongst the virtual objects assigned to a same virtual object group.
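The grouping and sharing behaviour described above can be sketched as follows; the dictionary shapes and the first-donor policy are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict

# Minimal sketch of the assignment/sharing behaviour described above:
# objects with the same predicted classification join one group, and
# interaction data present on a group member is shared with members
# that lack it.
def assign_groups(objects):
    groups = defaultdict(list)
    for obj in objects:
        groups[obj["classification"]].append(obj)
    return groups

def share_interaction_data(group):
    donors = [o for o in group if o.get("interaction") is not None]
    if not donors:
        return                             # nothing to share in this group
    shared = donors[0]["interaction"]      # first donor's data, for simplicity
    for obj in group:
        if obj.get("interaction") is None:
            obj["interaction"] = shared

objects = [
    {"name": "phone_01", "classification": "phone", "interaction": {"audio": "ring"}},
    {"name": "phone_02", "classification": "phone", "interaction": None},
    {"name": "bottle_01", "classification": "bottle", "interaction": None},
]
for group in assign_groups(objects).values():
    share_interaction_data(group)
# phone_02 now reuses phone_01's data; bottle_01 has no donor and stays None
```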
- the number of virtual objects assigned to a given virtual object group is not particularly limited.
- interaction data may have been created for and associated with a virtual bottle for a given scene for allowing user interaction(s) with the virtual bottle, such as grasping and drinking of a content of the virtual bottle by a virtual avatar with an associated audio sound.
- other portions of the virtual environment corresponding to other scenes may include other virtual bottles only for decorative purposes such that interaction data is not available for these other virtual bottles.
- the data processing apparatus 300 can thus assign a number of respective virtual objects each having an object classification corresponding to a bottle classification to a same virtual object group, and interaction data associated with one of the virtual bottles can be shared for use by any of the other virtual bottles to apply the same interaction(s). Therefore, user interaction with a larger number of virtual objects is enabled.
- the assignment circuitry 330 assigns a plurality of virtual objects to a same virtual object group, in which one of the plurality of virtual objects assigned to the virtual object group has associated interaction data and each of the remaining virtual objects assigned to the virtual object group has no associated interaction data.
- the interaction data for the virtual object is shared within the group so as to permit use of that interaction data for interaction with any of the virtual objects assigned to the group. Consequently, each of the virtual objects can be made interactive for a user and is capable of providing substantially the same interaction(s) as that of the virtual object for which the interaction data was initially created.
- a situation can arise in which two or more virtual objects have been assigned to a same virtual object group and two or more of the respective virtual objects have respective interaction data associated therewith.
- the assignment circuitry 330 is configured to assign a first virtual object and a second virtual object to a same virtual object group, and first interaction data is associated with the first virtual object and second interaction data is associated with the second virtual object.
- the first interaction data and the second interaction data may be complementary in that they provide different types of user interaction (e.g. one provides an audio interaction and the other provides a haptic interaction) or provide a same type of interaction but different events (e.g. a first audio event and a second audio event) such that both the first and second interaction data can be permitted for use by a respective virtual object.
- the first interaction data and the second interaction data may not be complementary.
- control circuitry 340 is configured to share one of the first interaction data and the second interaction data with a third virtual object assigned to the virtual object group to permit use of one of the first interaction data and the second interaction data for the third virtual object in response to a user interaction with the third virtual object in the virtual environment.
- in order to avoid potential conflicts arising from two or more respective instances of interaction data in a same group, whose mutual compatibility may be unknown, the control circuitry 340 can be configured to permit use of just one of the first and second interaction data (or, more generally, one of a plurality of instances of interaction data) for a given virtual object in the virtual object group.
- This technique can be performed without requiring an analysis of the respective instances of interaction data and can ensure that incompatible interactions are not combined for a respective virtual object. In this way, a respective instance of the interaction data can be permitted for use by another virtual object.
- the control circuitry 340 randomly selects one of the two or more virtual objects having associated interaction data and shares the selected object's interaction data for use by another virtual object.
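A minimal sketch of this random-donor selection, assuming a simple dictionary representation of objects (the shapes are illustrative):

```python
import random

# Sketch of the conflict-avoidance rule above: when several members of a
# group carry their own interaction data, exactly one donor is picked
# (here at random) rather than merging possibly incompatible instances.
def pick_donor(group, rng=random):
    donors = [obj for obj in group if obj.get("interaction")]
    return rng.choice(donors) if donors else None

group = [{"name": "A", "interaction": {"audio": "clink"}},
         {"name": "B", "interaction": {"audio": "pour"}},
         {"name": "C", "interaction": None}]
donor = pick_donor(group)
print(donor["name"])  # "A" or "B", never the data-less "C"
```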
- the control circuitry 340 is configured to detect an interaction type associated with each of the first interaction data and the second interaction data and, when the first interaction data and the second interaction data have different interaction types, to share both with a third virtual object assigned to the same virtual object group, thereby providing a user interaction for the third virtual object.
- the interaction data associated with a virtual object may enable different types of interaction for that object, including audio interaction, visual interaction and/or haptic interaction.
- An interaction type for the interaction data associated with a virtual object is detectable by the control circuitry 340 .
- the interaction data associated with a given virtual object may take the form of one or more code scripts and an interaction type can be detected based on one or more syntax elements associated with the script.
- metadata may be associated with a virtual object for indicating one or more interaction types associated with the interaction data for the virtual object.
- the control circuitry 340 can share both types of interaction data to permit use of both types of interaction data by a virtual object assigned to that virtual object group.
- first interaction data associated with a first virtual object may comprise audio data for providing an audio interaction and second interaction data associated with a second virtual object may comprise haptic data for providing a haptic interaction.
- the control circuitry 340 can thus detect the presence of the audio interaction type and the haptic interaction type for the two respective instances of interaction data and share the two instances with any of the virtual objects in the group, potentially including with each other.
- Table 1 below shows an example in which virtual objects A, B and C have been assigned to a same virtual object group and virtual objects A and B have been associated with respective interaction data having different interaction types (in this example, audio and haptic type interactions are used for explanation).
- Object A has associated interaction data corresponding to an audio type of interaction
- object B has associated interaction data corresponding to a haptic type of interaction
- object C has no associated interaction data (Y indicates presence and N indicates absence in the table).
- the control circuitry 340 can be configured to detect an interaction type for the interaction data, and in the example shown in Table 1 both the interaction data corresponding to the audio type of interaction and the interaction data corresponding to the haptic type of interaction can be shared with the object C.
- Table 2 below shows another example, in which virtual objects A, B and C have been assigned to a same virtual object group and virtual objects A and B have been associated with interaction data having different interaction types.
- object B has both haptic type interaction data and audio type interaction data associated therewith.
- the haptic type interaction data associated with object B can be shared with object C and a random selection of one of the instances of audio type interaction data from object A and object B can be made for selecting the instance of audio type interaction data to be shared with object C.
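The per-type sharing described for Table 2 might look like the following sketch: a sole provider of a type is used directly, and a random choice is made only when several objects provide the same type (the data shapes are illustrative assumptions):

```python
import random

# Sketch of the per-type sharing described for Table 2: for each interaction
# type, gather the group members providing it; if several do, select one at
# random as the donor for members that have no interaction data of their own.
def share_by_type(group, types=("audio", "haptic"), rng=random):
    targets = [o for o in group if not o["interaction"]]
    for t in types:
        donors = [o for o in group if t in o["interaction"]]
        if not donors:
            continue
        donor = donors[0] if len(donors) == 1 else rng.choice(donors)
        for obj in targets:
            obj.setdefault("shared", {})[t] = donor["name"]

group = [
    {"name": "A", "interaction": {"audio"}},
    {"name": "B", "interaction": {"audio", "haptic"}},
    {"name": "C", "interaction": set()},
]
share_by_type(group)
# C's haptic data comes from B (the sole donor); its audio donor is A or B
```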
- a single virtual object which provides the greatest number of interaction types may be preferentially selected so that only the interaction data associated with that single virtual object is shared so as to be permitted for use by another virtual object.
- a virtual object having a most complete set of interactions specified by the interaction data may be selected by the control circuitry 340 for sharing with the other virtual objects assigned to that virtual object group. In this example, the interaction data for object B provides the most complete set of interactions and can be selected for sharing, whereas the interaction data for object A is not selected.
- the control circuitry 340 is configured to detect a number of respective interaction types associated with the first interaction data and the second interaction data, respectively, and to share the interaction data having the greatest number of respective interaction types with the third virtual object.
- the interaction data associated with virtual object B represents the interaction data having the greatest number of respective interaction types.
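A sketch of this "most complete set" selection rule, assuming each object's enabled interaction types are known as a set (names are illustrative):

```python
# Sketch of the "most complete set" rule above: among group members that
# carry interaction data, pick the one enabling the most interaction types.
def most_complete_donor(group):
    donors = [obj for obj in group if obj["types"]]
    return max(donors, key=lambda obj: len(obj["types"]), default=None)

group = [{"name": "A", "types": {"audio"}},
         {"name": "B", "types": {"audio", "haptic"}},
         {"name": "C", "types": set()}]
print(most_complete_donor(group)["name"])  # B
```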
- the above discussion refers to sharing interaction data within a virtual object group to permit use of the shared interaction data for a third virtual object for which no interaction data has been provided. In some cases it may or may not be beneficial to permit use of interaction data associated with one virtual object for another object for which interactions have already been enabled.
- a developer may have associated first interaction data with the first virtual object and also associated interaction data with another virtual object assigned to a same group as the first virtual object. In this case, sharing of the first interaction data with that other virtual object may not be beneficial, as the developer has already enabled interactions for it.
- the assignment circuitry 330 is configured to assign a plurality of virtual objects to a same virtual object group and to share the first interaction data associated with the first virtual object with another virtual object (in other words, a second virtual object) assigned to the virtual object group in dependence upon whether interaction data is associated with that other virtual object.
- the interaction data for the first virtual object can be selectively shared with other virtual objects in the group, so that it is shared with virtual objects for which there is no interaction data, while sharing with virtual objects that have already been associated with interaction data during development can be selectively prohibited.
- the control circuitry 340 is configured to prohibit sharing of the first interaction data with the second virtual object when the second interaction data is associated with the second virtual object (and vice versa).
- the first interaction data associated with the first virtual object and the second interaction data associated with the second virtual object may correspond to different interaction types.
- a developer may have provided interaction data specifying an audio interaction for a given virtual object, and also provided interaction data specifying a haptic interaction for another virtual object assigned to the same virtual object group.
- the control circuitry 340 is configured to detect an interaction type associated with the first interaction data and the second interaction data and to share the first interaction data with the second virtual object when the first interaction data and the second interaction data have a different interaction type. In this way, a virtual object that would otherwise have provided only an audio interaction can be made to provide both an audio interaction and a haptic interaction (or another similar combination of interactions) using the data shared within the virtual object group.
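The selective-sharing policy above can be summarised as a small predicate; treating "complementary" as "the donor offers at least one interaction type the target lacks" is an assumption for illustration:

```python
# Sketch of the selective-sharing policy above: sharing into an object that
# already has its own interaction data is prohibited, unless the donor
# offers an interaction type the target does not yet provide.
def may_share(donor_types: set, target_types: set) -> bool:
    if not target_types:
        return True                          # target has no data: always share
    return bool(donor_types - target_types)  # share only complementary types

print(may_share({"audio"}, set()))        # True: target is non-interactive
print(may_share({"audio"}, {"audio"}))    # False: same type, would conflict
print(may_share({"audio"}, {"haptic"}))   # True: adds a new interaction type
```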
- a developer may have created a first virtual sword for use in a first scene to provide an audio interaction with a user and then created another virtual sword (having either a same visual appearance or a different visual appearance) for use in another scene to provide a haptic interaction (e.g. providing a swooshing sound for movements of a virtual sword when manipulated by an avatar and providing a haptic interaction to simulate gripping a handle and/or striking an object).
- any combination of haptic data, audio data and animation data may be suitably used.
- the audio interaction data for object A can potentially be shared with object B so that both an audio interaction and a haptic interaction are possible for the object B.
- the haptic interaction data for object B can potentially be shared with object A so that both an audio interaction and a haptic interaction are possible for the object A, in which the haptic interaction is based on the interaction data shared by the object B.
- These represent examples of first and second interaction data that can be used in combination. In particular, by combining the first interaction data for one virtual object and the second interaction data for another object, a range of interaction mechanisms available for an already interactive object can be enhanced.
- the interaction data represents data that has been created for, and associated with, a virtual object during a development stage for use by a data processor, such as a game engine (e.g. the Unreal 4™ game engine), for applying one or more interactions for that virtual object.
- the interaction data may specify one or more of an audio event, haptic event and animation event, and one or more conditions (e.g. user inputs and/or position and/or orientation of the virtual object) for which such an event is to be triggered.
- the interaction data associated with a given virtual object may for example take the form of one or more code scripts (also referred to as interaction scripts) associated with the virtual object for specifying one or more events.
- a script can be created by a developer using editors such as the Unity editor and optionally an authoring tool such as Audiokinetic Wwise.
- the interaction data for a haptic feedback event for a virtual object may take the form of a script comprising code for specifying: a condition for which the haptic event is to be triggered, a target device for the haptic event (e.g. a handheld controller), one or more haptic effects, and an intensity for the one or more haptic effects.
- Interaction data for an audio event or animation event may take a similar form.
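A hypothetical interaction script for a haptic feedback event, carrying the fields named above (trigger condition, target device, haptic effects, intensity); the concrete keys and values are illustrative only, not a real engine's schema:

```python
# Hypothetical haptic-event interaction script, expressed as a dictionary.
# Every key and value is illustrative; real engines use their own formats.
haptic_script = {
    "event": "haptic",
    "trigger": {"input": "grab", "max_distance_m": 0.5},  # triggering condition
    "target_device": "handheld_controller",
    "effects": ["rumble", "trigger_resistance"],
    "intensity": 0.7,                                     # 0.0 .. 1.0
}

# The control circuitry is described as detecting an interaction type from
# syntax elements or metadata; with this shape, that reduces to a lookup.
def detect_type(script: dict) -> str:
    return script["event"]

print(detect_type(haptic_script))  # haptic
```

An audio or animation event script would carry analogous fields (e.g. a clip name and playback parameters instead of effects and intensity).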
- the interaction data may comprise a script enabling a virtual object to be animated in response to an interaction with a user's avatar, so that the virtual object can be grasped by a hand of the avatar (e.g. the script may specify attachment of the virtual object to a node of the avatar for a given condition).
- sharing of such interaction data within a given virtual object group may increase the number of virtual objects that can be grasped by a user's avatar.
- the machine learning model 320 may comprise, for example, a convolutional neural network (CNN).
- the machine learning model 320 is trained to output the data indicative of the object classification for a given virtual object in dependence upon a type of the given virtual object.
- the machine learning model 320 can be trained with labelled training data (e.g. labelled mesh data, labelled point cloud data, labelled texture data and/or labelled images), where the label provides an indication of a type of object that is present in the data.
- the machine learning model can thus learn features that are representative of a given type of object (via so-called ‘supervised learning’).
- the machine learning model 320 can thus be trained to classify the objects according to their type.
- the machine learning model 320 can output data indicative of a class of object for which the machine learning has been trained so that objects are classified according to their type.
- the output may comprise an integer which is mapped to a respective class label for which the model has been trained.
- labelled training data comprising labelled meshes and/or images of any object types such as cars, guns, chairs, hand tools and drinking vessels may be used to train the machine learning model to classify these types of objects.
- the machine learning model 320 can be trained to provide a relatively broad or narrow classification as required. For example, for an action-adventure game, a single object classification for car objects may be sufficient, whereas for a driving game a number of narrower classifications for different types of car, such as saloon type and convertible type, may be used. Similarly, a single classification for an object type corresponding to a ball may be used or narrower classifications such as golf ball, baseball and so on may be used.
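The integer-to-label mapping and the broad-versus-narrow label choice can be sketched as follows; the label sets mirror the examples above, and the fake logits stand in for a trained model's output:

```python
# Sketch of mapping a classifier's integer output to a class label, and of
# choosing a broad or narrow label set as described above. The labels are
# the examples from the text; the logits are fabricated for illustration.
BROAD_LABELS = ["car", "gun", "chair", "hand tool", "drinking vessel"]
NARROW_CAR_LABELS = ["saloon", "convertible"]

def classify(logits: list, labels: list) -> str:
    """Map the highest-scoring output index to its class label."""
    best = max(range(len(logits)), key=logits.__getitem__)
    return labels[best]

print(classify([0.1, 0.2, 0.05, 0.6, 0.05], BROAD_LABELS))  # hand tool
print(classify([0.9, 0.1], NARROW_CAR_LABELS))              # saloon
```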
- the machine learning model 320 is trained to output the data indicative of the object classification for a given virtual object in dependence upon a type and a size of the given virtual object.
- the machine learning model 320 can be trained with labelled training data, where the label provides an indication of both a type of object and a relative size for that object that is present in the training data.
- the machine learning model can thus learn features (e.g. mesh features and/or image features) that are representative of a given type of object having a relative size and thereby classify objects according to their type and relative size.
- allowing interaction data associated with a virtual object to be used for another virtual object of the same type so that both objects are able to provide a same user interaction is acceptable for many object types.
- a user interaction with an object of a given type should differ depending on a size of the object.
- An example of this could be a type of virtual weapon, such as a sword, in that a developer will typically create interactions for a large sword that are different from interactions for a smaller dagger due to their different properties.
- the graphics data comprises one or more images of the virtual environment
- the machine learning model 320 is trained to output data indicative of a scene classification for one or more of the images of the virtual environment
- the assignment circuitry 330 is configured to assign one or more of the virtual objects to one or more of the virtual object groups in dependence upon the scene classification and the object classification, wherein the virtual objects assigned to a same virtual object group each have a same scene classification and object classification.
- a virtual object may be created for a scene to have one or more user interactions and a virtual object of a same type may be created for a different type of scene to have one or more different user interactions.
- the machine learning model 320 can be trained using labelled training images, where the label provides an indication of a type of object that is present and also a type of scene.
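The scene-aware grouping can be sketched by keying groups on the pair (scene classification, object classification), so that objects of the same type in different scene types land in different groups; all names are illustrative:

```python
from collections import defaultdict

# Sketch of the scene-aware grouping above: the group key combines the
# predicted scene classification with the object classification, so a
# "sword" in a battle scene and a "sword" in a tavern scene fall into
# different virtual object groups. Names are illustrative.
def group_key(obj):
    return (obj["scene"], obj["classification"])

def assign(objects):
    groups = defaultdict(list)
    for obj in objects:
        groups[group_key(obj)].append(obj["name"])
    return dict(groups)

objects = [
    {"name": "sword_01", "scene": "battle", "classification": "sword"},
    {"name": "sword_02", "scene": "battle", "classification": "sword"},
    {"name": "sword_03", "scene": "tavern", "classification": "sword"},
]
print(assign(objects))
# {('battle', 'sword'): ['sword_01', 'sword_02'], ('tavern', 'sword'): ['sword_03']}
```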
- the label provides an indication of a type of object that is present and also a type of scene.
- FIG. 4 is a schematic flowchart illustrating a data processing method comprising steps 410 to 440, set out below.
- example embodiments can be implemented by computer software operating on a general purpose computing system such as a games machine.
- computer software which when executed by a computer, causes the computer to carry out any of the methods discussed above is considered as an embodiment of the present disclosure.
- embodiments of the disclosure are provided by a non-transitory, machine-readable storage medium which stores such computer software.
Abstract
Description
TABLE 1
| Interaction data | A | B | C |
|---|---|---|---|
| Audio Type | Y | N | N |
| Haptic Type | N | Y | N |
TABLE 2
| Interaction data | A | B | C |
|---|---|---|---|
| Audio Type | Y | Y | N |
| Haptic Type | N | Y | N |
- receiving (at a step 410) graphics data for at least a portion of a virtual environment, the graphics data including a plurality of virtual objects;
- outputting (at a step 420), by a machine learning model, data indicative of an object classification for one or more of the virtual objects in dependence upon the graphics data;
- assigning (at a step 430) one or more of the virtual objects to one or more virtual object groups in dependence upon the object classification for one or more of the virtual objects, wherein the virtual objects assigned to a same virtual object group each have a same object classification; and
- sharing (at a step 440) first interaction data associated with a first virtual object assigned to a virtual object group with a second virtual object assigned to the virtual object group to thereby permit use of the first interaction data for the second virtual object in response to a user interaction with the second virtual object in the virtual environment.
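The four steps above can be strung together as an end-to-end sketch, with a stub classifier standing in for the machine learning model 320; all data shapes and names are illustrative assumptions, not the patented implementation:

```python
# End-to-end sketch of steps 410-440 of the flowchart above. A stub
# classifier stands in for the trained model; shapes are illustrative.
def data_processing_method(objects, classifier):
    # Step 420: output an object classification for each virtual object.
    for obj in objects:
        obj["classification"] = classifier(obj["graphics"])
    # Step 430: assign objects with the same classification to one group.
    groups = {}
    for obj in objects:
        groups.setdefault(obj["classification"], []).append(obj)
    # Step 440: share a donor's interaction data with data-less members.
    for group in groups.values():
        donors = [o for o in group if o.get("interaction")]
        for obj in group:
            if donors and not obj.get("interaction"):
                obj["interaction"] = donors[0]["interaction"]
    return groups

stub_classifier = lambda mesh: "phone" if "phone" in mesh else "bottle"
objects = [
    {"graphics": "phone_mesh_a", "interaction": {"audio": "ring"}},
    {"graphics": "phone_mesh_b"},
    {"graphics": "bottle_mesh"},
]
data_processing_method(objects, stub_classifier)
print(objects[1]["interaction"])  # the second phone reuses the donor's data
```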
Claims (17)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2115791.2 | 2021-11-03 | ||
| GB2115791.2A GB2612767A (en) | 2021-11-03 | 2021-11-03 | Virtual reality interactions |
| GB2115791 | 2021-11-03 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20230133049A1 (en) | 2023-05-04 |
| US12236524B2 (en) | 2025-02-25 |
Family
ID=78828478
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/968,300 Active 2043-03-09 US12236524B2 (en) | 2021-11-03 | 2022-10-18 | Methods and apparatus for the creation and evaluation of virtual reality interactions |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US12236524B2 (en) |
| EP (1) | EP4177715B1 (en) |
| GB (1) | GB2612767A (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2612767A (en) | 2021-11-03 | 2023-05-17 | Sony Interactive Entertainment Inc | Virtual reality interactions |
| US12210670B2 (en) * | 2023-02-21 | 2025-01-28 | Chegg, Inc. | Systems and methods for cloned avatar management in a persistent virtual-reality environment |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140306891A1 (en) * | 2013-04-12 | 2014-10-16 | Stephen G. Latta | Holographic object feedback |
| US20180046245A1 (en) * | 2016-08-11 | 2018-02-15 | Microsoft Technology Licensing, Llc | Mediation of interaction methodologies in immersive environments |
| US20180359040A1 (en) | 2017-06-12 | 2018-12-13 | Gracenote, Inc. | Detecting and Responding to Rendering of Interactive Video Content |
| US20190139321A1 (en) * | 2017-11-03 | 2019-05-09 | Samsung Electronics Co., Ltd. | System and method for changing a virtual reality environment dynamically |
| US20200346114A1 (en) | 2019-04-30 | 2020-11-05 | Microsoft Technology Licensing, Llc | Contextual in-game element recognition and dynamic advertisement overlay |
| US20210027531A1 (en) | 2019-07-24 | 2021-01-28 | Electronic Arts Inc. | Terrain generation and population system |
| US20210089780A1 (en) | 2014-02-28 | 2021-03-25 | Second Spectrum, Inc. | Data processing systems and methods for enhanced augmentation of interactive video content |
| US20210248826A1 (en) | 2020-02-07 | 2021-08-12 | Krikey, Inc. | Surface distinction for mobile rendered augmented reality |
| EP4177715A1 (en) | 2021-11-03 | 2023-05-10 | Sony Interactive Entertainment Inc. | Virtual reality interactions |
-
2021
- 2021-11-03 GB GB2115791.2A patent/GB2612767A/en active Pending
-
2022
- 2022-10-13 EP EP22201400.3A patent/EP4177715B1/en active Active
- 2022-10-18 US US17/968,300 patent/US12236524B2/en active Active
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140306891A1 (en) * | 2013-04-12 | 2014-10-16 | Stephen G. Latta | Holographic object feedback |
| US20210089780A1 (en) | 2014-02-28 | 2021-03-25 | Second Spectrum, Inc. | Data processing systems and methods for enhanced augmentation of interactive video content |
| US20180046245A1 (en) * | 2016-08-11 | 2018-02-15 | Microsoft Technology Licensing, Llc | Mediation of interaction methodologies in immersive environments |
| US20180359040A1 (en) | 2017-06-12 | 2018-12-13 | Gracenote, Inc. | Detecting and Responding to Rendering of Interactive Video Content |
| US20190139321A1 (en) * | 2017-11-03 | 2019-05-09 | Samsung Electronics Co., Ltd. | System and method for changing a virtual reality environment dynamically |
| US20200346114A1 (en) | 2019-04-30 | 2020-11-05 | Microsoft Technology Licensing, Llc | Contextual in-game element recognition and dynamic advertisement overlay |
| US20210027531A1 (en) | 2019-07-24 | 2021-01-28 | Electronic Arts Inc. | Terrain generation and population system |
| US20210248826A1 (en) | 2020-02-07 | 2021-08-12 | Krikey, Inc. | Surface distinction for mobile rendered augmented reality |
| EP4177715A1 (en) | 2021-11-03 | 2023-05-10 | Sony Interactive Entertainment Inc. | Virtual reality interactions |
Non-Patent Citations (4)
| Title |
|---|
| Combined Search and Examination Report for corresponding GB Application No. GB2115791.2, 9 pages dated Apr. 29, 2022. |
| Examination Report for corresponding GB Application No. 2115791.2, 4 pages dated Dec. 20, 2023. |
| Examination Report for corresponding GB Application No. 2115791.2, 5 pages dated May 16, 2024. |
| Extended European Search Report for corresponding EP Application No. 22201400.3, 8 pages dated Mar. 10, 2023. |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4177715B1 (en) | 2025-05-14 |
| GB2612767A (en) | 2023-05-17 |
| US20230133049A1 (en) | 2023-05-04 |
| GB202115791D0 (en) | 2021-12-15 |
| EP4177715A1 (en) | 2023-05-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6700463B2 (en) | Filtering and parental control methods for limiting visual effects on head mounted displays | |
| JP6373920B2 (en) | Simulation system and program | |
| US10545339B2 (en) | Information processing method and information processing system | |
| US20180373328A1 (en) | Program executed by a computer operable to communicate with head mount display, information processing apparatus for executing the program, and method executed by the computer operable to communicate with the head mount display | |
| US10744411B2 (en) | Simulation system, processing method, and information storage medium | |
| US20180196506A1 (en) | Information processing method and apparatus, information processing system, and program for executing the information processing method on computer | |
| US12236524B2 (en) | Methods and apparatus for the creation and evaluation of virtual reality interactions | |
| US20190217197A1 (en) | Information processing method, apparatus, and system for executing the information processing method | |
| US20190114841A1 (en) | Method, program and apparatus for providing virtual experience | |
| US11086587B2 (en) | Sound outputting apparatus and method for head-mounted display to enhance realistic feeling of augmented or mixed reality space | |
| US20230141027A1 (en) | Graphics rendering | |
| JP6794390B2 (en) | Simulation system and program | |
| US11882172B2 (en) | Non-transitory computer-readable medium, information processing method and information processing apparatus | |
| US10319346B2 (en) | Method for communicating via virtual space and system for executing the method | |
| US20240115942A1 (en) | Customizable virtual reality scenes using eye tracking | |
| JP7232765B2 (en) | Simulation method and system | |
| US20240367035A1 (en) | Information processing method, information processing system and computer program | |
| US12100081B2 (en) | Customized digital humans and pets for meta verse | |
| US20240121569A1 (en) | Altering audio and/or providing non-audio cues according to listener's audio depth perception |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JENABZADEH, MANDANA;REEL/FRAME:061457/0482 Effective date: 20221018 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED;REEL/FRAME:061886/0035 Effective date: 20221021 Owner name: SONY INTERACTIVE ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JONES, MICHAEL LEE;REEL/FRAME:061886/0027 Effective date: 20221118 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED, GREAT BRITAIN Free format text: EMPLOYMENT AGREEMENT;ASSIGNOR:CAPPELLO, FABIO;REEL/FRAME:062006/0492 Effective date: 20160506 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |