US20160259409A1 - Device for object manipulating with multi-input sources - Google Patents

Device for object manipulating with multi-input sources

Info

Publication number
US20160259409A1
Authority
US
United States
Prior art keywords
attributes
virtual
status
state
virtual object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/147,069
Other versions
US9454222B1 (en)
Inventor
Hyun Jeong Lee
Seung Ju Han
Joon Ah Park
Wook Chang
Jeong Hwan Ahn
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. reassignment SAMSUNG ELECTRONICS CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AHN, JEONG HWAN, CHANG, WOOK, HAN, SEUNG JU, LEE, HYUN JEONG, PARK, JOON AH
Assigned to SAMSUNG ELECTRONICS CO., LTD. reassignment SAMSUNG ELECTRONICS CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AHN, JEONG HWAN, HAN, SEUNG JU, LEE, HYUN JEONG, PARK, JOON AH, CHANG, WOOK
Publication of US20160259409A1
Application granted
Publication of US9454222B1
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016: Input arrangements with force or tactile feedback as computer generated output to the user
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/58: Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20: Input arrangements for video game devices
    • A63F13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25: Output arrangements for video game devices
    • A63F13/28: Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • A63F13/285: Generating tactile feedback signals via the game input device, e.g. force feedback
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55: Controlling game characters or game objects based on the game progress
    • A63F13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • G06T13/20: 3D [Three Dimensional] animation
    • G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10: Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1037: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted for converting control signals received from the game device into a haptic signal, e.g. using force feedback
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55: Details of game data or player data management
    • A63F2300/5546: Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/5553: Details of game data or player data management using player registration data, e.g. identification, account, preferences, game history user representation in the game field, e.g. avatar
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/65: Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6607: Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60: Methods for processing data by generating or executing the game program
    • A63F2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6623: Methods for processing data by generating or executing the game program for rendering three dimensional images for animating a group of characters
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082: Virtual reality

Abstract

An object manipulation apparatus and method model an object for manipulation of a virtual object, suggest an object operation schema, define a physical and mental condition of an avatar, and set motion data of the avatar.

Description

    TECHNICAL FIELD
  • The present invention relates to a method for modeling a structure of a virtual object and also modeling an avatar in a virtual world.
  • BACKGROUND ART
  • Recent research has rapidly increased user interest in interaction between humans and computers. Virtual reality (VR) technology is being developed and applied in various fields, particularly entertainment, where it has been commercialized, for example, in the form of 3-dimensional (3D) virtual online communities such as Second Life and 3D game stations. A 3D game station offers an innovative gaming experience through a 3D input device. A sensor-based multi-modal interface may be applied to a VR system to achieve control of a complicated 3D virtual world. Here, a connection between the real world and the virtual world may be achieved by a virtual to real-representation of sensory effect (VR-RoSE) engine and a real to virtual-representation of sensory effect (RV-RoSE) engine.
  • As VR technology develops, there is a need for a method that more effectively reflects motion in the real world in the manipulation of an object of the virtual world and in the navigation of an avatar in the virtual world.
  • DISCLOSURE OF INVENTION
  • According to an aspect of the present invention, there is provided an object manipulation device including an object modeling unit to set a structure of a virtual object, and an object operating unit to select the virtual object and control an object operation of the selected virtual object.
  • The virtual object may include at least one selected from general information on the virtual object, an object identifier for identification of the virtual object in a virtual world, and object attributes including at least one attribute of the virtual object.
  • The object identifier may include at least one selected from an object ID allocated to the virtual object, an object state for recognition of a state of the virtual object, and modifiable attributes for determining modifiability of attributes of the virtual object.
  • The object attributes may include at least one selected from spatial attributes, physical attributes, temporal attributes, and combinational attributes.
  • The spatial attributes may include at least one of a shape, a location, and a size of the virtual object. The physical attributes may include at least one of a tactile sensation, a pressure, a vibration, and a temperature of the virtual object, and the temporal attributes may include at least one of a duration and a motion of the virtual object.
  • The object operating unit may control at least one performance of selection of the virtual object, collection of object attributes of the virtual object, modification of the object attributes of the virtual object, and removal and storing of the object attributes of the virtual object.
  • The object manipulation device may include an avatar structure setting unit to set a structure of an avatar, and an avatar navigation unit to control a motion of the avatar corresponding to a motion of a user in a real world.
  • The avatar structure setting unit may include an avatar identifying unit to set information for identifying the avatar, an avatar condition managing unit to set a physical condition and a mental condition of the avatar, and a motion managing unit to manage the motion of the avatar.
  • The avatar navigation unit may include a general information managing unit to manage general information of the avatar, and a control data managing unit to control the motion of the avatar.
  • The control data managing unit may manage at least one of a movement state, a movement direction, and a speed of the avatar.
  • According to one embodiment of the present invention, there is provided an object manipulation apparatus and method capable of modeling an object for manipulation of a virtual object and effectively reflecting a motion of a real world to manipulation of an object of a virtual world.
  • According to one embodiment of the present invention, there is provided an object manipulation apparatus and method capable of effectively navigating an avatar in a virtual world by determining a physical and mental condition of the avatar and setting motion data of the avatar.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 illustrates a block diagram of an object manipulation apparatus according to an embodiment of the present invention;
  • FIG. 2 illustrates a diagram of a system connecting a virtual world with a real world, according to an example embodiment;
  • FIG. 3 illustrates a diagram describing an object modeling operation according to an example embodiment;
  • FIG. 4 illustrates a diagram describing an object operation model according to an example embodiment;
  • FIG. 5 illustrates a diagram describing an object operation model according to another example embodiment;
  • FIG. 6 illustrates a diagram describing a process of manipulating an object associated with a real to virtual-representation of sensory effect (RV-RoSE) engine according to an example embodiment;
  • FIG. 7 illustrates a block diagram describing an object manipulation apparatus according to another example embodiment;
  • FIG. 8 illustrates a diagram showing a countenance and a pose of an avatar, which are determined by an avatar condition managing unit, according to an example embodiment;
  • FIG. 9 illustrates a diagram describing metadata control for avatar navigation, according to an example embodiment; and
  • FIG. 10 illustrates a diagram describing an avatar navigation process in association with an RV-RoSE engine, according to an example embodiment.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
  • FIG. 1 illustrates a block diagram of an object manipulation apparatus 100 according to an embodiment of the present invention.
  • Referring to FIG. 1, the object manipulation apparatus 100 may include an object modeling unit 110 to set a structure of a virtual object, and an object operating unit 120 to select the virtual object and control an object operation of the selected virtual object. Here, the object modeling refers to a process of defining an object model that includes an identifier and attributes for manipulation of the virtual object.
  • FIG. 2 illustrates a diagram of a system connecting a virtual world with a real world, according to an example embodiment. That is, FIG. 2 shows the system architecture of sensor input metadata and virtual element metadata. A connection between a real world 220 and a virtual world 210 may be achieved via a virtual to real-representation of sensory effect (VR-RoSE) engine 231 and a real to virtual-representation of sensory effect (RV-RoSE) engine 232. Here, the virtual element metadata refers to metadata related to structures of objects and avatars present in a virtual space. The sensor input metadata refers to metadata for a control function, such as navigation and manipulation of the avatars and the objects, in a multimodal input device. The object modeling and the object operation will be described in further detail below. The object modeling relates to the virtual element metadata, while the object operation relates to the sensor input metadata.
  • <Object Modeling (OM)>
  • The OM, including the identifier and the attributes, may be defined for manipulation of the virtual object. All objects in the virtual world 210 need a unique identifier so that the controlling software can distinguish between them. In addition, all the objects may include spatial, physical, and temporal attributes to provide reality. Hereinafter, an example of the object modeling will be described with reference to FIG. 3.
  • FIG. 3 illustrates a diagram describing an object modeling operation according to an example embodiment.
  • FIG. 3 shows an object 310 having a predetermined shape and an object 320 having a predetermined tactile sensation. The object modeling may define shape attributes and tactile attributes of the objects.
  • The virtual world may provide a selection effect and a manipulation effect. Variables related to the effects may include a size, a shape, a tactile sensation, a density, a motion, and the like.
  • A hierarchical diagram of the object modeling for the selection and the manipulation is shown below.
  • The object may include general information, an object identifier, and object attributes. The general information may contain an overall description of the object.
  • The object identifier is used for discrimination of the object in the virtual world. The object identifier may include an object ID, an object state indicating a present state by selected, selectable, and unselectable modes, and modifiable attributes indicating modifiability of the attributes.
  • The object attributes may include spatial attributes such as a shape, a location, and a size, physical attributes such as a tactile sensation, a pressure or force, a vibration, and a temperature, temporal attributes such as a duration and a motion, and combinational attributes including a combination of the aforementioned attributes.
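  • As a concrete illustration of this hierarchy, the sketch below shows a hypothetical metadata instance. The element names (Object, GeneralInfo, ObjectIdentifier, ObjectAttributes, and their children) are inferred from the prose above rather than copied from the patent's schema listings, and the attribute values are invented for the example.

```xml
<!-- Hypothetical instance of the object model (OM) hierarchy described above;
     nothing here is normative, since the patent's actual syntax listings are
     not reproduced on this page. -->
<Object>
  <GeneralInfo>A red rubber ball</GeneralInfo>
  <ObjectIdentifier>
    <ObjectID>obj-0042</ObjectID>
    <!-- present state: selected, selectable, or unselectable -->
    <ObjectState>selectable</ObjectState>
    <ModifiableAttributes>true</ModifiableAttributes>
  </ObjectIdentifier>
  <ObjectAttributes>
    <SpatialAttributes shape="sphere" location="1.0 0.5 2.0" size="0.2"/>
    <PhysicalAttributes tactile="soft" pressure="low" vibration="none" temperature="20"/>
    <TemporalAttributes duration="unbounded" motion="bouncing"/>
  </ObjectAttributes>
</Object>
```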
  • <Object Operation (OO)>
  • The object operation may include collection of information through an interface, modification of the object attributes, and removal and restoration of the object. Hereinafter, an example object operation will be described with reference to FIG. 4.
  • FIG. 4 illustrates a diagram describing an OO model according to an example embodiment.
  • FIG. 4 illustrates a process of generating a virtual car. The virtual car may be generated by using initial models and revising sizes, locations, and shapes of the initial models. That is, the virtual car may be generated as desired by revising the sizes, locations, and shapes of the initial models through sequential operations 410, 420, and 430.
  • Reality may be provided to the virtual object according to a weight, a roughness, and the like of the virtual object.
  • For example, FIG. 5 shows various states of a hand grasping boxes of various weights. That is, with respect to objects having the same shape, various motions may be expressed according to weights, masses, and the like. FIG. 5 also shows various deformed states of a rubber ball being grasped by a hand. That is, the object may be deformed according to forces, pressures, and the like applied to the object.
  • A hierarchical diagram of the OO will be described in further detail below.
  • The OO may include object selection to select an object desired to be deformed by a user, and object manipulation to modify the attributes of the selected object. The object manipulation may perform at least one of collection of object attributes of the virtual object, modification of the object attributes of the virtual object, removal and storing of the object attributes of the virtual object. Accordingly, the object manipulation may include ObtainTargetInfo to obtain an ID of the selected object and existing attributes, ModifyAttributes to modify the object attributes, and Remove/RestoreObject to remove or restore the object attributes.
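  • Under the same assumptions, a hypothetical OO instance might look as follows. Note that the name Remove/RestoreObject from the prose is split into two elements here, since a slash cannot appear in an XML element name.

```xml
<!-- Hypothetical object operation (OO) instance; element names follow the prose. -->
<ObjectOperations>
  <!-- object selection: pick the object the user wants to deform -->
  <ObjectSelection targetID="obj-0042"/>
  <ObjectManipulation>
    <!-- obtain the ID and existing attributes of the selected object -->
    <ObtainTargetInfo/>
    <!-- modify the object attributes, e.g. enlarge the object -->
    <ModifyAttributes>
      <SpatialAttributes size="0.4"/>
    </ModifyAttributes>
    <!-- remove or restore the object attributes -->
    <RestoreObject/>
  </ObjectManipulation>
</ObjectOperations>
```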
  • Hereinafter, the system architecture for the object manipulation will be described.
  • The object manipulation may include operations of selecting a target object according to a user preference, extracting the object attributes of the selected object, modifying the extracted object attributes, storing the modified object attributes, and releasing the object.
  • FIG. 6 illustrates a diagram describing object manipulation in association with an RV-RoSE engine according to an example embodiment.
  • Referring to FIG. 6, the whole system includes a virtual world engine 610, a real world interface 630, a sensor adaptation preference 620, and an RV-RoSE engine 640.
  • The virtual world engine 610 is a system for connecting with a virtual world such as Second Life. The real world interface 630 refers to a terminal enabling a user to control the virtual world. For example, the real world interface 630 includes a 2D/3D mouse, a keyboard, a joystick, a motion sensor, a heat sensor, a camera, a haptic glove, and the like.
  • The sensor adaptation preference 620 refers to a part to add an intention of the user, for example, adjustment of a range of data values.
  • When the user selects the virtual object through any of the various types of the real world interface 630, ID information of the selected virtual object may be input to an importer of the RV-RoSE engine 640. Additionally, spatial, physical, and temporal information is input to a sub object variable through an object manipulation controller. When the user modifies the object attributes through the real world interface 630, the object manipulation controller of the RV-RoSE engine 640 adjusts and stores the values of the corresponding variables. Next, the modified object attributes may be transmitted to the virtual world engine 610 through an object information exporter.
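  • This round trip through the RV-RoSE engine 640 can be pictured as three illustrative messages, sketched below. The message tags are assumptions made for this example and are not defined in the text.

```xml
<!-- 1. Importer: a selection arrives from the real world interface 630. -->
<SensorInputCommand device="haptic-glove">
  <SelectObject objectID="obj-0042"/>
</SensorInputCommand>

<!-- 2. Object manipulation controller: the sub object variables are adjusted
     and stored; here, a change to a physical attribute. -->
<SensorInputCommand device="haptic-glove">
  <ModifyAttributes objectID="obj-0042">
    <PhysicalAttributes tactile="rough"/>
  </ModifyAttributes>
</SensorInputCommand>

<!-- 3. Object information exporter: the modified attributes are sent on to
     the virtual world engine 610. -->
<Object>
  <ObjectIdentifier>
    <ObjectID>obj-0042</ObjectID>
  </ObjectIdentifier>
  <ObjectAttributes>
    <PhysicalAttributes tactile="rough"/>
  </ObjectAttributes>
</Object>
```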
  • <Metadata Schema>
  • Hereinafter, metadata schema, syntax, and semantics related to the object modeling and the object operation will be described.
  • <ObjectModel (OM) Schema>
  • 1. OM
  • The OM is a basic element of the virtual element metadata.
  • Syntax
  • 2. ObjectIdentifier
  • Syntax
  • Semantic
  • 3. ObjectAttributes
  • Syntax
  • Semantic
  • 4. SpatialAttributes
  • Syntax
  • Semantic
  • 5. PhysicalAttributes
  • Syntax
  • Semantic
  • 6. TemporalAttributes
  • Syntax
  • Semantic
  • <ObjectOperations (OO) Schema>
  • 1. OO
  • Syntax
  • Semantic
  • 2. ObjectManipulation
  • Syntax
  • Semantic
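  • The Syntax and Semantic entries above are placeholders; the patent's actual schema listings are not reproduced on this page. Purely as a sketch of what an OM syntax might resemble, assuming MPEG-V-style XML Schema conventions and with every name below inferred from the semantics in the text:

```xml
<!-- Assumed sketch only: all names and types are inferred from the prose,
     not quoted from the patent's listings. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="Object">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="GeneralInfo" type="xsd:string" minOccurs="0"/>
        <xsd:element name="ObjectIdentifier" type="ObjectIdentifierType"/>
        <xsd:element name="ObjectAttributes" type="ObjectAttributesType" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <xsd:complexType name="ObjectIdentifierType">
    <xsd:sequence>
      <xsd:element name="ObjectID" type="xsd:string"/>
      <xsd:element name="ObjectState" type="ObjectStateType"/>
      <xsd:element name="ModifiableAttributes" type="xsd:boolean"/>
    </xsd:sequence>
  </xsd:complexType>

  <!-- the three modes named for the object state -->
  <xsd:simpleType name="ObjectStateType">
    <xsd:restriction base="xsd:string">
      <xsd:enumeration value="selected"/>
      <xsd:enumeration value="selectable"/>
      <xsd:enumeration value="unselectable"/>
    </xsd:restriction>
  </xsd:simpleType>

  <!-- the four attribute groups named in the text -->
  <xsd:complexType name="ObjectAttributesType">
    <xsd:sequence>
      <xsd:element name="SpatialAttributes" type="xsd:anyType" minOccurs="0"/>
      <xsd:element name="PhysicalAttributes" type="xsd:anyType" minOccurs="0"/>
      <xsd:element name="TemporalAttributes" type="xsd:anyType" minOccurs="0"/>
      <xsd:element name="CombinationalAttributes" type="xsd:anyType" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>
```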
  • FIG. 7 illustrates a block diagram describing an object manipulation apparatus 700 according to another example embodiment.
  • Referring to FIG. 7, the object manipulation apparatus 700 includes an avatar structure setting unit 710 to set a structure of an avatar, and an avatar navigation unit 720 to control a motion of the avatar corresponding to a motion of the user of the real world. Here, the avatar structure setting may be related to the virtual element metadata whereas avatar motion control, that is, navigation control may be related to a sensor input metadata.
  • <Avatar>
  • Virtual elements may include avatars, objects, geometries, cameras, light conditions, and the like. The present embodiment will define the structure of the avatar.
  • An avatar represents another identity of the user. In Second Life or a 3D game, the avatar needs to have attributes including a physical condition and a mental condition since the avatar behaves in different manners according to the physical condition and the mental condition of a user. Also, motion patterns of the avatar may be varied by combining the physical condition and the mental condition. For combination of information on the physical condition and the mental condition, the avatar may include parameters related to the physical condition and the mental condition.
  • For example, first, AvatarCondition may be defined as a main element for the physical condition and the mental condition of the avatar. The AvatarCondition may include PhysicalCondition and MentalCondition as sub-parameters for the physical condition and the mental condition, respectively.
  • A countenance and a pose of the avatar may be determined by values of the AvatarCondition, which will be described in detail with reference to FIG. 8.
  • FIG. 8 illustrates a diagram showing the countenance and the pose of the avatar, which are determined by an avatar condition managing unit, according to an example embodiment.
  • Referring to FIG. 8, various countenances and poses, such as an expressionless face 810, a happy face 820, and a sitting pose 830, may be determined according to the values of the AvatarCondition.
  • To generate various behavior patterns, the avatar metadata may also include AvatarMotionData. The AvatarMotionData may indicate a current motion state such as an on and off state when motion data is allocated, and a degree of reaction of the avatar with respect to the motion, such as a reaction range.
  • Accordingly, a hierarchical diagram of avatar information may be expressed as follows.
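  • For example, such a hierarchy could be serialized as the following hypothetical instance; the values and the avatarID field are invented for illustration.

```xml
<!-- Hypothetical avatar metadata instance based on the elements named above. -->
<Avatar avatarID="user-77">
  <AvatarIdentifier>Alice-in-VW</AvatarIdentifier>
  <AvatarCondition>
    <PhysicalCondition>tired</PhysicalCondition>
    <MentalCondition>happy</MentalCondition>
  </AvatarCondition>
  <!-- on/off state of allocated motion data and the degree (range) of reaction -->
  <AvatarMotionData state="on" reactionRange="0.8"/>
</Avatar>
```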
  • <Navigation Control>
  • Avatar navigation is a basic operation among the control operations for a 3D virtual world. A multi-modal interface is capable of recognizing context information related to a user or the user's environment, and also of recognizing information necessary for the navigation. When the sensor input of the multi-modal interface is systematized, the avatar navigation may be expressed in various manners.
  • FIG. 9 illustrates a diagram describing metadata control for the avatar navigation, according to an example embodiment.
  • Referring to FIG. 9, the avatar may use MoveState to check a motion such as walking, running, and the like. Here, walking and running may be discriminated by speed. RefMotionID provides information on which motion is performed simultaneously with the avatar navigation. In addition, various situations may be made navigable by using the context information recognized by the multi-modal interface.
  • A hierarchical diagram of navigation control information with respect to the sensor input may be expressed as follows.
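  • A hypothetical serialization of that control information is sketched below; MoveState and RefMotionID are named in the text, while the remaining fields are assumptions.

```xml
<!-- Hypothetical navigation control instance. -->
<Navigation>
  <NavigationDescription startPosition="0 0 0"/>
  <!-- walking and running are discriminated by speed -->
  <MoveState>walking</MoveState>
  <Speed>1.4</Speed>
  <Direction>0 0 1</Direction>
  <!-- a motion performed simultaneously with the navigation, e.g. waving -->
  <RefMotionID>motion-wave-hand</RefMotionID>
</Navigation>
```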
  • FIG. 10 illustrates a diagram describing an avatar navigation process in association with an RV-RoSE engine 1040, according to an example embodiment.
  • Referring to FIG. 10, the whole system may include a virtual world engine 1010, a real world interface 1030, a sensor adaptation preference 1020, and the RV-RoSE engine 1040.
  • The virtual world engine 1010 is a system for connection with the virtual world such as Second Life. The real world interface 1030 refers to a terminal enabling a user to control the virtual world. The sensor adaptation preference 1020 may add an intention of the user, for example, adjustment of a range of data values.
  • When the user selects an avatar through any of the various types of the real world interface 1030, ID information of the selected avatar may be input to an importer of the RV-RoSE engine 1040. Additionally, navigation information is input to a sub navigation variable through an avatar navigation controller. When the user modifies a navigation value through the real world interface 1030, the avatar navigation controller of the RV-RoSE engine 1040 adjusts and stores the value of the corresponding variable. Next, the modified navigation value of the avatar may be transmitted to the virtual world engine 1010 through an avatar information exporter.
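  • As with the object manipulation case, this navigation round trip can be sketched as an illustrative message; the tags are assumptions, not elements defined by the text.

```xml
<!-- Importer input: avatar selection plus a navigation update captured from a
     motion sensor. The avatar navigation controller would adjust and store the
     corresponding variable before the exporter forwards it to the virtual
     world engine. -->
<SensorInputCommand device="motion-sensor">
  <SelectAvatar avatarID="user-77"/>
  <Navigation>
    <MoveState>running</MoveState>
    <Speed>3.0</Speed>
  </Navigation>
</SensorInputCommand>
```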
  • <Virtual Element Schema>
  • 1. VE
  • VE is a basic element of virtual elements.
  • syntax
  • semantics
  • 2. Avatar
  • Avatar contains information on all parameters applicable to characteristics of the avatar.
  • syntax
  • semantics
  • 3. AvatarIdentifier
  • AvatarIdentifier contains a specific type of information on avatar identification.
  • syntax
  • semantics
  • 4. AvatarMotionData
  • AvatarMotionData contains a specific type of information on an avatar motion.
  • syntax
  • semantics
  • 5. AvatarCondition
  • AvatarCondition contains a specific type of condition information of the avatar. AvatarCondition includes PhysicalCondition and MentalCondition.
  • syntax
  • semantics
  • <Navigation Control Schema>
  • 1. Navigation
  • Navigation contains information on all control parameters and contextual states of the control parameters.
  • syntax
  • semantics
  • 2. NavigationDescription
  • NavigationDescription contains information for an initial navigation state.
  • syntax
  • semantics
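  • As with the OM schema, the syntax and semantics entries above are placeholders. A sketch of how the Avatar and Navigation elements might be declared follows, again with all names assumed rather than quoted from the patent.

```xml
<!-- Assumed declarations inferred from the semantics above; the actual
     syntax listings are omitted from this page. -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="Avatar">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="AvatarIdentifier" type="xsd:string"/>
        <xsd:element name="AvatarMotionData" type="xsd:string" minOccurs="0"/>
        <xsd:element name="AvatarCondition" type="AvatarConditionType" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

  <!-- AvatarCondition includes PhysicalCondition and MentalCondition -->
  <xsd:complexType name="AvatarConditionType">
    <xsd:sequence>
      <xsd:element name="PhysicalCondition" type="xsd:string"/>
      <xsd:element name="MentalCondition" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>

  <xsd:element name="Navigation">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="NavigationDescription" type="xsd:string"/>
        <xsd:element name="MoveState" type="xsd:string"/>
        <xsd:element name="RefMotionID" type="xsd:string" minOccurs="0"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```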
  • As described above, a motion in the real world may be effectively reflected to manipulation of a virtual object of the virtual world by modeling an object for manipulation of the virtual object and suggesting object operation schema.
  • In addition, an avatar in the virtual world may be effectively navigated by defining a physical condition and a mental condition of the avatar and setting motion data of the avatar.
  • The methods according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of the example embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard discs, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. The media may be transfer media such as optical lines, metal lines, or waveguides including a carrier wave for transmitting a signal designating the program command and the data construction. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa.
  • Although a few embodiments of the present invention have been shown and described, the present invention is not limited to the described embodiments. Instead, it would be appreciated by those skilled in the art that changes may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (20)

1. An object manipulation device comprising:
an object modeling unit configured by a processor to set a structure of a virtual object; and
an object operating unit configured by the processor to select the virtual object and control an object operation of the selected virtual object,
wherein a model of the virtual object comprises general information, an object identifier, and object attributes,
wherein the object identifier comprises an object identification, an object state, and modifiable attributes,
wherein the object state comprises available status, selected status, and unavailable status,
wherein the modifiable attributes comprise available status and unavailable status, and
wherein the object operating unit is responsive to one of the statuses of the object state and one of the statuses of the modifiable attributes.
2. The object manipulation device of claim 1, wherein the virtual object comprises at least one of general information on the virtual object, an object identifier for identification of the virtual object in a virtual world, and object attributes comprising at least one attribute of the virtual object.
3. The object manipulation device of claim 2, wherein the object identifier comprises at least one of an object ID allocated to the virtual object, an object state for recognition of a state of the virtual object, and modifiable attributes for determining modifiability of attributes of the virtual object.
4. The object manipulation device of claim 2, wherein the object attributes comprise at least one of spatial attributes, physical attributes, temporal attributes, and combinational attributes.
5. The object manipulation device of claim 4, wherein the spatial attributes comprise at least one of a shape, a location, and a size of the virtual object, the physical attributes comprise at least one of a tactile sensation, a pressure, a vibration, and a temperature of the virtual object, and the temporal attributes comprise at least one of a duration and a motion of the virtual object.
6. The object manipulation device of claim 1, wherein the object operating unit controls at least one performance of selection of the virtual object, collection of object attributes of the virtual object, modification of the object attributes of the virtual object, and removal and storing of the object attributes of the virtual object.
7-11. (canceled)
12. The object manipulation device of claim 1, further comprising:
a computer comprising the processor,
wherein the object attributes comprise spatial attributes, physical attributes, temporal attributes, and combinations,
wherein the spatial attributes comprise shape, location, and size,
wherein the physical attributes comprise tactile, pressure or force, vibration, and temperature, and
wherein the temporal attributes comprise duration and motion.
13. The object manipulation device of claim 12, wherein the virtual object comprises at least one of general information on the virtual object, an object identifier for identification of the virtual object in a virtual world, and object attributes comprising at least one attribute of the virtual object.
14. The object manipulation device of claim 13, wherein the object identifier comprises at least one of an object ID allocated to the virtual object, an object state for recognition of a state of the virtual object, and modifiable attributes for determining modifiability of attributes of the virtual object.
15. The object manipulation device of claim 13, wherein the object attributes comprise at least one of spatial attributes, physical attributes, temporal attributes, and combinational attributes.
16. An object manipulation method comprising:
setting, by an object modeling unit of a computer, a structure of a virtual object; and
selecting, by an object operating unit of the computer, the virtual object and controlling an object operation of the selected virtual object,
wherein a model of the virtual object comprises general information, an object identifier, and object attributes,
wherein the object identifier comprises an object identification, an object state, and modifiable attributes,
wherein the object state comprises available status, selected status, and unavailable status,
wherein the modifiable attributes comprise available status and unavailable status, and
wherein the object operating unit is responsive to one of the statuses of the object state and one of the statuses of the modifiable attributes.
17. The object manipulation method of claim 16, wherein the virtual object comprises at least one of general information on the virtual object, an object identifier for identification of the virtual object in a virtual world, and object attributes comprising at least one attribute of the virtual object.
18. The object manipulation method of claim 17, wherein the object identifier comprises at least one of an object ID allocated to the virtual object, an object state for recognition of a state of the virtual object, and modifiable attributes for determining modifiability of attributes of the virtual object.
19. The object manipulation method of claim 17, wherein the object attributes comprise at least one of spatial attributes, physical attributes, temporal attributes, and combinational attributes.
20. A non-transitory computer-readable recording medium storing a program that controls a computer to execute the method of claim 16.
21. An object manipulation device comprising:
a processor configured to
import an object identifier from a virtual world engine based on a predefined object model, wherein the predefined object model defines that the object identifier comprises an object state and modifiable attributes, wherein the object state comprises available status, selected status, and unavailable status, and wherein the modifiable attributes comprise available status and unavailable status;
import object information from the virtual world engine based on the predefined object model, wherein the predefined object model further defines that the object information comprises spatial attributes, physical attributes, and temporal attributes;
manipulate the object information by modifying, removing, restoring, or combining at least one of the spatial attributes, the physical attributes, and the temporal attributes; and
export the manipulated object information to the virtual world engine.
22. The object manipulation device of claim 21, wherein the processor is further configured to:
receive a first sensor input command from a sensor, wherein the first sensor input command corresponds to a selecting operation;
import the object identifier, based on the first sensor input command;
check the object state from the imported object identifier to determine whether the object state has an available status, a selected status, or an unavailable status;
responsive to the object state having an available status, set the object state to indicate a selected status;
import the object information;
receive a second sensor input command from the sensor, wherein the second sensor input command corresponds to a manipulating operation;
check the modifiable attributes from the imported object identifier to determine whether the modifiable attributes have an available status or an unavailable status; and
responsive to the modifiable attributes having an available status, manipulate the object information based on the second sensor input command, and export the manipulated object information.
23. The object manipulation device of claim 22, wherein:
the spatial attributes comprise shape, location, and size;
the physical attributes comprise tactile, pressure or force, vibration, and temperature; and
the temporal attributes comprise duration and motion.
24. The object manipulation device of claim 22, wherein, if the object state indicates an unavailable status, the processor is further configured not to perform the selecting operation; and if the modifiable attributes indicate an unavailable status, the processor is further configured not to perform the manipulating operation.
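Taken together, claims 21 through 24 recite a guarded select-then-manipulate protocol between the processor and the virtual world engine. The following sketch shows one plausible reading; the engine API (import_identifier, import_object_info, export_object_info) is assumed for illustration and does not come from the patent:

    def handle_object_commands(engine, object_id, manipulate):
        # Hypothetical flow for claims 21-24. `manipulate` stands in for the
        # operation carried by the second sensor input command of claim 22.
        identifier = engine.import_identifier(object_id)   # claim 22: first command selects

        if identifier["state"] == "unavailable":           # claim 24: selection refused
            return None
        if identifier["state"] == "available":
            identifier["state"] = "selected"               # claim 22: mark as selected

        info = engine.import_object_info(object_id)        # spatial/physical/temporal attributes

        if identifier["modifiable"] == "unavailable":      # claim 24: manipulation refused
            return info
        manipulated = manipulate(info)                     # claim 22: second command manipulates
        engine.export_object_info(object_id, manipulated)  # claims 21-22: export the result
        return manipulated

The two early returns capture claim 24's negative limitations: an unavailable object state blocks the selecting operation, and unavailable modifiable attributes block the manipulating operation.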
US13/147,069 2009-01-29 2010-01-29 Device for object manipulating with multi-input sources Expired - Fee Related US9454222B1 (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
KR10-2009-0007181 2009-01-29
KR10-2009-0007182 2009-01-29
KR1020090007181 2009-01-29
KR1020090007182 2009-01-29
KR20090007181 2009-01-29
KR20090007182 2009-01-29
KR1020100008110 2010-01-28
KR10-2010-0008110 2010-01-28
KR1020100008110A KR20100088094A (en) 2009-01-29 2010-01-28 Device for object manipulation with multi-input sources
PCT/KR2010/000571 WO2010087654A2 (en) 2009-01-29 2010-01-29 Object-manipulation device using multiple input sources

Publications (2)

Publication Number Publication Date
US20160259409A1 (en) 2016-09-08
US9454222B1 US9454222B1 (en) 2016-09-27

Family

ID=42754397

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/147,069 Expired - Fee Related US9454222B1 (en) 2009-01-29 2010-01-29 Device for object manipulating with multi-input sources

Country Status (4)

Country Link
US (1) US9454222B1 (en)
EP (1) EP2393064A4 (en)
KR (1) KR20100088094A (en)
WO (1) WO2010087654A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150241959A1 (en) * 2013-07-12 2015-08-27 Magic Leap, Inc. Method and system for updating a virtual world
US20150262005A1 (en) * 2012-11-08 2015-09-17 Sony Corporation Information processing apparatus, information processing method, and program
US11291919B2 (en) * 2017-05-07 2022-04-05 Interlake Research, Llc Development of virtual character in a learning game

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015004670A1 (en) * 2013-07-10 2015-01-15 Real View Imaging Ltd. Three dimensional user interface
KR20180033822A (en) * 2016-09-26 2018-04-04 포항공과대학교 산학협력단 Method and device for generating a vibration effect from physical simulation
KR20220086873A (en) * 2020-12-17 2022-06-24 한국전자통신연구원 Device and method for generating npc capable of adjusting skill level

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU1328597A (en) * 1995-11-30 1997-06-19 Virtual Technologies, Inc. Tactile feedback man-machine interface device
GB2404315A (en) 2003-07-22 2005-01-26 Kelseus Ltd Controlling a virtual environment
US20060250401A1 (en) 2005-05-06 2006-11-09 Patrick Pannese Systems and methods for generating 3D simulations
KR100571832B1 (en) 2004-02-18 2006-04-17 삼성전자주식회사 Method and apparatus for integrated modeling of 3D object considering its physical features
JP4297804B2 (en) 2004-02-19 2009-07-15 任天堂株式会社 GAME DEVICE AND GAME PROGRAM
JP2007536634A (en) * 2004-05-04 2007-12-13 フィッシャー−ローズマウント・システムズ・インコーポレーテッド Service-oriented architecture for process control systems
US20050285853A1 (en) * 2004-06-29 2005-12-29 Ge Medical Systems Information Technologies, Inc. 3D display system and method
WO2006062948A2 (en) * 2004-12-06 2006-06-15 Honda Motor Co., Ltd. Interface for robot motion control
US8683386B2 (en) 2006-10-03 2014-03-25 Brian Mark Shuster Virtual environment for computer game
US8902227B2 (en) * 2007-09-10 2014-12-02 Sony Computer Entertainment America Llc Selective interactive mapping of real-world objects to create interactive virtual-world objects

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150262005A1 (en) * 2012-11-08 2015-09-17 Sony Corporation Information processing apparatus, information processing method, and program
US10438058B2 (en) * 2012-11-08 2019-10-08 Sony Corporation Information processing apparatus, information processing method, and program
US20150241959A1 (en) * 2013-07-12 2015-08-27 Magic Leap, Inc. Method and system for updating a virtual world
US10641603B2 (en) * 2013-07-12 2020-05-05 Magic Leap, Inc. Method and system for updating a virtual world
US10767986B2 (en) 2013-07-12 2020-09-08 Magic Leap, Inc. Method and system for interacting with user interfaces
US10866093B2 (en) 2013-07-12 2020-12-15 Magic Leap, Inc. Method and system for retrieving data in response to user input
US11029147B2 (en) 2013-07-12 2021-06-08 Magic Leap, Inc. Method and system for facilitating surgery using an augmented reality system
US11060858B2 (en) 2013-07-12 2021-07-13 Magic Leap, Inc. Method and system for generating a virtual user interface related to a totem
US11221213B2 (en) 2013-07-12 2022-01-11 Magic Leap, Inc. Method and system for generating a retail experience using an augmented reality system
US11656677B2 (en) 2013-07-12 2023-05-23 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
US11291919B2 (en) * 2017-05-07 2022-04-05 Interlake Research, Llc Development of virtual character in a learning game

Also Published As

Publication number Publication date
EP2393064A2 (en) 2011-12-07
EP2393064A4 (en) 2013-07-31
WO2010087654A2 (en) 2010-08-05
WO2010087654A3 (en) 2010-11-04
US9454222B1 (en) 2016-09-27
KR20100088094A (en) 2010-08-06

Similar Documents

Publication Publication Date Title
JP7186913B2 (en) Interacting with 3D Virtual Objects Using Pose and Multiple DOF Controllers
US11340694B2 (en) Visual aura around field of view
AU2017370555B2 (en) Virtual user input controls in a mixed reality environment
US11423624B2 (en) Mapping and localization of a passable world
US9454222B1 (en) Device for object manipulating with multi-input sources
CN102301311B (en) Standard gestures
CA3087775A1 (en) Eclipse cursor for virtual content in mixed reality displays
EP3072033A1 (en) Motion control of a virtual environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, HYUN JEGON;HAN, SEUNG JU;PARK, JOON AH;AND OTHERS;REEL/FRAME:027345/0771

Effective date: 20111004

AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, HYUN JEONG;HAN, SEUNG JU;PARK, JOON AH;AND OTHERS;SIGNING DATES FROM 20110923 TO 20111004;REEL/FRAME:027943/0361

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20200927