EP2126708A1 - Virtual world user opinion & response monitoring - Google Patents

Virtual world user opinion & response monitoring

Info

Publication number
EP2126708A1
Authority
EP
European Patent Office
Prior art keywords
user
animated
environment
avatar
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP08726207A
Other languages
German (de)
French (fr)
Other versions
EP2126708A4 (en)
Inventor
Phil Harrison
Gary M. Zalewski
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Europe Ltd
Sony Interactive Entertainment America LLC
Original Assignee
Sony Computer Entertainment Europe Ltd
Sony Computer Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB0703974.6A (GB0703974D0)
Priority claimed from GB0704225A (GB2447094B)
Priority claimed from GB0704235A (GB2447095B)
Priority claimed from GB0704227A (GB2447020A)
Priority claimed from GB0704246A (GB2447096B)
Priority claimed from US11/789,326 (US20080215975A1)
Application filed by Sony Computer Entertainment Europe Ltd and Sony Computer Entertainment America LLC
Publication of EP2126708A1
Publication of EP2126708A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/131 Protocols for games, networked simulations or virtual reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/12
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/23 Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/61 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor using advertising information
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/795 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; for building a team; for providing a buddy list
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/105 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals using inertial sensors, e.g. accelerometers, gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1081 Input via voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/406 Transmission via wireless network, e.g. pager or GSM
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/407 Data transfer via internet
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55 Details of game data or player data management
    • A63F2300/5506 Details of game data or player data management using advertisements
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6072 Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Definitions

  • Example gaming platforms may be the Sony Playstation or Sony Playstation2 (PS2), each of which is sold in the form of a game console.
  • the game console is designed to connect to a monitor (usually a television) and enable user interaction through handheld controllers.
  • the game console is designed with specialized processing hardware, including a CPU, a graphics synthesizer for processing intensive graphics operations, a vector unit for performing geometry transformations, and other glue hardware, firmware, and software.
  • the game console is further designed with an optical disc tray for receiving game compact discs for local play through the game console. Online gaming is also possible, where a user can interactively play against or with other users over the Internet.
  • a virtual world is a simulated environment in which users may interact with each other via one or more computer processors. Users may appear on a video screen in the form of representations referred to as avatars.
  • the degree of interaction between the avatars and the simulated environment is implemented by one or more computer applications that govern such interactions as simulated physics, exchange of information between users, and the like.
  • the nature of interactions among users of the virtual world is often limited by the constraints of the system implementing the virtual world.
  • the present invention fills these needs by providing computer generated graphics that depict a virtual world.
  • the virtual world can be traveled, visited, and interacted with using a controller or controlling input of a real-world computer user.
  • the real-world user in essence is playing a video game, in which he controls an avatar (e.g., virtual person) in the virtual environment.
  • the real-world user can move the avatar, strike up conversations with other avatars, view content such as advertising, make comments or gestures about content or advertising.
  • A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment includes generating a graphical animated environment and presenting a viewable object within the graphical animated environment. Further provided is moving an avatar within the graphical animated environment, where the moving includes directing a field of view of the avatar toward the viewable object. A response of the avatar is detected when the field of view of the avatar is directed toward the viewable object. Further included is storing the response and analyzing the response to determine whether the response by the avatar is more positive or more negative.
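  • For illustration only, the following is a minimal sketch of that claimed flow, assuming toy placeholder types (ViewableObject, Avatar, ResponseRecord) and a keyword-based stand-in for the positive/negative classification; none of these names come from the patent.
```python
# A minimal sketch of the claimed monitoring flow. ViewableObject, Avatar,
# ResponseRecord and classify_response are illustrative placeholders, not
# names taken from the patent.
from dataclasses import dataclass

@dataclass
class ViewableObject:
    name: str
    position: tuple      # (x, y) position in the animated environment

@dataclass
class Avatar:
    position: tuple
    facing: tuple        # direction vector of the avatar's field of view

@dataclass
class ResponseRecord:
    object_name: str
    response: str
    positive: bool

def is_viewing(avatar: Avatar, obj: ViewableObject) -> bool:
    """True when the avatar's field of view is directed toward the object."""
    dx = obj.position[0] - avatar.position[0]
    dy = obj.position[1] - avatar.position[1]
    return (avatar.facing[0] * dx + avatar.facing[1] * dy) > 0

def classify_response(response: str) -> bool:
    """Toy stand-in for deciding whether a response is more positive or negative."""
    return response.lower() in {"laugh", "smile", "thumbs up", "cool"}

def monitor(avatar: Avatar, obj: ViewableObject, response: str, log: list) -> None:
    """Store the response only when the avatar is looking at the viewable object."""
    if is_viewing(avatar, obj):
        log.append(ResponseRecord(obj.name, response, classify_response(response)))

# Example: an avatar facing an advertisement reacts with a smile.
ad = ViewableObject("billboard_ad", (10, 0))
user_a = Avatar(position=(0, 0), facing=(1, 0))
responses: list = []
monitor(user_a, ad, "smile", responses)
print(responses)
```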
  • the viewable object is an advertisement.
  • a computer implemented method for executing a network application is provided.
  • the network application is defined to render a virtual environment, and the virtual environment is depicted by computer graphics.
  • the method includes generating an animated user and controlling the animated user in the virtual environment.
  • the method presents advertising objects in the virtual environment and detects actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment. Reactions of the animated user are captured when the animated user is viewing the advertising object.
  • the reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine effectiveness of the advertising object in the virtual environment.
  • a computer implemented method for executing a network application is provided.
  • the network application is defined to render a virtual environment, and the virtual environment is depicted by computer graphics.
  • the method includes generating an animated user and controlling the animated user in the virtual environment.
  • the method presents advertising objects in the virtual environment and detects actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment. Reactions of the animated user are captured when the animated user is viewing the advertising object.
  • the reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine if the reactions are more positive or more negative.
  • Figure 1A illustrates a virtual space, in accordance with one embodiment of the present invention.
  • Figure 1B illustrates a user sitting on a chair holding a controller and communicating with a game console, in accordance with one embodiment of the present invention.
  • Figures 1C-1D illustrate a location profile for an avatar that is associated with a user of a game in which virtual space interactivity is provided, in accordance with one embodiment of the present invention.
  • Figure 2 illustrates a more detailed diagram showing the monitoring of the real-world user for generating feedback, as described with reference to Figure 1A, in accordance with one embodiment of the present invention.
  • Figure 3A illustrates an example where the controller and the buttons of the controller can be selected by a real-world user to cause the avatar's response to change, depending on the real-world user's approval or disapproval of an advertisement, in accordance with one embodiment of the present invention.
  • Figure 3B illustrates operations that may be performed by a program in response to user operation of a controller for generating button responses of the avatar when viewing specific advertisements or objects, or things within the virtual space.
  • Figures 4A-4C illustrate other controller buttons that may be selected from the left shoulder buttons and the right shoulder buttons to cause different selections that will express different feedback from an avatar, in accordance with one embodiment of the present invention.
  • Figure 5 illustrates an embodiment where a virtual space may include a plurality of virtual world avatars, in accordance with one embodiment of the present invention.
  • Figure 6 illustrates a flowchart diagram used to determine when to monitor user feedback, in accordance with one embodiment of the present invention.
  • Figure 7A illustrates an example where user A is walking through the virtual space and is entering an active area, in accordance with one embodiment of the present invention.
  • Figure 7B shows user A entering the active area, but having the field of view not focused on the screen, in accordance with one embodiment of the present invention.
  • Figure 7C illustrates an example where user A is now focused in on a portion of the screen, in accordance with one embodiment of the present invention.
  • Figure 7D illustrates an example where the user is fully viewing the screen and is within the active area, in accordance with one embodiment of the present invention.
  • Figure 8A illustrates an embodiment where users within a virtual room may be prompted to vote as to their likes or dislikes, regarding a specific ad, in accordance with one embodiment of the present invention.
  • Figure 8B shows users moving to different parts of the room to signal their approval or disapproval, or likes or dislikes, in accordance with one embodiment of the present invention.
  • Figure 8C shows an example of users voting YES or NO by raising their left or right hands, in accordance with one embodiment of the present invention.
  • Figure 9 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console having controllers for implementing an avatar control system in accordance with one embodiment of the present invention.
  • Figure 10 is a schematic of the Cell processor in accordance with one embodiment of the present invention.
  • Embodiments of computer generated graphics that depict a virtual world are provided.
  • the virtual world can be traveled, visited, and interacted with using a controller or controlling input of a real-world computer user.
  • the real-world user in essence is playing a video game, in which he controls an avatar (e.g., virtual person) in the virtual environment.
  • the real-world user can move the avatar, strike up conversations with other avatars, view content such as advertising, make comments or gestures about content or advertising.
  • program instructions and processing are performed to monitor any comments, gestures, or interactions with objects of the virtual world.
  • monitoring is performed upon obtaining permission from users, so that users have control of whether their actions are tracked.
  • the user's experience in the virtual world may be enhanced, as the display and rendering of data for the user is more tailored to the user's likes and dislikes.
  • advertisers will learn what specific users like, and their advertising can be adjusted for specific users or for types of users (e.g., teenagers, young adults, kids (using kid-rated environments), adults, and other demographics, types or classes).
  • the information of user response to specific ads can also be provided directly to advertisers, game developers, logic engines, and suggestion engines. In this manner, advertisers will have a better handle on customer likes and dislikes, and may be better suited to provide types of ads to specific users, and game/environment developers and owners can apply correct charges to advertisers based on use by users, selection by users, activity by users, reaction by users, viewing by users, etc.
  • users may interact with a virtual world.
  • virtual world means a representation of a real or fictitious environment having rules of interaction simulated by means of one or more processors that a real user may perceive via one or more display devices and/or may interact with via one or more user interfaces.
  • user interface refers to a real device by which a user may send inputs to or receive outputs from the virtual world.
  • the virtual world may be simulated by one or more processor modules. Multiple processor modules may be linked together via a network.
  • the user may interact with the virtual world via a user interface device that can communicate with the processor modules and other user interface devices via a network.
  • Certain aspects of the virtual world may be presented to the user in graphical form on a graphical display such as a computer monitor, television monitor or similar display. Certain other aspects of the virtual world may be presented to the user in audible form on a speaker, which may be associated with the graphical display.
  • users may be represented by avatars. Each avatar within the virtual world may be uniquely associated with a different user.
  • the name or pseudonym of a user may be displayed next to the avatar so that users may readily identify each other.
  • a particular user's interactions with the virtual world may be represented by one or more corresponding actions of the avatar.
  • Different users may interact with each other in the public space via their avatars.
  • An avatar representing a user could have an appearance similar to that of a person, an animal or an object.
  • An avatar in the form of a person may have the same gender as the user or a different gender. The avatar may be shown on the display so that the user can see the avatar along with other objects in the virtual world.
  • the display may show the world from the point of view of the avatar without showing the avatar itself.
  • the user's (or avatar's) perspective on the virtual world may be thought of as being the view of a virtual camera.
  • a virtual camera refers to a point of view within the virtual world that may be used for rendering two-dimensional images of a 3D scene within the virtual world.
  • Users may interact with each other through their avatars by means of the chat channels associated with each lobby.
  • Users may enter text for chat with other users via their user interface.
  • the text may then appear over or next to the user's avatar, e.g., in the form of comic-book style dialogue bubbles, sometimes referred to as chat bubbles.
  • chat may be facilitated by the use of a canned phrase chat system sometimes referred to as quick chat.
  • in quick chat, a user may select one or more chat phrases from a menu.
  • the public spaces are public in the sense that they are not uniquely associated with any particular user or group of users and no user or group of users can exclude another user from the public space.
  • Each private space by contrast, is associated with a particular user from among a plurality of users.
  • a private space is private in the sense that the particular user associated with the private space may restrict access to the private space by other users.
  • the private spaces may take on the appearance of familiar private real estate.
  • Moving the avatar representation of user A 102b about the conceptual virtual space can be dictated by a user moving a controller of a game console and dictating movements of the avatar in different directions so as to virtually enter the various spaces of the conceptual virtual space.
  • FIG. 1A illustrates a virtual space 100, in accordance with one embodiment of the present invention.
  • the virtual space 100 is one where avatars are able to roam, congregate, and interact with one another and/or objects of a virtual space.
  • Avatars are virtual world animated characters that represent, or are correlated to, a real-world user who may be playing an interactive game.
  • the interactive game may require the real-world user to move his or her avatar about the virtual space 100 so as to enable interaction with other avatars (controlled by other real-world users or computer generated avatars), that may be present in selected spaces within the virtual space 100.
  • the virtual space 100 shown in Figure 1A includes the avatars user A 102b and user B 104b.
  • User A 102b is shown having a user A field of view 102b' while user B 104b is shown having a user B field of view 104b'.
  • user A and user B are in the virtual space 100, and focused on an advertisement 101.
  • Advertisement 101 may include a model person that is holding up a particular advertising item (e.g., product, item, object, or other image of a product or service), and is displaying the advertising object in an animated video, still picture, or combinations thereof.
  • advertisement 101 may use a sexy model as model 101a, so as to attract users that may be roaming or traveling across the virtual space 100.
  • Other techniques for attracting avatar users, as is commonly done in advertising, may also be included as part of this embodiment.
  • model 101a could be animated and could move about a screen within the virtual space 100 or can jump out of the virtual screen and join the avatars.
  • user A and user B are shown tracking their viewing toward this particular advertisement 101.
  • the field of view of each of the users can be tracked by a program executed by a computing system so as to determine where within the virtual space 100 the users are viewing, for what duration, what their gestures might be while viewing the advertisement 101, etc.
  • Operation 106 is shown where processing is performed to determine whether users (e.g., avatars) are viewing advertisements via the avatars that define user A 102b and user B 104b.
  • Operation 108 illustrates processing performed to detect and monitor real-world user feedback 108a and to monitor user controlled avatar feedback 108b.
  • real-world users can select specific keys on controllers so as to graphically illustrate their approval or disapproval in a graphical form using user-prompted thumbs up or user-prompted thumbs down 108c.
  • the real-world response of a real-world user 102a playing a game can be monitored in a number of ways.
  • the real-world user may be holding a controller while viewing a display screen.
  • the display screen may provide two views.
  • One view may be from the standpoint of the user avatar and the avatar's field of view, while the other view may be from a perspective behind the avatar, as if the real-world user were floating behind the avatar (such as a view of the avatar from the avatar's backside).
  • if the real-world user frowns, becomes excited, or makes facial expressions, these gestures, comments and/or facial expressions may be tracked by a camera that is part of a real-world system.
  • the real-world system may be connected to a computing device (e.g., such as a game console, or general computer(s)), and a camera that is interfaced with the game console.
  • the camera in the real-world environment will track the real-world user's 102a facial expressions, sounds, frowns, or general excitement or non-excitement during the experience.
  • the experience may be that of the avatar that is moving about the virtual space 100 and as viewed by the user in the real world.
  • a process may be executed to collect real-world and avatar responses. If the real-world user 102a makes a gesture that is recognized by the camera, those gestures will be mapped to the face of the avatar. Consequently, real-world user facial expressions, movements, and actions, if tracked, can be interpreted and assigned to particular aspects of the avatar. If the real-world user laughs, the avatar laughs; if the real-world user jumps, the avatar jumps; if the real-world user gets angry, the avatar gets angry; if the real-world user moves a body part, the avatar moves a body part. Thus, in this embodiment, it is not necessary for the user to interface with a controller; the real-world user, by simply moving, reacting, etc., can cause the avatar to do the same as the avatar moves about the virtual spaces.
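  • As a hedged sketch of this gesture-mirroring idea: camera-recognized gestures are looked up in a mapping table and replayed on the avatar. The gesture labels, the Avatar class, and the mapping below are assumptions for illustration, not the patent's implementation.
```python
# Hypothetical mapping of camera-detected real-world gestures onto the avatar.
# A real system would receive the detected gesture from a camera/vision pipeline.
GESTURE_TO_AVATAR_ACTION = {
    "laugh": "laugh",
    "jump": "jump",
    "angry": "angry",
    "frown": "frown",
}

class Avatar:
    def __init__(self) -> None:
        self.current_action = "idle"

    def apply(self, action: str) -> None:
        self.current_action = action

def mirror_real_world_gesture(avatar: Avatar, detected_gesture: str) -> None:
    """If a recognized gesture is reported, replay it on the avatar."""
    action = GESTURE_TO_AVATAR_ACTION.get(detected_gesture)
    if action is not None:
        avatar.apply(action)

avatar = Avatar()
mirror_real_world_gesture(avatar, "laugh")
print(avatar.current_action)  # -> laugh
```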
  • the monitoring may be performed of user-controlled avatar feedback 108b.
  • the real-world user can decide to select certain buttons on the controller to cause the avatar to display a response.
  • the real-world user may select a button to cause the avatar to laugh, frown, look disgusted, or generally produce a facial and/or body response.
  • this information can be fed back for analysis in operation 110.
  • users will be asked to approve monitoring of their response, and if monitored, their experience may be enhanced, as the program and computing system(s) can provide an environment that shapes itself to the likes, dislikes, etc. of the specific users or types of users.
  • the analysis is performed by a computing system(s) (e.g., networked, local or over the internet), and controlled by software that is capable of determining the button(s) selected by the user and the visual avatar responses, or the responses captured by the camera or microphone of the user in the real world.
  • Operation 112 is an optional operation that allows profile analysis to be accessed by the computing system in addition to the analysis obtained from the feedback in operation 110.
  • Profile analysis 112 is an operation that allows the computing system to determine pre-defined likes, dislikes, geographic locations, languages, and other attributes of a particular user that may be visiting a virtual space 100. In this manner, in addition to monitoring what the avatars are looking at, their reaction and feedback, this additional information can be profiled and stored in a database so that data mining can be done and associated with the specific avatars viewing the content.
  • Figure 1B illustrates user 102a sitting on a chair holding a controller and communicating with a game console, in accordance with one embodiment of the present invention.
  • User 102a being the real-world user, will have the option of viewing his monitor and the images displayed on the monitor, in two modes.
  • One mode may be from the eyes of the real-world user, showing the backside of user A 102b, which is the avatar.
  • the user A 102b avatar has a field of view 102b' while user B has a field of view 104b'.
  • the view will be a closer-up view showing what is within the field of view 102b' of the avatar 102b.
  • the screen 101 will be a magnification of the model 101a which more clearly shows the view from the eyes of the avatar controlled by the real-world user.
  • the real-world user can then switch between mode A (from behind the avatar), or mode B (from the eyes of the virtual user avatar).
  • the gestures of the avatar as controlled by the real-world user will be tracked as well as the field of view and position of the eyes/head of the avatar within the virtual space 100.
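  • The two viewing modes could be represented, purely as an illustrative sketch, by a small view-mode toggle; the camera attachment points and offsets below are hypothetical placeholders.
```python
# Illustrative view-mode toggle: mode A renders from behind the avatar, mode B
# from the avatar's own eyes. The attachment names and offsets are placeholders.
VIEW_MODES = {
    "A": {"attach": "behind_avatar", "offset": (-2.0, 1.5)},  # over-the-shoulder
    "B": {"attach": "avatar_eyes", "offset": (0.0, 0.0)},     # first person
}

class ViewController:
    def __init__(self) -> None:
        self.mode = "A"

    def toggle(self) -> dict:
        """Switch between mode A and mode B and return the camera settings."""
        self.mode = "B" if self.mode == "A" else "A"
        return VIEW_MODES[self.mode]

vc = ViewController()
print(vc.toggle())  # switches to mode B (the avatar's eyes)
```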
  • Figure 1C illustrates a location profile for an avatar that is associated with a user of a game in which virtual space interactivity is provided.
  • a selection menu may be provided to allow the user to select a profile that will better define the user's interests and the types of locations and spaces that may be available to the user.
  • the user may be provided with a location menu 116.
  • Location menu 116 may be provided with a directory of countries that may be itemized by alphabetical order.
  • the user would then select a particular country, such as Japan, and the user would then be provided a location sub-menu 118.
  • Location sub-menu 118 may ask the user to define a state 118a, a province 118b, a region 118c, or a prefecture 118d, depending on the location selected. If the country that was selected was Japan, the sub-menu would use prefectures 118d, which represent a type of state within the country of Japan. Then, the user would be provided with a selection of cities 120. Once the user has selected a particular city within a prefecture, such as Tokyo, Japan, the user would be provided with further menus to zero in on locations and virtual spaces that may be applicable to the user.
  • Figure 1D illustrates a personal profile for the user and the avatar that would be representing the user in the virtual space.
  • a personal profile menu 122 is provided.
  • the personal profile menu 122 will list a plurality of options for the user to select based on the types of social definitions associated with the personal profile defined by the user.
  • the social profile may include sports teams, sports e-play, entertainment, and other sub-categories within the social selection criteria.
  • a sub-menu 124 may be selected when a user selects a professional men's sports team, and additional sub-menus 126 that may define further aspects of motor sports.
  • the examples illustrated in the personal profile menu 122 are only exemplary, and it should be understood that the granularity of, and variations in, the profile selection menu contents may change depending on the country selected by the user using the location menu 116 of Figure 1C, the sub-menus 118, and the city selector 120. In one embodiment, certain categories may be partially or completely filled based on the location profile defined by the user. For example, the Japanese location selection could load a plurality of baseball teams in the sports section that may include Japanese league teams (e.g., Nippon Baseball League) as opposed to U.S. based Major League Baseball (MLB™) teams.
  • the personal profile menu 122 is a dynamic menu that is generated and is displayed to the user with specific reference to the selections of the user in relation to where the user is located on the planet.
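  • One possible way to hold the location profile and personal profile together, sketched with assumed field names (they are not taken from Figures 1C-1D), is a pair of nested dictionaries, with the personal profile menu seeded from the location profile:
```python
# Assumed field names for a location profile and a personal profile; the menu
# hierarchy (country -> prefecture -> city) follows the Figure 1C example.
location_profile = {
    "country": "Japan",
    "prefecture": "Tokyo",   # state/province/region/prefecture, per country
    "city": "Tokyo",
}

personal_profile = {
    "social": {
        "sports_teams": [],
        "entertainment": [],
        "motor_sports": [],
    },
}

def seed_sports_section(location: dict) -> list:
    """Seed the sports category from the location profile, e.g. local league
    teams for a Japanese user rather than MLB teams."""
    if location.get("country") == "Japan":
        return ["Nippon Baseball League teams"]
    return ["MLB teams"]

personal_profile["social"]["sports_teams"] = seed_sports_section(location_profile)
print(personal_profile["social"]["sports_teams"])
```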
  • FIG. 2 illustrates a more detailed diagram showing the monitoring of the real-world user for generating feedback, as described with reference to 108a in Figure 1A.
  • the real-world user 102a may be sitting on a chair holding a controller 208.
  • the controller 208 can be wireless or wired to a computing device.
  • the computing device can be a game console 200 or a general computer.
  • the console 200 is capable of connecting to a network.
  • the network may be a wide-area-network (or internet) which would allow some or all processing to be performed by a program running over the network.
  • one or more servers will execute a game program that will render objects, users, animation, sounds, shading, textures, and other user interfaced reactions, or captures based on processing as performed on networked computers.
  • User 102a holds controller 208, and movements of the controller and its buttons are captured in operation 214. The movements of the arms, hands, and buttons are captured, so as to capture motion of the controller 208, buttons of the controller 208, and six-axis dimensional rotation of the controller 208. Example six-axis positional monitoring may be done using an inertial monitor.
  • controller 208 may be captured in terms of sound by a microphone 202, or by position, lighting, or other input feedback by a camera 204.
  • Display 206 will render a display showing the virtual user avatar traversing the virtual world and the scenes of the virtual world, as controlled by user 102a.
  • Operation 212 is configured to capture sound, which is processed for particular words spoken by user 102a.
  • the microphone 202 will be configured to capture that information so that the sound may be processed for identifying particular words or information. Voice recognition may also be performed to determine what is said in particular spaces, if users authorize capture.
  • camera 204 will capture the gestures by the user 102a, movements by controller 208, facial expressions by user 102a, and general feedback excitement or non-excitement.
  • Camera 204 will then provide capture of facial expressions, body language, and other information in operation 210. All of this information that is captured in operations 210, 212, and 214 can be provided as feedback for analysis 110, as described with reference to Figure 1A.
  • the aggregated user opinion is processed in operation 218 that will organize and compartmentalize the aggregated responses by all users that may be traveling the virtual space and viewing the various advertisements within the virtual spaces that they enter. This information can then be parsed and provided in operation 220 to advertisers and operators of the virtual space.
  • This information can provide guidance to the advertisers as to who is viewing the advertisements, how long they viewed the advertisements, and the gestures they made in front of or about the advertisements, and will also provide operators of the virtual space a metric by which to possibly charge for the advertisements within the virtual spaces, depending on their popularity, views by particular users, and the like.
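  • A simple sketch of this aggregation step, assuming feedback records with ad_id, view_seconds, and positive fields (illustrative names only), might tally views, view time, and positive/negative reactions per advertisement:
```python
# Aggregates per-advertisement feedback into the kind of summary an advertiser
# or space operator might receive; the record fields are assumptions.
from collections import defaultdict

def aggregate(feedback_records: list) -> dict:
    summary = defaultdict(lambda: {"views": 0, "seconds": 0.0,
                                   "positive": 0, "negative": 0})
    for rec in feedback_records:
        entry = summary[rec["ad_id"]]
        entry["views"] += 1
        entry["seconds"] += rec["view_seconds"]
        entry["positive" if rec["positive"] else "negative"] += 1
    return dict(summary)

records = [
    {"ad_id": "ad_101", "view_seconds": 12.0, "positive": True},
    {"ad_id": "ad_101", "view_seconds": 3.5, "positive": False},
]
print(aggregate(records))
# An operator could price ad placements in the virtual space from these metrics.
```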
  • Figure 3A illustrates an example where controller 208 and the buttons of the controller 208 can be selected by a real-world user to cause the avatar's response 300 to change, depending on the real-world user's approval or disapproval of advertisement 100.
  • user controlled avatar feedback is monitored, depending on whether the avatar is viewing the advertisement 100 and the specific buttons selected on controller 208 when the avatar is focusing in on the advertisement 100.
  • the real-world user that would be using controller 208 (not shown) could select R1 so that the avatar response 300 of the user's avatar is a laugh (HA HA HA!).
  • buttons such as left shoulder buttons 208a and right shoulder buttons 208b can be used for similar controls of the avatar.
  • the user can select L2 to smile, L1 to frown, R2 to roll eyes, and other buttons for producing other avatar responses 300.
  • Figure 3B illustrates operations that may be performed by a program in response to user operation of a controller 208 for generating button responses of the avatar when viewing specific advertisements 100 or objects, or things within the virtual space.
  • Operation 302 defines buttons that are mapped to avatar facial response to enable players to modify avatar expressions.
  • Operation 304 defines detection of when a player (or a user) views a specific ad and the computing device would recognize that the user is viewing that specific ad.
  • the avatar will have a field of view, and that field of view can be monitored, depending on where the avatar is looking within the virtual space.
  • because the computing system (and the computer program controlling it) is constantly monitoring the field of view of the avatars within the virtual space, it is possible to determine when an avatar is viewing specific ads. If a user is viewing a specific ad that should be monitored for facial responses, operation 306 will define monitoring of the avatar for changes in avatar facial expressions. If the user selects for the avatar to laugh at specific ads, this information will be taken in by the computing system and stored to define how specific avatars responded to specific ads.
  • in operations 307 and 308, the avatar is continuously analyzed to determine all changes in facial expressions, and the information is fed back for analysis.
  • the analysis performed on all the facial expressions can be off-line or in real time.
  • This information can then be passed back to the system and then populated to specific advertisers to enable data mining, and re-tailoring of specific ads in response to their performance in the virtual space.
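  • The button-to-expression mapping of operations 302-308 could look roughly like the sketch below; the R1/L1/L2/R2 assignments follow the example in the text, while the record_button_response helper and its log format are assumptions.
```python
# Button-to-expression mapping per the example in the text (R1 = laugh,
# L2 = smile, L1 = frown, R2 = roll eyes); the logging helper is illustrative.
BUTTON_TO_EXPRESSION = {
    "R1": "laugh",
    "L2": "smile",
    "L1": "frown",
    "R2": "roll_eyes",
}

def record_button_response(viewing_ad: bool, ad_id: str, button: str, log: list) -> None:
    """Log an avatar expression only while the avatar is viewing a monitored ad."""
    expression = BUTTON_TO_EXPRESSION.get(button)
    if viewing_ad and expression is not None:
        log.append({"ad_id": ad_id, "expression": expression})

log: list = []
record_button_response(viewing_ad=True, ad_id="ad_101", button="R1", log=log)
record_button_response(viewing_ad=False, ad_id="ad_101", button="L1", log=log)
print(log)  # only the first press is recorded
```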
  • Figure 4A illustrates other controller 208 buttons that may be selected from the left shoulder buttons 208a and the right shoulder buttons 208b to cause different selections that will express different feedback from an avatar. For instance, a user can scroll through various sayings and then select the saying they desire by pushing button R1. The user can also scroll through different emoticons to select a specific emoticon and then select button R2.
  • Figure 4B illustrates the avatar controlled by a real-world user who selects the saying "COOL" and then selects button R1.
  • the real-world user can also select button R2 in addition to R1 to provide an emoticon together with a saying.
  • the result is that the avatar will smile and say "COOL".
  • the avatar saying "COOL" can be displayed using a cloud, or it could be output by the computer as a sound output.
  • Figure 4C illustrates avatar 400a where the real-world user selected button R1 to select "MEH" plus L1 to illustrate a hand gesture.
  • the result will be avatars 400b and 400c where the avatar is saying "MEH" in a cloud and is moving his hand to signal a MEH expression.
  • the expression of MEH is an expression of indifference or lack of interest.
  • the avatar can signal with a MEH and a hand gesture.
  • Each of these expressions, whether they are sayings, emoticons, gestures, animations, and the like, are tracked if the user is viewing a specific advertisement and such information is captured so that the data can be provided to advertisers or the virtual world creators and/or operators.
  • FIG. 5 illustrates an embodiment where a virtual space 500 may include a plurality of virtual world avatars.
  • Each of the virtual world avatars will have their own specific field of view, and what they are viewing is tracked by the system. If the avatars are shown having discussions amongst themselves, that information is tracked to show that they are not viewing a specific advertisement, object, picture, trailer, or digital data that may be presented within the virtual space.
  • a plurality of avatars are shown viewing a motion picture within a theater. Some avatars are not viewing the picture and thus would not be tracked to determine their facial expressions. Users controlling their avatars can then move about the space and enter into locations where they may or may not be viewing a specific advertisement. Consequently, the viewers' motions, travels, fields of view, and interactions can be monitored to determine whether the users are actively viewing advertisements, objects, or interacting with one another.
  • FIG. 6 illustrates a flowchart diagram used to determine when to monitor user feedback.
  • an active area needs to be defined.
  • the active area can be defined as an area where avatar feedback is monitored. The active area can be sized based on advertisement size or ad placement, active areas can overlap, and the like. Once an active area is monitored for the field of view of the avatar, the viewing by the avatar can be logged as to how long the avatar views the advertisement, how long the avatar spends in a particular area in front of the advertisement, and the gestures made by the avatar when viewing the advertisement.
  • Operation 600 shows a decision where avatar users are determined to be in or out of an active area. If the users are not in the active area, then operation 602 is performed where nothing is tracked of the avatar.
  • if the user is in the active area, that avatar user is tracked to determine whether the user's field of view is focusing on an advertisement within the space in operation 604. If the user is not focusing on any advertisement or object that should be tracked in operation 604, then the method moves to operation 608.
  • in operation 608 the system will continue monitoring the field of view of the user.
  • the method will continuously move to operation 610 where it is determined whether the user's field of view is now on the ad. If it is not, the method moves back to operation 608. This loop will continue until, in operation 610 it is determined that the user is viewing the ad, and the user is within the active area. At that point, operation 606 will monitor feedback capture of the avatar when the avatar is within the active area, and the avatar has his or her field of view focused on the ad.
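  • The Figure 6 decision flow reduces to two checks, sketched below under simple geometric assumptions (a rectangular active area and a precomputed "field of view on ad" flag); the helper names are hypothetical.
```python
# Sketch of the Figure 6 flow: nothing is tracked outside the active area, and
# feedback capture begins only when the field of view is on the ad.
def in_active_area(pos, area) -> bool:
    (x, y), (x0, y0, x1, y1) = pos, area
    return x0 <= x <= x1 and y0 <= y <= y1

def should_monitor(avatar_pos, field_of_view_on_ad: bool, active_area) -> bool:
    if not in_active_area(avatar_pos, active_area):
        return False              # operation 602: track nothing
    return field_of_view_on_ad    # operations 604/608/610: wait for the view

active_area = (0, 0, 20, 20)
print(should_monitor((5, 5), field_of_view_on_ad=False, active_area=active_area))  # False
print(should_monitor((5, 5), field_of_view_on_ad=True, active_area=active_area))   # True
```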
  • FIG. 7A illustrates an example where user A 704 is walking through the virtual space and is entering an active area 700.
  • Active area 700 may be a specific room, a location within a room, or a specific space within the virtual space.
  • user A 704 is walking in a direction of the active area 700 where three avatar users are viewing a screen or ad 702.
  • the three users already viewing the screen 702 will attract others because they are already in the active area 700, and their field of view is focused on the screen that may be running an interesting ad for a service or product.
  • Figure 7B shows user A 704 entering the active area 700, but having the field of view 706 not focused on the screen 702.
  • the system will not monitor the facial expressions or bodily expressions, or verbal expressions by the avatar 704, because the avatar 704 is not focused in on the screen 702 where an ad may be running and user feedback of his expressions would be desired.
  • Figure 7C illustrates an example where user A 704 is now focused in on a portion of the screen 710.
  • the field of view 706 shows user A's 704 field of view focusing on only half of the screen 702. If the ad content is located in area 710, then the facial expressions and feedback provided by user A 704 will be captured. However, if the advertisement content is on the screen 702 in an area not covered by his field of view, that facial expression and feedback will not be monitored.
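  • The partial-coverage test implied here can be sketched as a rectangle-overlap check between the avatar's on-screen field of view and the region of the screen occupied by the ad content; the coordinates below are illustrative.
```python
# Capture feedback only if the avatar's on-screen field of view overlaps the
# region of the screen holding the ad content. Rectangles are (x0, y0, x1, y1)
# in normalized screen coordinates; values are illustrative.
def rects_overlap(a, b) -> bool:
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

field_of_view_on_screen = (0.0, 0.0, 0.5, 1.0)   # left half of the screen
ad_region = (0.6, 0.2, 0.9, 0.8)                 # ad content on the right half
print(rects_overlap(field_of_view_on_screen, ad_region))  # False: do not capture
```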
  • Figure 7D illustrates an example where user 704 is fully viewing the screen 702 and is within the active area 700.
  • the system will continue monitoring the feedback from user A and will only discontinue feedback monitoring of user A when user A leaves the active area.
  • monitoring of user A's feedback can also be discontinued if the particular advertisement shown on the screen 702 ends, or is no longer in session.
  • Figure 8A illustrates an embodiment where users within a virtual room may be prompted to vote as to their likes or dislikes, regarding a specific ad 101.
  • users 102b and 104b may move onto either a YES area or a NO area.
  • User 102b' is now standing on YES and user 104b' is now standing on NO.
  • This feedback is monitored, and is easily captured, as users can simply move to different locations within a scene to display their approval, disapproval, likes, dislikes, etc.
  • Figure 8B shows users moving to different parts of the room to signal their approval or disapproval, or likes or dislikes.
  • FIG. 8C shows an example of users 800-808 voting YES or NO by raising their left or right hands. These parts of the user avatar bodies can be moved by simply selecting the correct controller buttons (e.g., L1, R1, etc.).
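  • Tallying either voting mechanism (standing in a YES/NO area, or raising a hand mapped to YES/NO) could be as simple as the sketch below; the avatar record fields are assumed for the example.
```python
# Tally votes cast either by standing in a YES/NO zone or by a raised hand
# mapped to YES/NO; the avatar record fields are illustrative.
def tally_votes(avatars: list) -> dict:
    counts = {"YES": 0, "NO": 0}
    for avatar in avatars:
        vote = avatar.get("zone") or avatar.get("raised_hand")  # either mechanism
        if vote in counts:
            counts[vote] += 1
    return counts

avatars = [
    {"name": "user_102b", "zone": "YES"},
    {"name": "user_104b", "zone": "NO"},
    {"name": "user_800", "raised_hand": "YES"},
]
print(tally_votes(avatars))  # {'YES': 2, 'NO': 1}
```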
  • the virtual world program may be executed partially on a server connected to the internet and partially on the local computer (e.g., game console, desktop, laptop, or wireless hand held device). Still further, the execution can be entirely on a remote server or processing machine, which provides the execution results to the local display screen.
  • the local display or system should have minimal processing capabilities to receive the data over the network (e.g., like the Internet) and render the graphical data on the screen.
  • FIG. 9 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console having controllers for implementing an avatar control system in accordance with one embodiment of the present invention.
  • a system unit 900 is provided, with various peripheral devices connectable to the system unit 900.
  • the system unit 900 comprises: a Cell processor 928; a Rambus® dynamic random access memory (XDRAM) unit 926; a Reality Synthesizer graphics unit 930 with a dedicated video random access memory (VRAM) unit 932; and an I/O bridge 934.
  • XDRAM Rambus® dynamic random access memory
  • VRAM dedicated video random access memory
  • the system unit 900 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 940 for reading from a disk 940a and a removable slot-in hard disk drive (HDD) 936, accessible through the I/O bridge 934.
  • the system unit 900 also comprises a memory card reader 938 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 934.
  • the I/O bridge 934 also connects to six Universal Serial Bus (USB) 2.0 ports 924; a gigabit Ethernet port 922; an IEEE 802.11b/g wireless network (Wi-Fi) port 920; and a Bluetooth® wireless link port 918 capable of supporting up to seven Bluetooth connections.
  • the I/O bridge 934 handles all wireless, USB and Ethernet data, including data from one or more game controllers 902. For example when a user is playing a game, the I/O bridge 934 receives data from the game controller 902 via a Bluetooth link and directs it to the Cell processor 928, which updates the current state of the game accordingly.
  • the wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controllers 902, such as: a remote control 904; a keyboard 906; a mouse 908; a portable entertainment device 910 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 912; and a microphone headset 914.
  • peripheral devices may therefore in principle be connected to the system unit 900 wirelessly; for example the portable entertainment device 910 may communicate via a Wi-Fi ad-hoc connection, whilst the microphone headset 914 may communicate via a Bluetooth link.
  • Playstation 3 device is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners.
  • a legacy memory card reader 916 may be connected to the system unit via a USB port 924, enabling the reading of memory cards 948 of the kind used by the Playstation® or Playstation 2® devices.
  • the game controller 902 is operable to communicate wirelessly with the system unit 900 via the Bluetooth link.
  • the game controller 902 can instead be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 902.
  • the game controller is sensitive to motion in six degrees of freedom, corresponding to translation and rotation in each axis. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands.
  • other wirelessly enabled peripheral devices such as the Playstation Portable device may be used as a controller.
  • additional game or control information may be provided on the screen of the device.
  • Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or bespoke controllers, such as a single or several large buttons for a rapid-response quiz game (also not shown).
  • the remote control 904 is also operable to communicate wirelessly with the system unit 900 via a Bluetooth link.
  • the remote control 904 comprises controls suitable for the operation of the Blu Ray Disk BD-ROM reader 940 and for the navigation of disk content.
  • the Blu Ray Disk BD-ROM reader 940 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs.
  • the reader 940 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs.
  • the reader 940 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional prerecorded and recordable Blu-Ray Disks.
  • the system unit 900 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesizer graphics unit 930, through audio and video connectors to a display and sound output device 942 such as a monitor or television set having a display 944 and one or more loudspeakers 946.
  • the audio connectors 950 may include conventional analogue and digital outputs whilst the video connectors 952 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition.
  • Audio processing is performed by the Cell processor 928.
  • the Playstation 3 device's operating system supports Dolby® 5.1 surround sound, Dolby® Theatre Surround (DTS), and the decoding of 7.1 surround sound from Blu-Ray® disks.
  • the video camera 912 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (motion picture expert group) standard for decoding by the system unit 900.
  • the camera LED indicator is arranged to illuminate in response to appropriate control data from the system unit 900, for example to signify adverse lighting conditions.
  • Embodiments of the video camera 912 may variously connect to the system unit 900 via a USB, Bluetooth or Wi-Fi communication port.
  • Embodiments of the video camera may include one or more associated microphones and also be capable of transmitting audio data.
  • the CCD may have a resolution suitable for high-definition video capture. In use, images captured by the video camera may for example be incorporated within a game or interpreted as game control inputs.
  • in order for successful data communication to occur with a peripheral device such as a video camera or remote control via one of the communication ports of the system unit 900, an appropriate piece of software such as a device driver should be provided.
  • Device driver technology is well-known and will not be described in detail here, except to say that the skilled man will be aware that a device driver or similar software interface may be required in the present embodiment described.
  • FIG. 10 is a schematic of the Cell processor 928 in accordance with one embodiment of the present invention.
  • the Cell processor 928 has an architecture comprising four basic components: external input and output structures comprising a memory controller 1060 and a dual bus interface controller 1070A,B; a main processor referred to as the Power Processing Element 1050; eight co-processors referred to as Synergistic Processing Elements (SPEs) 1010A-H; and a circular data bus connecting the above components referred to as the Element Interconnect Bus 1080.
  • the total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPs of the Playstation 2 device's Emotion Engine.
  • the Power Processing Element (PPE) 1050 is based upon a two-way simultaneous multithreading Power 970 compliant PowerPC core (PPU) 1055 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache and a 32 kB level 1 (L1) cache.
  • the PPE 1050 is capable of eight single precision operations per clock cycle, translating to 25.6 GFLOPs at 3.2 GHz.
  • the primary role of the PPE 1050 is to act as a controller for the Synergistic Processing Elements 1010A-H, which handle most of the computational workload. In operation the PPE 1050 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 1010A-H and monitoring their progress. Consequently each Synergistic Processing Element 1010A-H runs a kernel whose role is to fetch a job, execute it and synchronize with the PPE 1050.
  • Each Synergistic Processing Element (SPE) 1010A-H comprises a respective Synergistic Processing Unit (SPU) 1020A-H, and a respective Memory Flow Controller (MFC) 1040A-H comprising in turn a respective Dynamic Memory Access Controller (DMAC) 1042A-H, a respective Memory Management Unit (MMU) 1044A-H and a bus interface (not shown).
  • Each SPU 1020A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 1030A-H, expandable in principle to 4 GB.
  • Each SPE gives a theoretical 25.6 GFLOPS of single precision performance.
  • An SPU can operate on 4 single precision floating point numbers, 4 32-bit numbers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation.
  • the SPU 1020A-H does not directly access the system memory XDRAM 926; the 64-bit addresses formed by the SPU 1020A-H are passed to the MFC 1040A-H which instructs its DMA controller 1042A-H to access memory via the Element Interconnect Bus 1080 and the memory controller 1060.
  • the Element Interconnect Bus (EIB) 1080 is a logically circular communication bus internal to the Cell processor 928 which connects the above processor elements, namely the PPE 1050, the memory controller 1060, the dual bus interface 1070A,B and the 8 SPEs 1010A-H, totaling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 1010A-H comprises a DMAC 1042A-H for scheduling longer read or write sequences.
  • the EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently for twelve participants, the longest step-wise data-flow between any two participants is six steps in the appropriate direction.
  • the theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96 bytes per clock, in the event of full utilization through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz; a short illustrative calculation of this figure is given after this list.
  • the memory controller 1060 comprises an XDRAM interface 1062, developed by Rambus Incorporated.
  • the memory controller interfaces with the Rambus XDRAM 926 with a theoretical peak bandwidth of 25.6 GB/s.
  • the dual bus interface 1070A,B comprises a Rambus FlexIO® system interface 1072A,B.
  • the interface is organized into 12 channels each being 8 bits wide, with five paths being inbound and seven outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) between the Cell processor and the I/O bridge 934 via controller 1070A and the Reality Synthesizer graphics unit 930 via controller 1070B.
  • Data sent by the Cell processor 928 to the Reality Synthesizer graphics unit 930 will typically comprise display lists, being a sequence of commands to draw vertices, apply textures to polygons, specify lighting conditions, and so on.
  • Embodiments may include capturing depth data to better identify the real- world user and to direct activity of an avatar or scene.
  • the object can be something the person is holding or can also be the person's hand.
  • the terms "depth camera” and "three-dimensional camera” refer to any camera that is capable of obtaining distance or depth information as well as two-dimensional pixel information.
  • a depth camera can utilize controlled infrared lighting to obtain distance information.
  • Another exemplary depth camera can be a stereo camera pair, which triangulates distance information using two standard cameras.
  • depth sensing device refers to any type of device that is capable of obtaining distance information as well as two- dimensional pixel information.
  • new "depth cameras” provide the ability to capture and map the third-dimension in addition to normal two-dimensional video imagery. With the new depth data, embodiments of the present invention allow the placement of computer-generated objects in various positions within a video scene in real-time, including behind other objects.
  • embodiments of the present invention provide real-time interactive gaming experiences for users.
  • users can interact with various computer-generated objects in real-time.
  • video scenes can be altered in real-time to enhance the user's game experience.
  • computer generated costumes can be inserted over the user's clothing, and computer generated light sources can be utilized to project virtual shadows within a video scene.
  • a depth camera captures two-dimensional data for a plurality of pixels that comprise the video image. These values are color values for the pixels, generally red, green, and blue (RGB) values for each pixel. In this manner, objects captured by the camera appear as two-dimensional objects on a monitor.
  • Embodiments of the present invention also contemplate distributed image processing configurations.
  • the invention is not limited to the captured image and display image processing taking place in one or even two locations, such as in the CPU or in the CPU and one other element.
  • the input image processing can just as readily take place in an associated CPU, processor or device that can perform processing; essentially all of image processing can be distributed throughout the interconnected system.
  • the present invention is not limited to any specific image processing hardware circuitry and/or software.
  • the embodiments described herein are also not limited to any specific combination of general hardware circuitry and/or software, nor to any particular source for the instructions executed by processing components.
  • the invention may employ various computer-implemented operations involving data stored in computer systems. These operations include operations requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.
  • the above described invention may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • the invention can also be embodied as computer readable code on a computer readable medium.
  • the computer readable medium is any data storage device that can store data which can be thereafter read by a computer system, including an electromagnetic wave carrier. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices.
  • the computer readable medium can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
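The Element Interconnect Bus bandwidth figure quoted in the list above follows directly from the stated slot count, per-slot transfer width, and clock rate. The short script below simply reproduces that arithmetic as an illustration; it assumes only the values given in the description (12 participants, 8 bytes per clock cycle, 3.2 GHz) and is not part of the specification itself.

```python
# Illustrative check of the EIB peak-bandwidth figure quoted above.
# The values are those stated in the description, not measurements.

participants = 12                # PPE + memory controller + 2 bus interfaces + 8 SPEs
bytes_per_clock = 8              # each participant can read/write 8 bytes per cycle
clock_hz = 3.2e9                 # 3.2 GHz internal clock

peak_bytes_per_clock = participants * bytes_per_clock          # 96 bytes per clock
peak_bandwidth_gb_s = peak_bytes_per_clock * clock_hz / 1e9    # 307.2 GB/s

print(f"{peak_bytes_per_clock} bytes per clock, "
      f"{peak_bandwidth_gb_s:.1f} GB/s theoretical peak")
```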

Abstract

Methods and systems for executing a network application are provided. The network application is defined to render a virtual environment, and the virtual environment is depicted by computer graphics. The method includes generating an animated user and controlling the animated user in the virtual environment. The method presents advertising objects in the virtual environment and detects actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment. Reactions of the animated user are captured when the animated user is viewing the advertising object. The reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine effectiveness of the advertising object in the virtual environment. Additionally, actual reactions (e.g., physical, audible, or combinations) of the real-world user can be captured and analyzed, or captured and mapped to the avatar for analysis of the avatar response.

Description

VIRTUAL WORLD USER OPINION & RESPONSE
MONITORING
BACKGROUND
Description of the Related Art
[0001] The video game industry has seen many changes over the years. As computing power has expanded, developers of video games have likewise created game software that takes advantage of these increases in computing power. To this end, video game developers have been coding games that incorporate sophisticated operations and mathematics to produce a very realistic game experience.
[0002] Example gaming platforms may be the Sony Playstation or Sony Playstation2 (PS2), each of which is sold in the form of a game console. As is well known, the game console is designed to connect to a monitor (usually a television) and enable user interaction through handheld controllers. The game console is designed with specialized processing hardware, including a CPU, a graphics synthesizer for processing intensive graphics operations, a vector unit for performing geometry transformations, and other glue hardware, firmware, and software. The game console is further designed with an optical disc tray for receiving game compact discs for local play through the game console. Online gaming is also possible, where a user can interactively play against or with other users over the Internet.
[0003] As game complexity continues to intrigue players, game and hardware manufacturers have continued to innovate to enable additional interactivity and computer programs. Some computer programs define virtual worlds. A virtual world is a simulated environment in which users may interact with each other via one or more computer processors. Users may appear on a video screen in the form of representations referred to as avatars. The degree of interaction between the avatars and the simulated environment is implemented by one or more computer applications that govern such interactions as simulated physics, exchange of information between users, and the like. The nature of interactions among users of the virtual world is often limited by the constraints of the system implementing the virtual world.
[0004] It is within this context that embodiments of the invention arise.
SUMMARY OF THE INVENTION
[0005] Broadly speaking, the present invention fills these needs by providing computer generated graphics that depict a virtual world. The virtual world can be traveled, visited, and interacted with using a controller or controlling input of a real-world computer user. The real-world user in essence is playing a video game, in which he controls an avatar (e.g., virtual person) in the virtual environment. In this environment, the real-world user can move the avatar, strike up conversations with other avatars, view content such as advertising, make comments or gestures about content or advertising.
[0006] In one embodiment, a method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment is provided. The method includes generating a graphical animated environment and presenting a viewable object within the graphical animated environment. Further provided is moving an avatar within the graphical animated environment, where the moving includes directing a field of view of the avatar toward the viewable object. A response of the avatar is detected when the field of view of the avatar is directed toward the viewable object. Further included is storing the response, and the response is analyzed to determine whether the response by the avatar is more positive or more negative. In one example, the viewable object is an advertisement.
[0007] In another embodiment, a computer implemented method for executing a network application is provided. The network application is defined to render a virtual environment, and the virtual environment is depicted by computer graphics. The method includes generating an animated user and controlling the animated user in the virtual environment. The method presents advertising objects in the virtual environment and detects actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment. Reactions of the animated user are captured when the animated user is viewing the advertising object. The reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine effectiveness of the advertising object in the virtual environment.
[0008] In one embodiment, a computer implemented method for executing a network application is provided. The network application is defined to render a virtual environment, and the virtual environment is depicted by computer graphics. The method includes generating an animated user and controlling the animated user in the virtual environment. The method presents advertising objects in the virtual environment and detects actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment. Reactions of the animated user are captured when the animated user is viewing the advertising object. The reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine if the reactions are more positive or more negative.
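The summarized method can be pictured as a small pipeline: detect when an animated user's view falls on an advertising object, capture the reaction, classify it coarsely as positive or negative, and aggregate the results for a third party. The sketch below is a minimal, hypothetical illustration of that flow only; the names ReactionRecord, capture_reaction, and report_to_advertiser are invented for illustration and do not appear in the specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReactionRecord:
    """One captured reaction of an animated user toward an advertising object."""
    user_id: str
    ad_id: str
    reaction: str        # e.g. "laugh", "frown", "thumbs_up"
    positive: bool       # coarse positive/negative classification

def capture_reaction(user_id: str, ad_id: str, reaction: str) -> ReactionRecord:
    # Very coarse sentiment mapping; a real system would be far richer.
    positive = reaction in {"laugh", "smile", "thumbs_up", "cool"}
    return ReactionRecord(user_id, ad_id, reaction, positive)

def report_to_advertiser(records: List[ReactionRecord]) -> dict:
    """Aggregate captured reactions so a third party can judge ad effectiveness."""
    summary: dict = {}
    for r in records:
        stats = summary.setdefault(r.ad_id, {"positive": 0, "negative": 0})
        stats["positive" if r.positive else "negative"] += 1
    return summary

# Example: two users reacting to the same advertising object.
records = [
    capture_reaction("user_a", "ad_101", "laugh"),
    capture_reaction("user_b", "ad_101", "frown"),
]
print(report_to_advertiser(records))   # {'ad_101': {'positive': 1, 'negative': 1}}
```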
[0009] Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings.
[0011] Figure 1A illustrates a virtual space, in accordance with one embodiment of the present invention.
[0012] Figure 1B illustrates a user sitting on a chair holding a controller and communicating with a game console, in accordance with one embodiment of the present invention.
[0013] Figures 1C-1D illustrate a location profile for an avatar that is associated with a user of a game in which virtual space interactivity is provided, in accordance with one embodiment of the present invention.
[0014] Figure 2 illustrates a more detailed diagram showing the monitoring of the real-world user for generating feedback, as described with reference to Figure 1A, in accordance with one embodiment of the present invention.
[0015] Figure 3A illustrates an example where a controller and the buttons of the controller can be selected by a real-world user to cause the avatar's response to change, depending on the real-world user's approval or disapproval of an advertisement, in accordance with one embodiment of the present invention. [0016] Figure 3B illustrates operations that may be performed by a program in response to user operation of a controller for generating button responses of the avatar when viewing specific advertisements or objects, or things within the virtual space.
[0017] Figures 4A-4C illustrate other controller buttons that may be selected from the left shoulder buttons and the right shoulder buttons to cause different selections that will express different feedback from an avatar, in accordance with one embodiment of the present invention.
[0018] Figure 5 illustrates an embodiment where a virtual space may include a plurality of virtual world avatars, in accordance with one embodiment of the present invention.
[0019] Figure 6 illustrates a flowchart diagram used to determine when to monitor user feedback, in accordance with one embodiment of the present invention.
[0020] Figure 7A illustrates an example where user A is walking through the virtual space and is entering an active area, in accordance with one embodiment of the present invention.
[0021] Figure 7B shows user A entering the active area, but having the field of view not focused on the screen, in accordance with one embodiment of the present invention.
[0022] Figure 7C illustrates an example where user A is now focused in on a portion of the screen, in accordance with one embodiment of the present invention.
[0023] Figure 7D illustrates an example where a user is fully viewing the screen and is within the active area, in accordance with one embodiment of the present invention.
[0024] Figure 8A illustrates an embodiment where users within a virtual room may be prompted to vote as to their likes or dislikes, regarding a specific ad, in accordance with one embodiment of the present invention.
[0025] Figure 8B shows users moving to different parts of the room to signal their approval or disapproval, or likes or dislikes, in accordance with one embodiment of the present invention.
[0026] Figure 8C shows an example of users voting YES or NO by raising their left or right hands, in accordance with one embodiment of the present invention. [0027] Figure 9 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console having controllers for implementing an avatar control system in accordance with one embodiment of the present invention.
[0028] Figure 10 is a schematic of the Cell processor in accordance with one embodiment of the present invention.
DETAILED DESCRIPTION
[0029] Embodiments of computer generated graphics that depict a virtual world are provided. The virtual world can be traveled, visited, and interacted with using a controller or controlling input of a real-world computer user. The real-world user in essence is playing a video game, in which he controls an avatar (e.g., virtual person) in the virtual environment. In this environment, the real-world user can move the avatar, strike up conversations with other avatars, view content such as advertising, make comments or gestures about content or advertising.
[0030] In one embodiment, program instructions and processing are performed to monitor any comments, gestures, or interactions with objects of the virtual world. In another embodiment, monitoring is performed upon obtaining permission from users, so that users have control of whether their actions are tracked. In still another embodiment, if the user's actions are tracked, the user's experience in the virtual world may be enhanced, as the display and rendering of data for the user is more tailored to the user's likes and dislikes. In still another embodiment, advertisers will learn what specific users like, and their advertising can be adjusted for specific users or for types of users (e.g., teenagers, young adults, kids (using kid-rated environments), adults, and other demographics, types or classes).
[0031] The information of user response to specific ads can also be provided directly to advertisers, game developers, logic engines, and suggestion engines. In this manner, advertisers will have a better handle on customer likes and dislikes, and may be better suited to provide types of ads to specific users, and game/environment developers and owners can apply correct charges to advertisers based on use by users, selection by users, activity by users, reaction by users, viewing by users, etc. [0032] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some or all of these specific details. In other instances, well known process steps have not been described in detail in order not to obscure the present invention.
[0033] According to an embodiment of the present invention users may interact with a virtual world. As used herein the term virtual world means a representation of a real or fictitious environment having rules of interaction simulated by means of one or more processors that a real user may perceive via one or more display devices and/or may interact with via one or more user interfaces. As used herein, the term user interface refers to a real device by which a user may send inputs to or receive outputs from the virtual world. The virtual world may be simulated by one or more processor modules. Multiple processor modules may be linked together via a network. The user may interact with the virtual world via a user interface device that can communicate with the processor modules and other user interface devices via a network. Certain aspects of the virtual world may be presented to the user in graphical form on a graphical display such as a computer monitor, television monitor or similar display. Certain other aspects of the virtual world may be presented to the user in audible form on a speaker, which may be associated with the graphical display.
[0034] Within the virtual world, users may be represented by avatars. Each avatar within the virtual world may be uniquely associated with a different user. Optionally, the name or pseudonym of a user may be displayed next to the avatar so that users may readily identify each other. A particular user's interactions with the virtual world may be represented by one or more corresponding actions of the avatar. Different users may interact with each other in the public space via their avatars. An avatar representing a user could have an appearance similar to that of a person, an animal or an object. An avatar in the form of a person may have the same gender as the user or a different gender. The avatar may be shown on the display so that the user can see the avatar along with other objects in the virtual world.
[0035] Alternatively, the display may show the world from the point of view of the avatar without showing the avatar itself. The user's (or avatar's) perspective on the virtual world may be thought of as being the view of a virtual camera. As used herein, a virtual camera refers to a point of view within the virtual world that may be used for rendering two-dimensional images of a 3D scene within the virtual world. Users may interact with each other through their avatars by means of the chat channels associated with each lobby. Users may enter text for chat with other users via their user interface. The text may then appear over or next to the user's avatar, e.g., in the form of comic-book style dialogue bubbles, sometimes referred to as chat bubbles. Such chat may be facilitated by the use of a canned phrase chat system sometimes referred to as quick chat. With quick chat, a user may select one or more chat phrases from a menu.
[0036] In embodiments of the present invention, the public spaces are public in the sense that they are not uniquely associated with any particular user or group of users and no user or group of users can exclude another user from the public space. Each private space, by contrast, is associated with a particular user from among a plurality of users. A private space is private in the sense that the particular user associated with the private space may restrict access to the private space by other users. The private spaces may take on the appearance of familiar private real estate.
[0037] Moving the avatar representation of user A 102b about the conceptual virtual space can be dictated by a user moving a controller of a game console and dictating movements of the avatar in different directions so as to virtually enter the various spaces of the conceptual virtual space. For more information, reference may be made to: (1) United Kingdom patent application no. 0703974.6 entitled "ENTERTAINMENT DEVICE", filed on March 1, 2007; (2) United Kingdom patent application no. 0704225.2 entitled "ENTERTAINMENT DEVICE AND METHOD", filed on March 5, 2007; (3) United Kingdom patent application no. 0704235.1 entitled "ENTERTAINMENT DEVICE AND METHOD", filed on March 5, 2007; (4) United Kingdom patent application no. 0704227.8 entitled "ENTERTAINMENT DEVICE AND METHOD", filed on March 5, 2007; and (5) United Kingdom patent application no. 0704246.8 entitled "ENTERTAINMENT DEVICE AND METHOD", filed on March 5, 2007, each of which is herein incorporated by reference.
[0038] Figure IA illustrates a virtual space 100, in accordance with one embodiment of the present invention. The virtual space 100 is one where avatars are able to roam, congregate, and interact with one another and/or objects of a virtual space. Avatars are virtual world animated characters that represent or are correlated to a real-world user which may be playing an interactive game. The interactive game may require the real-world user to move his or her avatar about the virtual space 100 so as to enable interaction with other avatars (controlled by other real-world users or computer generated avatars), that may be present in selected spaces within the virtual space 100.
[0039] The virtual space 100 shown in Figure 1A shows the avatars user A 102b and user B 104b. User A 102b is shown having a user A field of view 102b' while user B 104b is shown having a user B field of view 104b'. In the example shown, user A and user B are in the virtual space 100, and focused on an advertisement 101. Advertisement 101 may include a model person that is holding up a particular advertising item (e.g., product, item, object, or other image of a product or service), and is displaying the advertising object in an animated video, still picture, or combinations thereof. In this example, advertisement 101 may portray a sexy model as model 101a, so as to attract users that may be roaming or traveling across the virtual space 100. Other techniques for attracting avatar users, as is commonly done in advertising, may also be included as part of this embodiment.
[0040] Still further, model 101a could be animated and could move about a screen within the virtual space 100 or can jump out of the virtual screen and join the avatars. As the model 101a moves about in the advertisement, user A and user B are shown tracking their viewing toward this particular advertisement 101. It should be noted that the field of view of each of the users (avatars) can be tracked by a program executed by a computing system so as to determine where within the virtual space 100 the users are viewing, for what duration, what their gestures might be while viewing the advertisement 101, etc. Operation 106 is shown where processing is performed to determine whether users (e.g., avatars) are viewing advertisements via the avatars that define user A 102b and user B 104b. Operation 108 illustrates processing performed to detect and monitor real-world user feedback 108a and to monitor user controlled avatar feedback 108b.
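One straightforward way to decide whether an avatar's field of view covers an advertisement, consistent with the tracking described above, is to compare the avatar's facing direction with the direction from the avatar to the advertisement and apply an angular threshold. The sketch below is one possible approach under that assumption; the 2D vector math, the function name is_viewing, and the 30-degree half-angle are illustrative choices, not details taken from the specification.

```python
import math

def is_viewing(avatar_pos, avatar_facing, ad_pos, half_angle_deg=30.0):
    """Return True if the ad lies within the avatar's assumed field of view.

    avatar_pos, ad_pos: (x, y) positions in the virtual space.
    avatar_facing: unit-length (x, y) direction the avatar is looking.
    half_angle_deg: assumed half-angle of the field of view.
    """
    to_ad = (ad_pos[0] - avatar_pos[0], ad_pos[1] - avatar_pos[1])
    dist = math.hypot(to_ad[0], to_ad[1])
    if dist == 0:
        return True
    to_ad = (to_ad[0] / dist, to_ad[1] / dist)
    # Cosine of the angle between the facing vector and the ad direction.
    cos_angle = avatar_facing[0] * to_ad[0] + avatar_facing[1] * to_ad[1]
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# Example: avatar at the origin facing +x, advertisement slightly off-axis.
print(is_viewing((0, 0), (1, 0), (10, 3)))   # True: within the 30-degree half-angle
print(is_viewing((0, 0), (1, 0), (0, 10)))   # False: directly to the side
```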
[0041] Additionally, real-world users can select specific keys on controllers so as to illustrate their approval or disapproval in a graphical form using user-prompted thumbs up or user-prompted thumbs down 108c. The real-world response of a real-world user 102a playing a game can be monitored in a number of ways. The real-world user may be holding a controller while viewing a display screen. The display screen may provide two views. [0042] One view may be from the standpoint of the user avatar and the avatar's field of view, while the other view may be from a perspective behind the avatar, as if the real-world user were floating behind the avatar (such as the view of the avatar from the avatar's backside).
[0043] In one embodiment, if the real-world user frowns, becomes excited, or makes facial expressions, these gestures, comments and/or facial expressions may be tracked by a camera that is part of a real-world system. The real-world system may be connected to a computing device (e.g., such as a game console, or general computer(s)), and a camera that is interfaced with the game console. Based on the user's reaction to the game, or the content being viewed by the avatars being controlled by the real-world user, the camera in the real-world environment will track the real-world user's 102a facial expressions, sounds, frowns, or general excitement or non-excitement during the experience. The experience may be that of the avatar that is moving about the virtual space 100 and as viewed by the user in the real world.
[0044] In this embodiment, a process may be executed to collect real-world and avatar responses. If the real-world user 102a makes a gesture that is recognized by the camera, that gesture will be mapped to the face of the avatar. Consequently, real-world user facial expressions, movements, and actions, if tracked, can be interpreted and assigned to particular aspects of the avatar. If the real-world user laughs, the avatar laughs; if the real-world user jumps, the avatar jumps; if the real-world user gets angry, the avatar gets angry; if the real-world user moves a body part, the avatar moves a body part. Thus, in this embodiment, it is not necessary for the user to interface with a controller; the real-world user, by simply moving, reacting, etc., can cause the avatar to do the same as the avatar moves about the virtual spaces.
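The mapping just described, from recognized real-world gestures to matching avatar behavior, can be thought of as a simple lookup from a detected expression label to an avatar animation. The fragment below is a minimal sketch of that idea; the expression labels, the EXPRESSION_TO_ANIMATION table, and the trigger_animation callback are hypothetical stand-ins for whatever the camera-based recognizer and animation system actually provide.

```python
# Hypothetical mapping from expressions recognized by the real-world camera
# to animations played back on the user's avatar.
EXPRESSION_TO_ANIMATION = {
    "laugh": "avatar_laugh",
    "smile": "avatar_smile",
    "frown": "avatar_frown",
    "jump":  "avatar_jump",
    "angry": "avatar_angry",
}

def mirror_expression(detected_expression, trigger_animation):
    """Mirror a recognized real-world expression onto the avatar.

    detected_expression: label produced by the (assumed) camera recognizer.
    trigger_animation: callback into the (assumed) avatar animation system.
    """
    animation = EXPRESSION_TO_ANIMATION.get(detected_expression)
    if animation is not None:
        trigger_animation(animation)

# Example with a stand-in animation callback.
mirror_expression("laugh", lambda name: print(f"playing {name}"))
```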
[0045] In another embodiment, monitoring of user-controlled avatar feedback 108b may be performed. In this embodiment, depending on the real-world user's enjoyment or non-enjoyment of a particular advertisement, object, or sensory response when roaming or traveling throughout the virtual space 100, the real-world user can decide to select certain buttons on the controller to cause the avatar to display a response. As shown in 108b, the real-world user may select a button to cause the avatar to laugh, frown, look disgusted, or generally produce a facial and/or body response. Depending on the facial and/or body response that is generated by the avatar or the real-world responses captured of the real-world user, this information can be fed back for analysis in operation 110. In one embodiment, users will be asked to approve monitoring of their response, and if monitored, their experience may be enhanced, as the program and computing system(s) can provide an environment that shapes itself to the likes, dislikes, etc. of the specific users or types of users.
[0046] In one embodiment, the analysis is performed by a computing system(s) (e.g., networked, local or over the internet), and controlled by software that is capable of determining the button(s) selected by the user and the visual avatar responses, or the responses captured by the camera or microphone of the user in the real world.
Consequently, if the avatars are spending a substantial amount of time in front of particular advertisements in the virtual space 100, the amount of time spent by the avatars (as controlled by real-world users) and the field of view captured by the avatars in the virtual space are tracked. This tracking yields information regarding the user's response to the particular advertisements, objects, motion pictures, or still pictures that may be provided within the virtual space 100. This information being tracked is then stored and organized so that future accesses to this database can be made for data analysis. Operation 112 is an optional operation that allows profile analysis to be accessed by the computing system in addition to the analysis obtained from the feedback in operation 110.
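Tracking how long an avatar keeps an advertisement in view amounts to recording view-start and view-end events and accumulating the difference per user and advertisement. The class below is a hypothetical sketch of that bookkeeping; the class name AdDwellTracker, its method names, and the in-memory storage format are all illustrative assumptions rather than the patent's actual data structures.

```python
import time
from collections import defaultdict

class AdDwellTracker:
    """Accumulates how long each avatar keeps an advertisement in view."""

    def __init__(self):
        self._started = {}                    # (user_id, ad_id) -> start time
        self._totals = defaultdict(float)     # (user_id, ad_id) -> seconds viewed

    def view_started(self, user_id, ad_id, now=None):
        self._started[(user_id, ad_id)] = now if now is not None else time.time()

    def view_ended(self, user_id, ad_id, now=None):
        start = self._started.pop((user_id, ad_id), None)
        if start is not None:
            end = now if now is not None else time.time()
            self._totals[(user_id, ad_id)] += end - start

    def total_seconds(self, user_id, ad_id):
        return self._totals[(user_id, ad_id)]

# Example with explicit timestamps so the output is deterministic.
tracker = AdDwellTracker()
tracker.view_started("user_a", "ad_101", now=0.0)
tracker.view_ended("user_a", "ad_101", now=12.5)
print(tracker.total_seconds("user_a", "ad_101"))   # 12.5 seconds in view
```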
[0047] Profile analysis 112 is an operation that allows the computing system to determine pre-defined likes, dislikes, geographic locations, languages, and other attributes of a particular user that may be visiting a virtual space 100. In this manner, in addition to monitoring what the avatars are looking at, their reaction and feedback, this additional information can be profiled and stored in a database so that data mining can be done and associated with the specific avatars viewing the content.
[0048] For instance, if certain ads within a virtual space are only viewed by users between the ages of 15 and 29, this information may be useful as an age demographic for particular ads and thus would allow advertisers to re-shape their ads or emphasize their ads for specific age demographics that visit particular spaces. Other demographics and profile information can also be useful to properly tailor advertisements within the virtual space, depending on the users visiting those types of spaces. Thus, based on the analyzed feedback 110 and the profile analysis (which is optional) in operation 112, the information that is gathered can be provided to advertisers and operators of the virtual space in 114.
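The demographic breakdown described above reduces to counting views of each advertisement per profile bucket. The sketch below illustrates one possible aggregation, assuming opted-in profile data; the age bands, the profiles dictionary layout, and the function names are invented for the example and do not come from the specification.

```python
from collections import Counter

def age_band(age):
    """Bucket a user's age into coarse demographic bands like those mentioned above."""
    if age < 13:
        return "kids"
    if age < 20:
        return "teenagers"
    if age < 30:
        return "young adults"
    return "adults"

def demographics_for_ad(view_events, profiles):
    """Count views of an ad per demographic band.

    view_events: list of user ids who viewed the ad.
    profiles: dict of user id -> {"age": ...} drawn from opted-in profile data.
    Both structures are illustrative assumptions, not the patent's format.
    """
    return Counter(age_band(profiles[user]["age"])
                   for user in view_events if user in profiles)

# Example: an ad viewed mostly by users in the 15-29 range discussed above.
profiles = {"a": {"age": 17}, "b": {"age": 24}, "c": {"age": 45}}
print(demographics_for_ad(["a", "b", "b", "c"], profiles))
# Counter({'young adults': 2, 'teenagers': 1, 'adults': 1})
```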
[0049] Figure 1B illustrates user 102a sitting on a chair holding a controller and communicating with a game console, in accordance with one embodiment of the present invention. User 102a, being the real-world user, will have the option of viewing his monitor and the images displayed on the monitor, in two modes. One mode may be from the eyes of the real-world user, showing the backside of user A 102b, which is the avatar. The user A 102b avatar has a field of view 102b' while user B has a field of view 104b'.
[0050] If the user 102a, being the real-world user, wishes to have the view from the eyes of the virtual user (i.e., the avatar), the view will be a closer-up view showing what is within the field of view 102b' of the avatar 102b. Thus, the screen 101 will be a magnification of the model 101a which more clearly shows the view from the eyes of the avatar controlled by the real-world user. The real-world user can then switch between mode A (from behind the avatar), or mode B (from the eyes of the virtual user avatar). In either embodiment, the gestures of the avatar as controlled by the real-world user will be tracked as well as the field of view and position of the eyes/head of the avatar within the virtual space 100.
[0051] Figure 1C illustrates a location profile for an avatar that is associated with a user of a game in which virtual space interactivity is provided. In order to narrow down the location in which the user wishes to interact, a selection menu may be provided to allow the user to select a profile that will better define the user's interests and the types of locations and spaces that may be available to the user. For example, the user may be provided with a location menu 116. Location menu 116 may be provided with a directory of countries that may be itemized in alphabetical order.
[0052] The user would then select a particular country, such as Japan, and the user would then be provided a location sub-menu 118. Location sub-menu 118 may ask the user to define a state 118a, a province 118b, a region 118c, or a prefecture 118d, depending on the location selected. If the country selected was Japan, Japan is divided into prefectures 118d that represent a type of state within the country of Japan. Then, the user would be provided with a selection of cities 120. [0053] Once the user has selected a particular city within a prefecture, such as Tokyo, Japan, the user would be provided with further menus to zero in on locations and virtual spaces that may be applicable to the user. Figure 1D illustrates a personal profile for the user and the avatar that would be representing the user in the virtual space. In this example, a personal profile menu 122 is provided. The personal profile menu 122 will list a plurality of options for the user to select based on the types of social definitions associated with the personal profile defined by the user. For example, the social profile may include sports teams, sports e-play, entertainment, and other sub-categories within the social selection criteria. Further shown is a sub-menu 124 that may be selected when a user selects a professional men's sports team, and additional sub-menus 126 that may define further aspects of motor sports.
[0054] Further illustrated are examples to allow a user to select a religion, sexual orientation, or political preference. The examples illustrated in the personal profile menu 122 are only exemplary, and it should be understood that the granularity and variations of the profile selection menu contents may change depending on the country selected for the user using the location menu 116 of Figure 1C, the sub-menus 118, and the city selector 120. In one embodiment, certain categories may be partially or completely filled based on the location profile defined by the user. For example, the Japanese location selection could load a plurality of baseball teams in the sports section that may include Japanese league teams (e.g., Nippon Baseball League) as opposed to U.S. based Major League Baseball (MLB™) teams.
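The location-dependent pre-population just described can be illustrated as a menu builder that starts from generic categories and fills some of them from a country-keyed table. The snippet below is a minimal sketch under that assumption; the category names, the BASEBALL_LEAGUES table, and build_personal_profile_menu are invented examples, not the patent's data.

```python
# Minimal sketch of location-dependent menu generation, using invented data.
BASEBALL_LEAGUES = {
    "Japan":         ["Nippon Baseball League teams"],
    "United States": ["Major League Baseball teams"],
}

DEFAULT_CATEGORIES = ["Sports teams", "Sports e-play", "Entertainment",
                      "Religion", "Politics"]

def build_personal_profile_menu(country):
    """Return personal-profile categories, partially pre-filled by location."""
    menu = {category: [] for category in DEFAULT_CATEGORIES}
    # Pre-populate the sports section with leagues local to the selected country.
    menu["Sports teams"] = BASEBALL_LEAGUES.get(country, [])
    return menu

print(build_personal_profile_menu("Japan")["Sports teams"])
# ['Nippon Baseball League teams']
```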
[0055] Similarly, other categories such as local religions, politics, politicians, may be partially generated in the personal profile selection menu 122 based on the user's prior location selection in Figure 1C. Accordingly, the personal profile menu 122 is a dynamic menu that is generated and is displayed to the user with specific reference to the selections of the user in relation to where the user is located on the planet. Once the avatar selections have been made for the location profile in Figure 1C and the personal profile in Figure 1D, the user controlling his or her avatar can roam around, visit, enter, and interact with objects and people within the virtual world. In addition to visiting real-world counterparts in the virtual world, it is also possible that categories of make-believe worlds can be visited. Thus, profiles and selections may be for any form, type, world, or preference, and the example profile selector shall not limit the possibilities in profiles or selections. [0056] Figure 2 illustrates a more detailed diagram showing the monitoring of the real-world user for generating feedback, as described with reference to 108a in Figure 1A. The real-world user 102a may be sitting on a chair holding a controller 208. The controller 208 can be wireless or wired to a computing device. The computing device can be a game console 200 or a general computer. The console 200 is capable of connecting to a network. The network may be a wide-area-network (or internet) which would allow some or all processing to be performed by a program running over the network.
[0057] In one embodiment, one or more servers will execute a game program that will render objects, users, animation, sounds, shading, textures, and other user interfaced reactions, or captures based on processing as performed on networked computers. In this example, user 102a holds controller 208, and movements of the controller and its buttons are captured in operation 214. The movements of the arms, hands, and buttons are captured, so as to capture motion of the controller 208, button presses on the controller 208, and six-axis rotation of the controller 208. Example six-axis positional monitoring may be done using an inertial monitor. Additionally, controller 208 may be captured in terms of sound by a microphone 202, or by position, lighting, or other input feedback by a camera 204. Display 206 will render a display showing the virtual user avatar traversing the virtual world and the scenes of the virtual world, as controlled by user 102a. Operation 212 is configured to capture sound, which is processed to identify particular words spoken by user 102a.
[0058] For instance, if user 102a becomes excited or sad, or makes specific utterances while the avatar being controlled traverses the virtual space 100, the microphone 202 will be configured to capture that information so that the sound may be processed for identifying particular words or information. Voice recognition may also be performed to determine what is said in particular spaces, if users authorize capture. As noted, camera 204 will capture the gestures by the user 102a, movements by controller 208, facial expressions by user 102a, and general feedback excitement or non-excitement.
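Once an authorized voice-recognition step has produced a transcript, scanning it for reaction keywords is a simple set-membership check. The sketch below illustrates only that final step; it assumes the transcription already exists, and the word lists and the classify_utterance function are invented examples rather than anything specified in the patent.

```python
# Hypothetical keyword scan over captured (and user-authorized) speech.
POSITIVE_WORDS = {"cool", "awesome", "great", "love"}
NEGATIVE_WORDS = {"boring", "hate", "ugly", "meh"}

def classify_utterance(transcript):
    """Return 'positive', 'negative', or 'neutral' for a transcribed utterance."""
    words = {word.strip(".,!?").lower() for word in transcript.split()}
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

print(classify_utterance("Wow, that ad is cool!"))   # positive
print(classify_utterance("Meh."))                    # negative
```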
[0059] Camera 204 will then provide capture of facial expressions, body language, and other information in operation 210. All of this information that is captured in operations 210, 212, and 214 can be provided as feedback for analysis 110, as described with reference to Figure 1A. The aggregated user opinion is processed in operation 218 that will organize and compartmentalize the aggregated responses by all users that may be traveling the virtual space and viewing the various advertisements within the virtual spaces that they enter. This information can then be parsed and provided in operation 220 to advertisers and operators of the virtual space.
[0060] This information can provide guidance to the advertisers as to who is viewing the advertisements, how long they viewed the advertisements, the gestures made in front of the advertisements, and the gestures made about the advertisements, and will also provide operators of the virtual space metrics by which to possibly charge for the advertisements within the virtual spaces, depending on their popularity, views by particular users, and the like.
[0061] Figure 3A illustrates an example where controller 208 and the buttons of the controller 208 can be selected by a real-world user to cause the avatar's response 300 to change, depending on the real-world user's approval or disapproval of advertisement 100. Thus, user controlled avatar feedback is monitored, depending on whether the avatar is viewing the advertisement 100 and the specific buttons selected on controller 208 when the avatar is focusing on the advertisement 100. In this example, the real-world user using controller 208 (not shown) could select R1 so that the avatar response 300 of the user's avatar is a laugh (HA HA HA!).
[0062] As shown, other controller buttons such as left shoulder buttons 208a and right shoulder buttons 208b can be used for similar controls of the avatar. For instance, the user can select L2 to smile, L1 to frown, R2 to roll eyes, and other buttons for producing other avatar responses 300. Figure 3B illustrates operations that may be performed by a program in response to user operation of a controller 208 for generating button responses of the avatar when viewing specific advertisements 100 or objects, or things within the virtual space. Operation 302 defines buttons that are mapped to avatar facial response to enable players to modify avatar expressions.
[0063] The various buttons on a controller can be mapped to different responses, and the specific buttons and the specific responses by the avatar can be changed, depending on the user preferences of buttons, or dependent on user preferences. Operation 304 defines detection of when a player (or a user) views a specific ad and the computing device would recognize that the user is viewing that specific ad. As noted above, the avatar will have a field of view, and that field of view can be monitored, depending on where the avatar is looking within the virtual space.
[0064] As the computing system (and computer program controlling the computer system) is constantly monitoring the field of view of the avatars within the virtual space, it is possible to determine when the avatar is viewing specific ads. If a user is viewing a specific ad that should be monitored for facial responses, operation 306 will define monitoring of the avatar for changes in avatar facial expressions. If the user selects for the avatar to laugh at specific ads, this information will be taken in by the computing system and stored to define how specific avatars responded to specific ads.
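A button-to-expression table of the kind described above (e.g., R1 mapped to a laugh, L2 to a smile, L1 to a frown, R2 to rolled eyes) can be combined with the ad-viewing check so that an expression is both played on the avatar and logged when an advertisement is in view. The dispatch function, callbacks, and dictionary below are a hedged sketch of that idea; only the example button assignments come from the text, and everything else is an assumed name.

```python
# Button-to-expression mapping following the examples given above.
BUTTON_TO_EXPRESSION = {
    "R1": "laugh",
    "L2": "smile",
    "L1": "frown",
    "R2": "roll_eyes",
}

def on_button_pressed(button, avatar_is_viewing_ad, record_response, play_expression):
    """Play the mapped expression and, if an ad is in view, log the response."""
    expression = BUTTON_TO_EXPRESSION.get(button)
    if expression is None:
        return
    play_expression(expression)
    if avatar_is_viewing_ad:
        record_response(expression)

# Example with stand-in callbacks.
on_button_pressed("R1", True,
                  record_response=lambda e: print(f"logged response: {e}"),
                  play_expression=lambda e: print(f"avatar plays: {e}"))
```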
[0065] Thus, operations 307 and 308 will be continuously analyzed to determine all changes and facial expressions, and the information is fed back for analysis. The analysis performed on all the facial expressions can be off-line or in real time. This information can then be passed back to the system and then populated to specific advertisers to enable data mining, and re-tailoring of specific ads in response to their performance in the virtual space.
[0066] Figure 4A illustrates other controller 208 buttons that may be selected from the left shoulder buttons 208a and the right shoulder buttons 208b to cause different selections that will express different feedback from an avatar. For instance, a user can scroll through the various sayings and then select the sayings that they desire by pushing button R1. The user can also select through different emoticons to select a specific emoticon and then select button R2.
[0067] The user can also select the various gestures and then select the specific gesture using L1, and the user can select different animations and then select button L2. Figure 4B illustrates the avatar controlled by a real-world user who selects the saying "COOL" and then selects button R1. The real-world user can also select button R2 in addition to R1 to provide an emoticon together with a saying. The result is the avatar will smile and say "COOL". The avatar saying "cool" can be displayed using a cloud or it could be output by the computer by a sound output. Figure 4C illustrates avatar 400a where the real-world user selected button R1 to select "MEH" plus L1 to illustrate a hand gesture. The result will be avatar 400b and 400c where the avatar is saying "MEH" in a cloud and is moving his hand to signal a MEH expression. The expression of MEH is an expression of indifference or lack of interest.
[0068] Thus, if the avatar is viewing a specific advertisement within the virtual space and disapproves or is basically indifferent of the content, the avatar can signal with a MEH and a hand gesture. Each of these expressions, whether they are sayings, emoticons, gestures, animations, and the like, are tracked if the user is viewing a specific advertisement and such information is captured so that the data can be provided to advertisers or the virtual world creators and/or operators.
[0069] Figure 5 illustrates an embodiment where a virtual space 500 may include a plurality of virtual world avatars. Each of the virtual world avatars will have their own specific field of view, and what they are viewing is tracked by the system. If the avatars are shown having discussions amongst themselves, that information is tracked to show that they are not viewing a specific advertisement, object, picture, trailer, or digital data that may be presented within the virtual space.
[0070] In one embodiment, a plurality of avatars are shown viewing a motion picture within a theater. Some avatars are not viewing the picture and thus, would not be tracked to determine their facial expressions. Users controlling their avatars can then move about the space and enter into locations where they may or may not be viewing a specific advertisement. Consequently, the viewer's motions, travels, field of views, and interactions can be monitored to determine whether the users are actively viewing advertisements, objects, or interacting with one another.
[0071] Figure 6 illustrates a flowchart diagram used to determine when to monitor user feedback. In the virtual space, an active area needs to be defined. The active area can be defined as an area where avatar feedback is monitored. The active area can be sized based on advertisement size or ad placement, active areas can be overlapped, and the like. Once an active area is monitored for the field of view of the avatar, the viewing by the avatar can be logged as to how long the avatar views the advertisement, how long it spends in a particular area in front of an advertisement, and the gestures made by the avatar when viewing the advertisement. [0072] Operation 600 shows a decision where avatar users are determined to be in or out of an active area. If the users are not in the active area, then operation 602 is performed where nothing is tracked of the avatar. If the user is in the active area, then that avatar user is tracked to determine if the user's field of view is focusing on an advertisement within the space in operation 604. If the user is not focusing on any advertisement or object that should be tracked in operation 604, then the method moves to operation 608.
[0073] In operation 608, the system will continue monitoring the field of view of the user. The method will continuously move to operation 610 where it is determined whether the user's field of view is now on the ad. If it is not, the method moves back to operation 608. This loop will continue until, in operation 610 it is determined that the user is viewing the ad, and the user is within the active area. At that point, operation 606 will monitor feedback capture of the avatar when the avatar is within the active area, and the avatar has his or her field of view focused on the ad.
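The decision flow just described can be paraphrased as: capture feedback only while the avatar is inside an active area and its field of view rests on the advertisement. The loop below is a minimal sketch of that logic; the avatar_state dictionary, the boolean flags, and the capture_feedback callback are assumptions standing in for whatever the environment actually supplies.

```python
def should_monitor(in_active_area, viewing_ad):
    """Monitor feedback only when the avatar is inside an active area
    and its field of view is on the advertisement."""
    return in_active_area and viewing_ad

def monitoring_step(avatar_state, capture_feedback):
    """One tick of the monitoring loop; avatar_state is an assumed dict with
    'in_active_area' and 'viewing_ad' booleans supplied by the environment."""
    if should_monitor(avatar_state["in_active_area"], avatar_state["viewing_ad"]):
        capture_feedback(avatar_state)

# Example ticks: only the second state triggers feedback capture.
states = [
    {"in_active_area": True,  "viewing_ad": False},
    {"in_active_area": True,  "viewing_ad": True},
    {"in_active_area": False, "viewing_ad": True},
]
for state in states:
    monitoring_step(state, capture_feedback=lambda s: print("capturing feedback", s))
```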
[0074] Figure 7A illustrates an example where user A 704 is walking through the virtual space and is entering an active area 700. Active area 700 may be a specific room, a location within a room, or a specific space within the virtual space. As shown, user A 704 is walking in a direction of the active area 700 where three avatar users are viewing a screen or ad 702. The three users already viewing the screen 702 will attract others because they are already in the active area 700, and their field of view is focused on the screen that may be running an interesting ad for a service or product.
[0075] Figure 7B shows user A 704 entering the active area 700, but having the field of view 706 not focused on the screen 702. Thus, the system will not monitor the facial expressions or bodily expressions, or verbal expressions by the avatar 704, because the avatar 704 is not focused in on the screen 702 where an ad may be running and user feedback of his expressions would be desired.
[0076] Figure 7C illustrates an example where user A 704 is now focused in on a portion of the screen 710. The field of view 706 shows the user A's 704 field of view only focusing in on half of the screen 702. If the ad content is located in area 710, then the facial expressions and feedback provided by user A 704 will be captured. However, if the advertisement content is on the screen 702 in an area not covered by his field of view, that facial expression and feedback will not be monitored.
[0077] Figure 7D illustrates an example where user 704 is fully viewing the screen 702 and is within the active area 700. Thus, the system will continue monitoring the feedback from user A and will only discontinue feedback monitoring of user A when user A leaves the active area. In another embodiment, monitoring of user A's feedback can be discontinued if the particular advertisement shown on the screen 702 ends, or is no longer in session.
[0078] Figure 8A illustrates an embodiment where users within a virtual room may be prompted to vote as to their likes or dislikes, regarding a specific ad 101. In this example, users 102b and 104b may move onto either a YES area or a NO area. User 102b' is now standing on YES and user 104b' is now standing on NO. This feedback is monitored, and is easily captured, as users can simply move to different locations within a scene to display their approval, disapproval, likes, dislikes, etc. Similarly, Figure 8B shows users moving to different parts of the room to signal their approval or disapproval, or likes or dislikes. As shown, users 800 and 802 are already in the YES side of the room, while users 804 and 806 are in the NO side of the room. User 808 is shown changing his mind or simply moving to the YES side of the room. Figure 8C shows an example of users 800-808 voting YES or NO by raising their left or right hands. These parts of the user avatar bodies can be moved by simply selecting the correct controller buttons (e.g., L1, R1, etc.).
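Counting the votes described above, when they are expressed by standing in a YES or NO region of the room, reduces to a point-in-region test over avatar positions. The sketch below is one possible tally under the assumption that the regions are simple axis-aligned rectangles; the rectangle representation and the tally_votes function are illustrative, not part of the specification.

```python
from collections import Counter

def tally_votes(avatar_positions, yes_area, no_area):
    """Count YES/NO votes from where avatars are standing.

    avatar_positions: dict of user id -> (x, y) position in the room.
    yes_area / no_area: axis-aligned rectangles ((xmin, ymin), (xmax, ymax)).
    """
    def inside(pos, area):
        (xmin, ymin), (xmax, ymax) = area
        return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

    votes = Counter()
    for user, pos in avatar_positions.items():
        if inside(pos, yes_area):
            votes["YES"] += 1
        elif inside(pos, no_area):
            votes["NO"] += 1
    return votes

# Example: three avatars on the YES side, two on the NO side.
positions = {"u1": (1, 1), "u2": (2, 1), "u3": (1, 3), "u4": (8, 1), "u5": (9, 2)}
print(tally_votes(positions, yes_area=((0, 0), (4, 4)), no_area=((6, 0), (10, 4))))
# Counter({'YES': 3, 'NO': 2})
```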
[0079] In one embodiment, the virtual world program may be executed partially on a server connected to the internet and partially on the local computer (e.g., game console, desktop, laptop, or wireless hand held device). Still further, the execution can be entirely on a remote server or processing machine, which provides the execution results to the local display screen. In this case, the local display or system should have minimal processing capabilities to receive the data over the network (e.g., like the Internet) and render the graphical data on the screen.
[0080] Figure 9 schematically illustrates the overall system architecture of the Sony® Playstation 3® entertainment device, a console having controllers for implementing an avatar control system in accordance with one embodiment of the present invention. A system unit 900 is provided, with various peripheral devices connectable to the system unit 900. The system unit 900 comprises: a Cell processor 928; a Rambus® dynamic random access memory (XDRAM) unit 926; a Reality Synthesizer graphics unit 930 with a dedicated video random access memory (VRAM) unit 932; and an I/O bridge 934. The system unit 900 also comprises a Blu Ray® Disk BD-ROM® optical disk reader 940 for reading from a disk 940a and a removable slot-in hard disk drive (HDD) 936, accessible through the I/O bridge 934. Optionally the system unit 900 also comprises a memory card reader 938 for reading compact flash memory cards, Memory Stick® memory cards and the like, which is similarly accessible through the I/O bridge 934. [0081] The I/O bridge 934 also connects to six Universal Serial Bus (USB) 2.0 ports 924; a gigabit Ethernet port 922; an IEEE 802.11b/g wireless network (Wi-Fi) port 920; and a Bluetooth® wireless link port 918 capable of supporting up to seven Bluetooth connections.
[0082] In operation the I/O bridge 934 handles all wireless, USB and Ethernet data, including data from one or more game controllers 902. For example when a user is playing a game, the I/O bridge 934 receives data from the game controller 902 via a Bluetooth link and directs it to the Cell processor 928, which updates the current state of the game accordingly.
[0083] The wireless, USB and Ethernet ports also provide connectivity for other peripheral devices in addition to game controllers 902, such as: a remote control 904; a keyboard 906; a mouse 908; a portable entertainment device 910 such as a Sony Playstation Portable® entertainment device; a video camera such as an EyeToy® video camera 912; and a microphone headset 914. Such peripheral devices may therefore in principle be connected to the system unit 900 wirelessly; for example the portable entertainment device 910 may communicate via a Wi-Fi ad-hoc connection, whilst the microphone headset 914 may communicate via a Bluetooth link.
[0084] The provision of these interfaces means that the Playstation 3 device is also potentially compatible with other peripheral devices such as digital video recorders (DVRs), set-top boxes, digital cameras, portable media players, Voice over IP telephones, mobile telephones, printers and scanners. [0085] In addition, a legacy memory card reader 916 may be connected to the system unit via a USB port 924, enabling the reading of memory cards 948 of the kind used by the Playstation® or Playstation 2® devices.
[0086] In the present embodiment, the game controller 902 is operable to communicate wirelessly with the system unit 900 via the Bluetooth link. However, the game controller 902 can instead be connected to a USB port, thereby also providing power by which to charge the battery of the game controller 902. In addition to one or more analog joysticks and conventional control buttons, the game controller is sensitive to motion in six degrees of freedom, corresponding to translation and rotation in each axis. Consequently gestures and movements by the user of the game controller may be translated as inputs to a game in addition to or instead of conventional button or joystick commands. Optionally, other wirelessly enabled peripheral devices such as the Playstation Portable device may be used as a controller. In the case of the Playstation Portable device, additional game or control information (for example, control instructions or number of lives) may be provided on the screen of the device. Other alternative or supplementary control devices may also be used, such as a dance mat (not shown), a light gun (not shown), a steering wheel and pedals (not shown) or bespoke controllers, such as a single or several large buttons for a rapid-response quiz game (also not shown).
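As an illustration of how button presses and six-axis motion might be translated into avatar gestures of the kind described earlier (e.g., raising a hand with L1 or R1), the sketch below uses hypothetical button names, a hypothetical tilt reading, and illustrative thresholds; none of these values come from the specification.

    # Hypothetical mapping of controller input to avatar gestures.
    def controller_to_gestures(buttons_pressed, tilt_degrees):
        """Translate button presses and controller tilt into avatar animation commands."""
        gestures = []
        if "L1" in buttons_pressed:
            gestures.append("raise_left_hand")
        if "R1" in buttons_pressed:
            gestures.append("raise_right_hand")
        # Six-axis motion: a sustained tilt of the controller could map to a lean.
        if tilt_degrees > 30.0:
            gestures.append("lean_forward")
        elif tilt_degrees < -30.0:
            gestures.append("lean_back")
        return gestures

    print(controller_to_gestures({"L1"}, 35.0))  # ['raise_left_hand', 'lean_forward']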
[0087] The remote control 904 is also operable to communicate wirelessly with the system unit 900 via a Bluetooth link. The remote control 904 comprises controls suitable for the operation of the Blu-Ray Disk BD-ROM reader 940 and for the navigation of disk content.
[0088] The Blu-Ray Disk BD-ROM reader 940 is operable to read CD-ROMs compatible with the Playstation and PlayStation 2 devices, in addition to conventional pre-recorded and recordable CDs, and so-called Super Audio CDs. The reader 940 is also operable to read DVD-ROMs compatible with the Playstation 2 and PlayStation 3 devices, in addition to conventional pre-recorded and recordable DVDs. The reader 940 is further operable to read BD-ROMs compatible with the Playstation 3 device, as well as conventional pre-recorded and recordable Blu-Ray Disks.
[0089] The system unit 900 is operable to supply audio and video, either generated or decoded by the Playstation 3 device via the Reality Synthesizer graphics unit 930, through audio and video connectors to a display and sound output device 942 such as a monitor or television set having a display 944 and one or more loudspeakers 946. The audio connectors 950 may include conventional analogue and digital outputs whilst the video connectors 952 may variously include component video, S-video, composite video and one or more High Definition Multimedia Interface (HDMI) outputs. Consequently, video output may be in formats such as PAL or NTSC, or in 720p, 1080i or 1080p high definition.
[0090] Audio processing (generation, decoding and so on) is performed by the Cell processor 928. The Playstation 3 device's operating system supports Dolby® 5.1 surround sound, DTS® (Digital Theater Systems) surround sound, and the decoding of 7.1 surround sound from Blu-Ray® disks. [0091] In the present embodiment, the video camera 912 comprises a single charge coupled device (CCD), an LED indicator, and hardware-based real-time data compression and encoding apparatus so that compressed video data may be transmitted in an appropriate format such as an intra-image based MPEG (Motion Picture Experts Group) standard for decoding by the system unit 900. The camera LED indicator is arranged to illuminate in response to appropriate control data from the system unit 900, for example to signify adverse lighting conditions. Embodiments of the video camera 912 may variously connect to the system unit 900 via a USB, Bluetooth or Wi-Fi communication port. Embodiments of the video camera may include one or more associated microphones and also be capable of transmitting audio data. In embodiments of the video camera, the CCD may have a resolution suitable for high-definition video capture. In use, images captured by the video camera may for example be incorporated within a game or interpreted as game control inputs.
[0092] In general, in order for successful data communication to occur with a peripheral device such as a video camera or remote control via one of the communication ports of the system unit 900, an appropriate piece of software such as a device driver should be provided. Device driver technology is well-known and will not be described in detail here, except to say that the skilled person will be aware that a device driver or similar software interface may be required in the embodiment described herein.
[0093] Figure 10 is a schematic of the Cell processor 928 in accordance with one embodiment of the present invention. The Cell processor 928 has an architecture comprising four basic components: external input and output structures comprising a memory controller 1060 and a dual bus interface controller 1070A,B; a main processor referred to as the Power Processing Element 1050; eight co-processors referred to as Synergistic Processing Elements (SPEs) 1010A-H; and a circular data bus connecting the above components referred to as the Element Interconnect Bus 1080. The total floating point performance of the Cell processor is 218 GFLOPS, compared with the 6.2 GFLOPS of the Playstation 2 device's Emotion Engine.
[0094] The Power Processing Element (PPE) 1050 is based upon a two-way simultaneous multithreading Power 970 compliant PowerPC core (PPU) 1055 running with an internal clock of 3.2 GHz. It comprises a 512 kB level 2 (L2) cache and a 32 kB level 1 (L1) cache. The PPE 1050 is capable of eight single-precision operations per clock cycle, translating to 25.6 GFLOPS at 3.2 GHz. The primary role of the PPE 1050 is to act as a controller for the Synergistic Processing Elements 1010A-H, which handle most of the computational workload. In operation the PPE 1050 maintains a job queue, scheduling jobs for the Synergistic Processing Elements 1010A-H and monitoring their progress. Consequently each Synergistic Processing Element 1010A-H runs a kernel whose role is to fetch a job, execute it, and synchronize with the PPE 1050.
[0095] Each Synergistic Processing Element (SPE) 1010A-H comprises a respective Synergistic Processing Unit (SPU) 1020A-H, and a respective Memory Flow Controller (MFC) 1040A-H comprising in turn a respective Dynamic Memory Access Controller (DMAC) 1042A-H, a respective Memory Management Unit (MMU) 1044A-H and a bus interface (not shown). Each SPU 1020A-H is a RISC processor clocked at 3.2 GHz and comprising 256 kB local RAM 1030A-H, expandable in principle to 4 GB. Each SPE gives a theoretical 25.6 GFLOPS of single precision performance. An SPU can operate on 4 single-precision floating-point numbers, 4 32-bit numbers, 8 16-bit integers, or 16 8-bit integers in a single clock cycle. In the same clock cycle it can also perform a memory operation. The SPU 1020A-H does not directly access the system memory XDRAM 926; the 64-bit addresses formed by the SPU 1020A-H are passed to the MFC 1040A-H, which instructs its DMA controller 1042A-H to access memory via the Element Interconnect Bus 1080 and the memory controller 1060.
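As a quick check on the per-unit figures quoted in this and the preceding paragraph, the arithmetic can be reproduced as follows. Attributing each SPU's 25.6 GFLOPS to a fused multiply-add (two floating-point operations per SIMD lane per cycle) is an assumption of this sketch, not a statement from the text.

    # Reproduces the 25.6 GFLOPS figures quoted above.
    clock_ghz = 3.2
    ppe_gflops = 8 * clock_ghz       # eight single-precision operations per cycle -> 25.6
    spu_gflops = 4 * 2 * clock_ghz   # 4 SIMD lanes x 2 ops (assumed fused multiply-add) -> 25.6
    print(ppe_gflops, spu_gflops)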
[0096] The Element Interconnect Bus (EIB) 1080 is a logically circular communication bus internal to the Cell processor 928 which connects the above processor elements, namely the PPE 1050, the memory controller 1060, the dual bus interface 1070A,B and the 8 SPEs 1010A-H, totaling 12 participants. Participants can simultaneously read and write to the bus at a rate of 8 bytes per clock cycle. As noted previously, each SPE 1010A-H comprises a DMAC 1042A-H for scheduling longer read or write sequences. The EIB comprises four channels, two each in clockwise and anti-clockwise directions. Consequently for twelve participants, the longest step-wise data-flow between any two participants is six steps in the appropriate direction. The theoretical peak instantaneous EIB bandwidth for 12 slots is therefore 96 bytes per clock, in the event of full utilization through arbitration between participants. This equates to a theoretical peak bandwidth of 307.2 GB/s (gigabytes per second) at a clock rate of 3.2 GHz.
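The 307.2 GB/s peak follows directly from the figures given above (12 participants, 8 bytes per participant per clock, 3.2 GHz clock); a minimal check:

    # Reproduces the EIB peak-bandwidth figure from the values quoted above.
    participants = 12
    bytes_per_clock_each = 8
    clock_ghz = 3.2
    bytes_per_clock_total = participants * bytes_per_clock_each   # 96 bytes per clock
    peak_gb_per_s = bytes_per_clock_total * clock_ghz             # 307.2 GB/s
    print(bytes_per_clock_total, round(peak_gb_per_s, 1))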
[0097] The memory controller 1060 comprises an XDRAM interface 1062, developed by Rambus Incorporated. The memory controller interfaces with the Rambus XDRAM 926 with a theoretical peak bandwidth of 25.6 GB/s.
[0098] The dual bus interface 1070A,B comprises a Rambus FlexIO® system interface 1072A,B. The interface is organized into 12 channels each being 8 bits wide, with five paths being inbound and seven outbound. This provides a theoretical peak bandwidth of 62.4 GB/s (36.4 GB/s outbound, 26 GB/s inbound) between the Cell processor and the I/O bridge 934 via controller 1072A and the Reality Synthesizer graphics unit 930 via controller 1072B.
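The outbound/inbound split quoted above is consistent with each of the twelve byte-wide channels carrying the same rate; the short check below derives that per-channel rate from the stated figures (the equal-rate assumption is the sketch's, not the text's).

    # Sanity check of the FlexIO figures: 7 outbound and 5 inbound byte-wide channels.
    outbound_gb_s, inbound_gb_s = 36.4, 26.0
    outbound_channels, inbound_channels = 7, 5
    per_channel = outbound_gb_s / outbound_channels            # 5.2 GB/s per channel
    assert abs(inbound_gb_s - per_channel * inbound_channels) < 1e-9
    print(round(per_channel, 1), round(outbound_gb_s + inbound_gb_s, 1))  # 5.2, 62.4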
[0099] Data sent by the Cell processor 928 to the Reality Synthesizer graphics unit 930 will typically comprise display lists, being a sequence of commands to draw vertices, apply textures to polygons, specify lighting conditions, and so on. [00100] Embodiments may include capturing depth data to better identify the real-world user and to direct activity of an avatar or scene. The object can be something the person is holding or can also be the person's hand. In this description, the terms "depth camera" and "three-dimensional camera" refer to any camera that is capable of obtaining distance or depth information as well as two-dimensional pixel information. For example, a depth camera can utilize controlled infrared lighting to obtain distance information.
Another exemplary depth camera can be a stereo camera pair, which triangulates distance information using two standard cameras. Similarly, the term "depth sensing device" refers to any type of device that is capable of obtaining distance information as well as two-dimensional pixel information. [00101] Recent advances in three-dimensional imagery have opened the door for increased possibilities in real-time interactive computer animation. In particular, new "depth cameras" provide the ability to capture and map the third dimension in addition to normal two-dimensional video imagery. With the new depth data, embodiments of the present invention allow the placement of computer-generated objects in various positions within a video scene in real-time, including behind other objects.
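For the stereo-pair case mentioned above, depth is commonly recovered from the disparity between the two images using the standard pinhole-camera relation depth = focal length x baseline / disparity. The numbers in the sketch below are illustrative only and do not describe any particular camera.

    # Standard stereo triangulation; the camera parameters here are illustrative.
    def stereo_depth(focal_length_px, baseline_m, disparity_px):
        """Return the distance (in metres) to a point seen by both cameras."""
        return focal_length_px * baseline_m / disparity_px

    print(stereo_depth(focal_length_px=700.0, baseline_m=0.12, disparity_px=21.0))  # 4.0 m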
[00102] Moreover, embodiments of the present invention provide real-time interactive gaming experiences for users. For example, users can interact with various computer-generated objects in real-time. Furthermore, video scenes can be altered in real-time to enhance the user's game experience. For example, computer generated costumes can be inserted over the user's clothing, and computer generated light sources can be utilized to project virtual shadows within a video scene. Hence, using the embodiments of the present invention and a depth camera, users can experience an interactive game environment within their own living room. Similar to normal cameras, a depth camera captures two-dimensional data for a plurality of pixels that comprise the video image. These values are color values for the pixels, generally red, green, and blue (RGB) values for each pixel. In this manner, objects captured by the camera appear as two-dimensional objects on a monitor.
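One way to place a computer-generated object behind or in front of real objects, as described above, is a per-pixel comparison between the camera's depth map and the virtual object's depth, letting the nearer surface win. The sketch below assumes numpy arrays and synthetic data; it is an illustration of the idea, not the specification's method.

    # Hypothetical per-pixel compositing using a depth map from a depth camera.
    import numpy as np

    def composite(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
        """Overlay a virtual object, respecting occlusion by nearer real objects."""
        nearer = virtual_depth < camera_depth   # True where the virtual pixel is in front
        out = camera_rgb.copy()
        out[nearer] = virtual_rgb[nearer]
        return out

    h, w = 4, 4
    scene = np.zeros((h, w, 3), dtype=np.uint8)          # real scene, black
    scene_depth = np.full((h, w), 2.0)                   # two metres away everywhere
    obj = np.full((h, w, 3), 255, dtype=np.uint8)        # white virtual object
    obj_depth = np.full((h, w), 3.0)
    obj_depth[:2] = 1.0                                  # top half in front, bottom half behind
    print(composite(scene, scene_depth, obj, obj_depth)[:, :, 0])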
[00103] Embodiments of the present invention also contemplate distributed image processing configurations. For example, the invention is not limited to the captured image and display image processing taking place in one or even two locations, such as in the CPU or in the CPU and one other element. For example, the input image processing can just as readily take place in an associated CPU, processor or device that can perform processing; essentially all of the image processing can be distributed throughout the interconnected system. Thus, the present invention is not limited to any specific image processing hardware circuitry and/or software. The embodiments described herein are also not limited to any specific combination of general hardware circuitry and/or software, nor to any particular source for the instructions executed by processing components.
[00104] With the above embodiments in mind, it should be understood that the invention may employ various computer-implemented operations involving data stored in computer systems. These operations include operations requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing.
[00105] The above described invention may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
[00106] The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can be thereafter read by a computer system, including an electromagnetic wave carrier. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
[00107] Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
What is claimed is:


1. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment, comprising:
generating a graphical animated environment;
presenting a viewable object within the graphical animated environment;
moving an avatar within the graphical animated environment, the moving includes directing a field of view of the avatar toward the viewable object;
detecting a response of the avatar when the field of view of the avatar is directed toward the viewable object; and
storing the response;
wherein the response is analyzed to determine whether the response is more positive or more negative.
2. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 1, wherein the viewable object is an advertisement.
3. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 1, wherein the advertisement is animated in a virtual space of the graphical animated environment.
4. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 1, wherein a real-world user is moving the avatar within the graphical animated environment, and further comprising:
detecting audible sound from the real-world user;
analyzing the audible sound to determine if the audible sound relates to one of emotions, laughter, utterances, or a combination thereof; and
defining the analyzed audible sound to signify the response to be more positive or more negative.
5. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 1, wherein the response is triggered by a button of a controller.
6. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 4, wherein selected buttons of the controller trigger one or more of facial expressions, bodily expressions, verbal expressions, body movements, comments, or a combination of two or more thereof.
7. A method for enabling interactive control and monitoring of avatars in a computer generated virtual world environment as recited in claim 6, wherein the response is analyzed and presented to owners of the advertisements and operators of the graphical animated environment.
8. A computer implemented method for executing a network application, the network application defined to render a virtual environment, the virtual environment being depicted by computer graphics, comprising:
generating an animated user;
controlling the animated user in the virtual environment;
presenting advertising objects in the virtual environment;
detecting actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment;
capturing reactions by the animated user when the animated user is viewing the advertising object;
wherein the reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine effectiveness of the advertising object in the virtual environment.
9. A computer implemented method for executing a network application as recited in claim 8, wherein the third party is an advertiser of a product or service.
10. A computer implemented method for executing a network application as recited in claim 8, wherein the third party is an operator of the virtual environment.
11. A computer implemented method for executing a network application as recited in claim 10, wherein the operator of the virtual environment defines advertising cost formulas for the advertising objects.
12. A computer implemented method for executing a network application as recited in claim 11, wherein the cost formulas define rates charged to advertisers based on user reactions that relate to specific advertising objects.
13. A computer implemented method for executing a network application as recited in claim 8, wherein the captured reactions include animated user facial expressions, animated user voice expressions, animated user body movements, or combinations thereof.
14. A computer implemented method for executing a network application as recited in claim 8, wherein the advertisement object is animated in a virtual space of the virtual environment.
15. A computer implemented method for executing a network application as recited in claim 8, wherein the reactions are triggered by a button of a controller.
16. A computer implemented method for executing a network application as recited in claim 15, wherein selected buttons of the controller trigger, on the animated user, one or more of facial expressions, bodily expressions, verbal expressions, body movements, comments, or a combination of two or more thereof.
17. A computer implemented method for executing a network application as recited in claim 16, wherein the reaction is analyzed and presented to owners of the advertisements and operators of the virtual environment.
18. A computer implemented method for executing a network application as recited in claim 8, wherein the reaction is analyzed to determine whether the response is more positive or more negative.
19. A computer implemented method for executing a network application, the network application defined to render a virtual environment, the virtual environment being depicted by computer graphics, comprising:
generating an animated user;
controlling the animated user in the virtual environment;
presenting advertising objects in the virtual environment;
detecting actions by the animated user to determine if the animated user is viewing one of the advertising objects in the virtual environment;
capturing reactions by the animated user when the animated user is viewing the advertising object;
wherein the reactions by the animated user within the virtual environment are those that relate to the advertising object, and are presented to a third party to determine if the reactions are more positive or more negative.
20. A computer implemented method for executing a network application as recited in claim 19, further comprising: determining a vote by the animated user, the vote signifying approval or disapproval of a specific one of the advertising objects.
21. A computer implemented method for executing a network application as recited in claim 19, wherein the third party is an operator of the virtual environment, and the operator of the virtual environment defines advertising cost formulas for the advertising objects.
22. A computer implemented method for interfacing with a computer program, the computer program being configured to at least partially execute over a network, the computer program defining a graphical environment of virtual places and the computer program enabling a real-world user to control an animated avatar in and around the graphical environment of virtual places, comprising:
assigning the real-world user to control the animated avatar;
moving the animated avatar in and around the graphical environment;
detecting real-world reactions by the real-world user in response to moving the animated avatar in and around the graphical environment;
identifying the real-world reactions;
mapping the identified real-world reactions to the animated avatar;
wherein the animated avatar graphically displays the real-world reactions on a display screen that is executing the computer program.
23. A computer implemented method for interfacing with a computer program as recited in claim 22, wherein the identified real-world reactions are analyzed based on content within the virtual spaces that the animated avatar is viewing.
24. A computer implemented method for interfacing with a computer program as recited in claim 22, wherein the content is a product or service advertised within the graphical environment of virtual places.
25. A computer implemented method for interfacing with a computer program as recited in claim 22, wherein an operator of the computer program defines advertising cost formulas for advertising.
26. A computer implemented method for interfacing with a computer program as recited in claim 25, wherein the cost formulas define rates charged to advertisers based on viewing or reactions that relate to specific advertising.
EP08726207A 2007-03-01 2008-02-27 Virtual world user opinion&response monitoring Ceased EP2126708A4 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US89239707P 2007-03-01 2007-03-01
GBGB0703974.6A GB0703974D0 (en) 2007-03-01 2007-03-01 Entertainment device
GB0704225A GB2447094B (en) 2007-03-01 2007-03-05 Entertainment device and method
GB0704235A GB2447095B (en) 2007-03-01 2007-03-05 Entertainment device and method
GB0704227A GB2447020A (en) 2007-03-01 2007-03-05 Transmitting game data from an entertainment device and rendering that data in a virtual environment of a second entertainment device
GB0704246A GB2447096B (en) 2007-03-01 2007-03-05 Entertainment device and method
US11/789,326 US20080215975A1 (en) 2007-03-01 2007-04-23 Virtual world user opinion & response monitoring
PCT/US2008/002630 WO2008108965A1 (en) 2007-03-01 2008-02-27 Virtual world user opinion & response monitoring

Publications (2)

Publication Number Publication Date
EP2126708A1 true EP2126708A1 (en) 2009-12-02
EP2126708A4 EP2126708A4 (en) 2010-11-17

Family

ID=39738577

Family Applications (4)

Application Number Title Priority Date Filing Date
EP08730776A Ceased EP2132650A4 (en) 2007-03-01 2008-02-26 System and method for communicating with a virtual world
EP08726207A Ceased EP2126708A4 (en) 2007-03-01 2008-02-27 Virtual world user opinion&response monitoring
EP08726219A Withdrawn EP2118757A4 (en) 2007-03-01 2008-02-27 Virtual world avatar control, interactivity and communication interactive messaging
EP08726220A Withdrawn EP2118840A4 (en) 2007-03-01 2008-02-27 Interactive user controlled avatar animations

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP08730776A Ceased EP2132650A4 (en) 2007-03-01 2008-02-26 System and method for communicating with a virtual world

Family Applications After (2)

Application Number Title Priority Date Filing Date
EP08726219A Withdrawn EP2118757A4 (en) 2007-03-01 2008-02-27 Virtual world avatar control, interactivity and communication interactive messaging
EP08726220A Withdrawn EP2118840A4 (en) 2007-03-01 2008-02-27 Interactive user controlled avatar animations

Country Status (3)

Country Link
EP (4) EP2132650A4 (en)
JP (5) JP2010533006A (en)
WO (1) WO2008108965A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489795B2 (en) 2007-04-23 2019-11-26 The Nielsen Company (Us), Llc Determining relative effectiveness of media content items

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8397168B2 (en) 2008-04-05 2013-03-12 Social Communications Company Interfacing with a spatial virtual communication environment
US8407605B2 (en) 2009-04-03 2013-03-26 Social Communications Company Application sharing
US7769806B2 (en) 2007-10-24 2010-08-03 Social Communications Company Automated real-time data stream switching in a shared virtual area communication environment
KR101527993B1 (en) 2008-04-05 2015-06-10 소우셜 커뮤니케이션즈 컴퍼니 Shared virtual area communication environment based apparatus and methods
US20100093439A1 (en) * 2008-10-15 2010-04-15 Nc Interactive, Inc. Interactive network game and methods thereof
US20100099495A1 (en) * 2008-10-16 2010-04-22 Nc Interactive, Inc. Interactive network game and methods thereof
US9853922B2 (en) 2012-02-24 2017-12-26 Sococo, Inc. Virtual area communications
JP5527721B2 (en) 2009-01-28 2014-06-25 任天堂株式会社 Program and information processing apparatus
JP5690473B2 (en) 2009-01-28 2015-03-25 任天堂株式会社 Program and information processing apparatus
JP5229484B2 (en) 2009-01-28 2013-07-03 任天堂株式会社 Information processing system, program, and information processing apparatus
JP5813912B2 (en) 2009-01-28 2015-11-17 任天堂株式会社 Program, information processing apparatus, and information processing system
US9542010B2 (en) * 2009-09-15 2017-01-10 Palo Alto Research Center Incorporated System for interacting with objects in a virtual environment
CN102576286B (en) * 2009-09-30 2015-09-30 乐天株式会社 Object displacement method in Web page
US20120192088A1 (en) * 2011-01-20 2012-07-26 Avaya Inc. Method and system for physical mapping in a virtual world
CN106964150B (en) * 2011-02-11 2021-03-02 漳州市爵晟电子科技有限公司 Action positioning point control system and sleeve type positioning point control equipment thereof
US20120277001A1 (en) * 2011-04-28 2012-11-01 Microsoft Corporation Manual and Camera-based Game Control
JP2013003778A (en) * 2011-06-15 2013-01-07 Forum8 Co Ltd Three-dimensional space information processing system, three-dimensional space information processing terminal, three-dimensional space information processing server, three-dimensional space information processing terminal program, three-dimensional space information processing server program, and three-dimensional space information processing method
WO2013181026A1 (en) 2012-06-02 2013-12-05 Social Communications Company Interfacing with a spatial virtual communications environment
CN104516618B (en) * 2013-09-27 2020-01-14 中兴通讯股份有限公司 Interface function analysis display method and device
JP6091407B2 (en) * 2013-12-18 2017-03-08 三菱電機株式会社 Gesture registration device
CN106464707A (en) * 2014-04-25 2017-02-22 诺基亚技术有限公司 Interaction between virtual reality entities and real entities
EP2996017B1 (en) * 2014-09-11 2022-05-11 Nokia Technologies Oy Method, apparatus and computer program for displaying an image of a physical keyboard on a head mountable display
WO2016068581A1 (en) 2014-10-31 2016-05-06 Samsung Electronics Co., Ltd. Device and method of managing user information based on image
US9786125B2 (en) * 2015-06-17 2017-10-10 Facebook, Inc. Determining appearances of objects in a virtual world based on sponsorship of object appearances
US10861056B2 (en) 2015-06-17 2020-12-08 Facebook, Inc. Placing locations in a virtual world
US10339592B2 (en) 2015-06-17 2019-07-02 Facebook, Inc. Configuring a virtual store based on information associated with a user by an online system
US10559305B2 (en) 2016-05-06 2020-02-11 Sony Corporation Information processing system, and information processing method
JP6263252B1 (en) * 2016-12-06 2018-01-17 株式会社コロプラ Information processing method, apparatus, and program for causing computer to execute information processing method
EP3571627A2 (en) 2017-01-19 2019-11-27 Mindmaze Holding S.A. Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location including for at least one of a virtual and augmented reality system
US10943100B2 (en) 2017-01-19 2021-03-09 Mindmaze Holding Sa Systems, methods, devices and apparatuses for detecting facial expression
US10515474B2 (en) 2017-01-19 2019-12-24 Mindmaze Holding Sa System, method and apparatus for detecting facial expression in a virtual reality system
JP7070435B2 (en) 2017-01-26 2022-05-18 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
JP6821461B2 (en) * 2017-02-08 2021-01-27 株式会社コロプラ A method executed by a computer to communicate via virtual space, a program that causes the computer to execute the method, and an information control device.
EP3588444A4 (en) 2017-02-24 2020-02-12 Sony Corporation Information processing apparatus, information processing method, and program
JP6651479B2 (en) * 2017-03-16 2020-02-19 株式会社コロプラ Information processing method and apparatus, and program for causing computer to execute the information processing method
US11328533B1 (en) 2018-01-09 2022-05-10 Mindmaze Holding Sa System, method and apparatus for detecting facial expression for motion capture
JP7308573B2 (en) * 2018-05-24 2023-07-14 株式会社ユピテル System and program etc.
JP7302956B2 (en) * 2018-09-19 2023-07-04 株式会社バンダイナムコエンターテインメント computer system, game system and program
JP2019130295A (en) * 2018-12-28 2019-08-08 ノキア テクノロジーズ オサケユイチア Interaction between virtual reality entities and real entities
JP7323315B2 (en) 2019-03-27 2023-08-08 株式会社コーエーテクモゲームス Information processing device, information processing method and program
JP7190051B2 (en) * 2019-08-20 2022-12-14 日本たばこ産業株式会社 COMMUNICATION SUPPORT METHOD, PROGRAM AND COMMUNICATION SERVER
WO2021033261A1 (en) * 2019-08-20 2021-02-25 日本たばこ産業株式会社 Communication assistance method, program, and communication server
KR102212511B1 (en) * 2019-10-23 2021-02-04 (주)스코넥엔터테인먼트 Virtual Reality Control System
EP3846008A1 (en) 2019-12-30 2021-07-07 TMRW Foundation IP SARL Method and system for enabling enhanced user-to-user communication in digital realities
JP2020146469A (en) * 2020-04-20 2020-09-17 株式会社トプコン Ophthalmologic examination system and ophthalmologic examination device
JP6932224B1 (en) * 2020-06-01 2021-09-08 株式会社電通 Advertising display system
JP7254112B2 (en) * 2021-03-19 2023-04-07 本田技研工業株式会社 Virtual experience providing device, virtual experience providing method, and program
WO2023281755A1 (en) * 2021-07-09 2023-01-12 シャープNecディスプレイソリューションズ株式会社 Display control device, display control method, and program
WO2023068067A1 (en) * 2021-10-18 2023-04-27 ソニーグループ株式会社 Information processing device, information processing method, and program
WO2023149255A1 (en) * 2022-02-02 2023-08-10 株式会社Nttドコモ Display control device
KR20230173481A (en) * 2022-06-17 2023-12-27 주식회사 메타캠프 Apparatus for Metaverse Service by Using Multi-Channel Structure and Channel Syncronizaton and Driving Method Thereof
WO2024004609A1 (en) * 2022-06-28 2024-01-04 ソニーグループ株式会社 Information processing device, information processing method, and recording medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0674927A1 (en) * 1994-03-31 1995-10-04 AT&T Corp. Electronic game utilizing bio-signals
US6183366B1 (en) * 1996-01-19 2001-02-06 Sheldon Goldberg Network gaming system
US20020171647A1 (en) * 2001-05-15 2002-11-21 Sterchi Henry L. System and method for controlling animation by tagging objects within a game environment
EP1318505A1 (en) * 2000-09-13 2003-06-11 A.G.I. Inc. Emotion recognizing method, sensibility creating method, device, and software
EP1388357A2 (en) * 2002-08-07 2004-02-11 Sony Computer Entertainment America Inc. Group behavioural modification using external stimuli
US20050149467A1 (en) * 2002-12-11 2005-07-07 Sony Corporation Information processing device and method, program, and recording medium

Family Cites Families (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5734373A (en) * 1993-07-16 1998-03-31 Immersion Human Interface Corporation Method and apparatus for controlling force feedback interface systems utilizing a host computer
GB9505916D0 (en) * 1995-03-23 1995-05-10 Norton John M Controller
JP3091135B2 (en) * 1995-05-26 2000-09-25 株式会社バンダイ Game equipment
JP3274603B2 (en) * 1996-04-18 2002-04-15 エヌイーシーソフト株式会社 Voice aggregation system and voice aggregation method
JP3975511B2 (en) * 1997-07-25 2007-09-12 富士通株式会社 Personal communication distributed control system
JP3757584B2 (en) * 1997-11-20 2006-03-22 株式会社富士通ゼネラル Advertising effect confirmation system
JP3276068B2 (en) * 1997-11-28 2002-04-22 インターナショナル・ビジネス・マシーンズ・コーポレーション Object selection method and system
US6195104B1 (en) * 1997-12-23 2001-02-27 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
JP2000187435A (en) * 1998-12-24 2000-07-04 Sony Corp Information processing device, portable apparatus, electronic pet device, recording medium with information processing procedure recorded thereon, and information processing method
JP2000311251A (en) * 1999-02-26 2000-11-07 Toshiba Corp Device and method for generating animation and storage medium
EP1250698A4 (en) * 1999-04-20 2002-10-23 John Warren Stringer Human gestural input device with motion and pressure
JP4034002B2 (en) * 1999-04-22 2008-01-16 三菱電機株式会社 Distributed virtual space information management transmission method
AU5012300A (en) * 1999-05-14 2000-12-05 Graphic Gems Method and apparatus for registering lots in a shared virtual world
AU5012600A (en) * 1999-05-14 2000-12-05 Graphic Gems Method and apparatus for a multi-owner, three-dimensional virtual world
JP2000325653A (en) * 1999-05-19 2000-11-28 Enix Corp Portable videogame device and storage medium with program stored therein
JP2001154966A (en) * 1999-11-29 2001-06-08 Sony Corp System and method for supporting virtual conversation being participation possible by users in shared virtual space constructed and provided on computer network and medium storing program
JP2001153663A (en) * 1999-11-29 2001-06-08 Canon Inc Discrimination device for moving direction of object, and photographic device, navigation system, suspension system, game system and remote controller system provide with the device
JP3623415B2 (en) * 1999-12-02 2005-02-23 日本電信電話株式会社 Avatar display device, avatar display method and storage medium in virtual space communication system
JP2001236290A (en) * 2000-02-22 2001-08-31 Toshinao Komuro Communication system using avatar
KR100366384B1 (en) * 2000-02-26 2002-12-31 (주) 고미드 Information search system based on communication of users
JP2001325501A (en) * 2000-03-10 2001-11-22 Heart Gift:Kk On-line gift method
JP3458090B2 (en) * 2000-03-15 2003-10-20 コナミ株式会社 GAME SYSTEM HAVING MESSAGE EXCHANGE FUNCTION, GAME DEVICE USED FOR THE GAME SYSTEM, MESSAGE EXCHANGE SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
JP2001321568A (en) * 2000-05-18 2001-11-20 Casio Comput Co Ltd Device and method of game and recording medium
JP2002136762A (en) * 2000-11-02 2002-05-14 Taito Corp Adventure game using latent video
JP3641423B2 (en) * 2000-11-17 2005-04-20 Necインフロンティア株式会社 Advertisement information system
AU2002219857A1 (en) * 2000-11-27 2002-06-03 Butterfly.Net, Inc. System and method for synthesizing environments to facilitate distributed, context-sensitive, multi-user interactive applications
EP1216733A3 (en) * 2000-12-20 2004-09-08 Aruze Co., Ltd. Server providing competitive game service, program storage medium for use in the server, and method of providing competitive game service using the server
JP2002197376A (en) * 2000-12-27 2002-07-12 Fujitsu Ltd Method and device for providing virtual world customerized according to user
JP4613295B2 (en) * 2001-02-16 2011-01-12 株式会社アートディンク Virtual reality playback device
US6895305B2 (en) * 2001-02-27 2005-05-17 Anthrotronix, Inc. Robotic apparatus and wireless communication system
JP4068542B2 (en) * 2001-05-18 2008-03-26 株式会社ソニー・コンピュータエンタテインメント Entertainment system, communication program, computer-readable recording medium storing communication program, and communication method
JP3425562B2 (en) * 2001-07-12 2003-07-14 コナミ株式会社 Character operation program, character operation method, and video game apparatus
JP3732168B2 (en) * 2001-12-18 2006-01-05 株式会社ソニー・コンピュータエンタテインメント Display device, display system and display method for objects in virtual world, and method for setting land price and advertising fee in virtual world where they can be used
JP2003210834A (en) * 2002-01-17 2003-07-29 Namco Ltd Control information, information storing medium, and game device
JP2003259331A (en) * 2002-03-06 2003-09-12 Nippon Telegraph & Telephone West Corp Three-dimensional contents distribution apparatus, three-dimensional contents distribution program, program recording medium, and three-dimensional contents distribution method
JP2003324522A (en) * 2002-05-02 2003-11-14 Nippon Telegr & Teleph Corp <Ntt> Ip/pstn integrated control apparatus, communication method, program, and recording medium
JP2004021606A (en) * 2002-06-17 2004-01-22 Nec Corp Internet service providing system using virtual space providing server
JP2004046311A (en) * 2002-07-09 2004-02-12 Nippon Telegr & Teleph Corp <Ntt> Method and system for gesture input in three-dimensional virtual space
WO2004042545A1 (en) * 2002-11-07 2004-05-21 Personics A/S Adaptive motion detection interface and motion detector
JP3952396B2 (en) * 2002-11-20 2007-08-01 任天堂株式会社 GAME DEVICE AND INFORMATION PROCESSING DEVICE
JP3961419B2 (en) * 2002-12-27 2007-08-22 株式会社バンダイナムコゲームス GAME DEVICE, GAME CONTROL PROGRAM, AND RECORDING MEDIUM CONTAINING THE PROGRAM
GB0306875D0 (en) * 2003-03-25 2003-04-30 British Telecomm Apparatus and method for generating behavior in an object
JP4442117B2 (en) * 2003-05-27 2010-03-31 ソニー株式会社 Information registration method, information registration apparatus, and information registration program
US7725419B2 (en) * 2003-09-05 2010-05-25 Samsung Electronics Co., Ltd Proactive user interface including emotional agent
JP2005100053A (en) * 2003-09-24 2005-04-14 Nomura Research Institute Ltd Method, program and device for sending and receiving avatar information
JP2005216004A (en) * 2004-01-29 2005-08-11 Tama Tlo Kk Program and communication method
JP4559092B2 (en) * 2004-01-30 2010-10-06 株式会社エヌ・ティ・ティ・ドコモ Mobile communication terminal and program
US20060013254A1 (en) * 2004-06-07 2006-01-19 Oded Shmueli System and method for routing communication through various communication channel types
JP2006034436A (en) * 2004-07-23 2006-02-09 Smk Corp Virtual game system using exercise apparatus
CA2582548A1 (en) * 2004-10-08 2006-04-20 Sonus Networks, Inc. Common telephony services to multiple devices associated with multiple networks
US20090005167A1 (en) * 2004-11-29 2009-01-01 Juha Arrasvuori Mobile Gaming with External Devices in Single and Multiplayer Games
JP2006186893A (en) * 2004-12-28 2006-07-13 Matsushita Electric Ind Co Ltd Voice conversation control apparatus
JP2006185252A (en) * 2004-12-28 2006-07-13 Univ Of Electro-Communications Interface device
JP2006211005A (en) * 2005-01-25 2006-08-10 Takashi Uchiyama Television telephone advertising system
WO2006080080A1 (en) * 2005-01-28 2006-08-03 Fujitsu Limited Telephone management system and telephone management method
JP4322833B2 (en) * 2005-03-16 2009-09-02 株式会社東芝 Wireless communication system
DE602006018897D1 (en) * 2005-05-05 2011-01-27 Sony Computer Entertainment Inc Video game control via joystick
US20060252538A1 (en) * 2005-05-05 2006-11-09 Electronic Arts Inc. Analog stick input replacement for lengthy button push sequences and intuitive input for effecting character actions
JP2006004421A (en) * 2005-06-03 2006-01-05 Sony Corp Data processor
US20070002835A1 (en) * 2005-07-01 2007-01-04 Microsoft Corporation Edge-based communication

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0674927A1 (en) * 1994-03-31 1995-10-04 AT&T Corp. Electronic game utilizing bio-signals
US6183366B1 (en) * 1996-01-19 2001-02-06 Sheldon Goldberg Network gaming system
US20050148377A1 (en) * 1996-01-19 2005-07-07 Goldberg Sheldon F. Network gaming system
EP1318505A1 (en) * 2000-09-13 2003-06-11 A.G.I. Inc. Emotion recognizing method, sensibility creating method, device, and software
US20020171647A1 (en) * 2001-05-15 2002-11-21 Sterchi Henry L. System and method for controlling animation by tagging objects within a game environment
EP1388357A2 (en) * 2002-08-07 2004-02-11 Sony Computer Entertainment America Inc. Group behavioural modification using external stimuli
US20050149467A1 (en) * 2002-12-11 2005-07-07 Sony Corporation Information processing device and method, program, and recording medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2008108965A1 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10489795B2 (en) 2007-04-23 2019-11-26 The Nielsen Company (Us), Llc Determining relative effectiveness of media content items
US11222344B2 (en) 2007-04-23 2022-01-11 The Nielsen Company (Us), Llc Determining relative effectiveness of media content items

Also Published As

Publication number Publication date
JP2010535363A (en) 2010-11-18
EP2132650A2 (en) 2009-12-16
EP2126708A4 (en) 2010-11-17
JP2010535362A (en) 2010-11-18
JP2010533006A (en) 2010-10-21
EP2132650A4 (en) 2010-10-27
EP2118840A1 (en) 2009-11-18
EP2118757A4 (en) 2010-11-03
WO2008108965A1 (en) 2008-09-12
JP5756198B2 (en) 2015-07-29
JP2010535364A (en) 2010-11-18
JP2014149836A (en) 2014-08-21
EP2118840A4 (en) 2010-11-10
EP2118757A1 (en) 2009-11-18

Similar Documents

Publication Publication Date Title
US20080215975A1 (en) Virtual world user opinion & response monitoring
WO2008108965A1 (en) Virtual world user opinion & response monitoring
US11442532B2 (en) Control of personal space content presented via head mounted display
US10636217B2 (en) Integration of tracked facial features for VR users in virtual reality environments
US8766983B2 (en) Methods and systems for processing an interchange of real time effects during video communication
US11161236B2 (en) Robot as personal trainer
JP6679747B2 (en) Watching virtual reality environments associated with virtual reality (VR) user interactivity
US10430018B2 (en) Systems and methods for providing user tagging of content within a virtual scene
WO2008106196A1 (en) Virtual world avatar control, interactivity and communication interactive messaging
US20100060662A1 (en) Visual identifiers for virtual world avatars
CN115068932A (en) Audience management at view locations in virtual reality environments
US20140179427A1 (en) Generation of a multi-part mini-game for cloud-gaming based on recorded gameplay
US20090158220A1 (en) Dynamic three-dimensional object mapping for user-defined control device
US10841247B2 (en) Social media connection for a robot
WO2008106197A1 (en) Interactive user controlled avatar animations
WO2018005199A1 (en) Systems and methods for providing user tagging of content within a virtual scene

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090917

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20101014

RIC1 Information provided on ipc code assigned before grant

Ipc: A63F 13/10 20060101ALI20101008BHEP

Ipc: A63F 13/12 20060101ALI20101008BHEP

Ipc: A63F 13/00 20060101AFI20101008BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC

Owner name: SONY COMPUTER ENTERTAINMENT EUROPE LIMITED

17Q First examination report despatched

Effective date: 20140624

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC

Owner name: SONY COMPUTER ENTERTAINMENT EUROPE LIMITED

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SONY COMPUTER ENTERTAINMENT AMERICA LLC

Owner name: SONY INTERACTIVE ENTERTAINMENT EUROPE LIMITED

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20191022