US20150195628A1 - System suitable for efficient communication of media stream and a method in association therewith


Info

Publication number
US20150195628A1
Authority
US
United States
Prior art keywords
media stream
generated
data
description
audio
Prior art date
Legal status
Abandoned
Application number
US14/294,898
Other languages
English (en)
Inventor
Teck Chee LEE
Darran Nathan
Shin Yee CHUNG
Yuan Yeow LEOW
Current Assignee
Creative Technology Ltd
Original Assignee
Creative Technology Ltd
Application filed by Creative Technology Ltd filed Critical Creative Technology Ltd
Assigned to CREATIVE TECHNOLOGY LTD (assignment of assignors interest). Assignors: CHUNG, Shin Yee; LEE, Teck Chee; LEOW, Yuan Yeow; NATHAN, Darran
Priority to TW103143806A (published as TW201531324A)
Priority to PCT/SG2014/000616 (published as WO2015102532A1)
Publication of US20150195628A1
Priority to US16/432,191 (granted as US11410358B2)


Classifications

    • G06T 11/60: Editing figures and text; combining figures or text (2D image generation)
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/31: Communication aspects specific to video games, e.g. between several handheld game devices at close range
    • A63F 13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F 13/52: Controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/54: Controlling the output signals based on the game progress, involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game
    • A63F 13/60: Generating or modifying game content before or while executing the game program, e.g. authoring tools adapted for game development
    • A63F 2300/209: Game platform details characterized by a low-level software layer relating to hardware management, e.g. operating system, application programming interface
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • H04L 65/65: Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H04L 65/765: Media network packet handling at an intermediate node
    • H04L 67/56: Provisioning of proxy services
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/26603: Automatically generating descriptors from content using content analysis techniques
    • H04N 21/2662: Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate based on client capabilities
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip with an incoming video stream
    • H04N 21/4621: Controlling the complexity of the content stream or additional data for the client, e.g. lowering the resolution or bit-rate for a mobile client
    • H04N 21/4781: End-user applications: games
    • H04N 21/854: Content authoring

Definitions

  • The present disclosure generally relates to a system suitable for efficiently communicating a generated media stream, and to a method in association with the system.
  • OpenGL® refers to the Open Graphics Library.
  • Rendered audio, 2D graphics and/or 3D graphics can be communicated from the electronic device (i.e., a source) to one or more recipients (i.e., receivers).
  • A common example is the sharing of a video (which includes audio, 2D graphics and/or 3D graphics) between friends via a common video sharing platform on the internet.
  • Referring to FIG. 1, a typical communication system 100 in association with communication of information is shown.
  • Information such as a video can be transmitted from a source 110 to one or more recipients 120a, 120b via an internet network 130.
  • The video itself is uploaded onto the internet network 130 and can be accessed (e.g., downloaded) by a recipient 120a/120b via the internet network 130.
  • Conventional information sharing techniques (e.g., uploading a video) for sharing a video generated at a source 110 with one or more recipients 120a/120b require the source 110 to communicate the video itself to the one or more recipients 120a/120b.
  • Communication of the video would require substantial resources (e.g., communication bandwidth) since a video typically has a large data size.
  • Communication speed (e.g., upload and/or download speed) may consequently be affected.
  • Moreover, enjoyment of the video by users (i.e., the recipient(s)) may be limited since the manner in which a recipient can adjust the video per user preference is limited.
  • For example, a recipient may be able to change the view of the video from partial screen view to full screen view.
  • However, pixellation may occur if the video generated at the source 110 is not meant (e.g., due to resolution) for full screen view at the receiving end (i.e., recipient(s) 120a/120b).
  • Appreciably, conventional information sharing techniques do not facilitate communication of information (e.g., video) in an efficient manner and/or user interaction with communicated information (e.g., video) in a user friendly manner.
  • In view of the above, a method for replicating a media stream generated by a transmitter device can include communicating world context generated at the transmitter device and processing the received world context.
  • The world context can be communicated from the transmitter device. Additionally, the generated world context can be associated with a description generated based on an application being run at the transmitter device.
  • The media stream can be generated based on the world context in a manner such that the description of the generated media stream can be associated with the description generated based on the application being run.
  • The received world context can be processed in a manner so as to replicate the media stream generated at the transmitter device.
  • The replicated media stream can be associated with a description, and the description of the replicated media stream can correspond to the description of the media stream at the transmitter device.
  • The received world context can be further processed in a manner so as to change the description associable with the replicated media stream.
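  • The method above can be pictured as follows. This is a hypothetical sketch only, assuming a JSON encoding and made-up command and asset names; the disclosure does not prescribe any particular format:

```python
import json

# Hypothetical sketch: the transmitter sends compact "world context"
# commands rather than rendered frames; the receiver replays them
# against its own local asset library to replicate the media stream.

def encode_world_context(events):
    """Serialize application events (the world context) for transmission."""
    return json.dumps(events).encode("utf-8")

def replicate(payload, library):
    """Replay received commands to rebuild the media stream description."""
    description = []
    for event in json.loads(payload.decode("utf-8")):
        asset = library[event["asset"]]            # select from the local library
        description.append((event["op"], asset, event.get("params", {})))
    return description

# Transmitter side: a scene is a handful of commands, not pixels.
events = [
    {"op": "draw", "asset": "alley", "params": {"x": 0, "y": 0}},
    {"op": "play", "asset": "strike", "params": {"volume": 0.8}},
]
payload = encode_world_context(events)

# Receiver side: the same description is reconstructed locally.
library = {"alley": "alley.mesh", "strike": "strike.mp3"}
replicated = replicate(payload, library)
```

  • Note that only `payload` crosses the network; the assets themselves are resolved from the receiver's own library.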
  • FIG. 1 shows a typical communication system in association with communication of information.
  • FIG. 2 shows a system which includes a transmitter device and a receiver device, in accordance with an embodiment of the disclosure.
  • FIG. 3 shows, in further detail, the transmitter device of FIG. 2, in accordance with an embodiment of the disclosure.
  • FIG. 4 shows a process flow diagram in relation to a method in association with the system of FIG. 2, in accordance with an embodiment of the disclosure.
  • FIG. 5 shows a variation of the system of FIG. 2, in accordance with an embodiment of the disclosure.
  • FIG. 2 shows a system 200 in accordance with an embodiment of the disclosure.
  • the system 200 can include a transmitter device 200 a and a receiver device 200 b.
  • the transmitter device 200 a can be coupled to the receiver device 200 b.
  • the system 200 can be a computer, and the transmitter and receiver devices 200 a / 200 b can be components of the computer.
  • each of the transmitter device 200 a and the receiver device 200 b can be a computer in the system 200 .
  • each of the transmitter device 200 a and the receiver device 200 b can be suitable for one or both of audio processing and graphics processing. More specifically, each of the transmitter device 200 a and the receiver device 200 b can be suitable for processing and/or generating media stream such as video.
  • the transmitter device 200 a and the receiver device 200 b can be configured to signal communicate with each other.
  • Signal communication between the transmitter device 200 a and the receiver device 200 b can, for example, be based on one or both of wired communication and wireless communication.
  • the transmitter device 200 a can, in one embodiment, include an input module 201 , an applications module 202 , a buffer module 203 and a processor module 204 .
  • the transmitter device 200 a can, in another embodiment, further include a transceiver module 206 , a driver module 208 and an output module 210 .
  • the receiver device 200 b can, in one embodiment, include a transceiver portion 212 , a processor portion 214 , a driver portion 216 and an output portion 218 .
  • the receiver device 200 b can, in another embodiment, further include an input portion 220 .
  • the input module 201 can be coupled to the applications module 202 and the applications module 202 can be coupled to the buffer module 203 .
  • the processor module 204 can be coupled to the buffer module 203 .
  • the processor module 204 can be coupled to the driver module 208 .
  • the driver module 208 can be coupled to the output module 210 .
  • the transceiver module 206 can be coupled to one or both of the buffer module 203 and the processor module 204 .
  • the input module 201 can be coupled to the processor module 204 .
  • the transceiver portion 212 can be coupled to the processor portion 214 , the processor portion 214 can be coupled to the driver portion 216 and the driver portion 216 can be coupled to the output portion 218 . Additionally, the input portion 220 can be coupled to the processor portion 214 .
  • Input signals can be generated by a user using the input module 201 .
  • the input signals can be communicated to the applications module 202 which can produce application signals based on the input signals.
  • input signals can be communicated to the processor module 204 to generate one or more assets.
  • the application signals can subsequently be processed in a manner so as to produce media stream which can be one or both of audibly and visually perceived using the output module 210 as will be discussed in further detail hereinafter.
  • the applications module 202 can be configured to generate application signals.
  • the application signals can be communicated from the applications module 202 to the buffer module 203 .
  • the buffer module 203 can be configured to capture/store the application signals and further communicate the application signals to one or both of the transceiver module 206 and the processor module 204 . More specifically, the buffer module 203 can be configured to capture/store the application signals and pass along (i.e., further communicate) the application signals without substantially modifying/altering the application signals.
  • the processor module 204 can be configured to process the application signals in a manner so as to produce control signals.
  • the processor module 204 can be further configured to communicate the control signals to the driver module 208 .
  • the processor module 204 can optionally be configured to process the input signals to generate assets.
  • the processor module 204 can optionally be further configured to communicate the generated assets to one or both of the transceiver module 206 and the driver module 208 . This will be discussed later in further detail with reference to FIG. 3 .
  • the transceiver module 206 can be configured to further communicate the application signals to the transceiver portion 212 as will be discussed later in further detail. As an option, the transceiver module 206 can be configured to further communicate the generated assets to the transceiver portion 212 as will be discussed later in further detail.
  • the driver module 208 can be configured to receive and process the control signals to produce driver signals as will be discussed later in further detail with reference to FIG. 3 .
  • the driver module 208 can be further configured to communicate the driver signals to the output module 210 .
  • the output module 210 can be configured to receive and process the driver signals. Based on the driver signals, the output module 210 can be configured to produce output signals which can be one or both of audibly perceived and visually perceived.
  • the output signals can correspond to a media stream such as video.
  • output signals can correspond to a media stream (e.g., video) which can be associated with one or both of graphics based signals and audio based signals which can, correspondingly, be one or both of visually and audibly perceived.
  • As mentioned earlier, the transmitter device 200a and the receiver device 200b can be configured to signal communicate with each other. As further mentioned earlier, the transceiver module 206 can be configured to further communicate the application signals and/or generated asset(s) to the transceiver portion 212.
  • the transceiver module 206 can be coupled (e.g., one or both of wired coupling and wireless coupling) to the transceiver portion 212 .
  • the transmitter device 200 a and the receiver device 200 b can be configured to signal communicate with each other via the transceiver module 206 and the transceiver portion 212 .
  • the transceiver portion 212 can be configured to further communicate one or both of the received application signals and received asset(s) to the processor portion 214 for further processing.
  • the processor portion 214 can be configured to process the received application signals and/or received asset(s) in a manner so as to produce control signals.
  • the processor portion 214 can be further configured to communicate the control signals to the driver portion 216 .
  • the driver portion 216 can be configured to receive and process the control signals to produce driver signals.
  • the driver portion 216 can be configured to communicate the driver signals to the output portion 218 .
  • the output portion 218 can be configured to receive and process the driver signals. Based on the driver signals, the output portion 218 can be configured to produce output signals which can be one or both of audibly perceived and visually perceived.
  • the output signals can correspond to a media stream such as video.
  • In this manner, the media stream generated at the transmitter device 200a side can be substantially replicated at the receiver device 200b side.
  • That is, the media stream generated at the receiver device 200b can correspond to a replicated version (i.e., a replicated media stream) of the media stream generated at the transmitter device 200a.
  • Replication (i.e., at the receiver device 200 b ) of the generated media stream (i.e., at the transmitter device 200 a ) can relate to re-rendering, at the receiver device 200 b, of the aforementioned graphics based signals and/or audio based signals associated with the media stream generated at the transmitter device 200 a.
  • Appreciably, since the media stream generated at the transmitter device 200a is not communicated to the receiver device 200b per se, there is no need to dedicate substantial resources (e.g., communication bandwidth) for the purpose of sharing the generated media stream.
  • Instead, the media stream generated at the transmitter device 200a can simply be replicated at the receiver device 200b based on the received application signals.
  • In this regard, the application signals/received application signals can correspond to commands for generating/replicating the media stream. It is appreciable that commands for generating a media stream are substantially smaller in terms of data size compared to the generated media stream itself.
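  • As a back-of-envelope illustration of this size difference (hypothetical numbers, not figures from the disclosure), compare one uncompressed 1080p frame against a command that requests the same scene be rendered:

```python
# Hypothetical comparison: raw pixel data for a single uncompressed
# 1080p RGB frame versus a compact rendering command for that scene.
frame_bytes = 1920 * 1080 * 3                      # ~6.2 MB per raw frame
command = b'{"op": "draw", "asset": "scene_42"}'   # "scene_42" is a made-up asset id
ratio = frame_bytes // len(command)                # commands are orders of magnitude smaller
```

  • Even before considering that a video comprises many frames per second, a single command is several orders of magnitude smaller than the pixels it stands in for.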
  • At the receiver device 200b, the input portion 220 can be configured to generate input signals (based, for example, on user operation) and communicate the input signals to the processor portion 214.
  • the processor portion 214 can be configured to process the input signals and the received application signals (and, optionally, received asset(s)) to produce control signals.
  • In this manner, the replicated media stream can be modified/changed at the receiver device 200b per user preference. Modification/change can be in relation to one or both of visual and audio aspects. This will be discussed in further detail with reference to FIG. 3.
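  • Because the receiver holds commands rather than baked output, receiver-side input can adjust the replicated stream's description before re-rendering. A minimal sketch, assuming hypothetical parameter names:

```python
# Hypothetical sketch: receiver-side preferences override parameters of
# the replicated description before it is re-rendered locally.
def apply_preferences(description, prefs):
    adjusted = []
    for op, asset, params in description:
        params = {**params, **prefs.get(op, {})}   # per-operation overrides
        adjusted.append((op, asset, params))
    return adjusted

replicated = [("draw", "alley.mesh", {"width": 1280}),
              ("play", "strike.mp3", {"volume": 0.5})]
# e.g., re-render graphics at the receiver's native resolution,
# avoiding the pixellation of scaling a fixed-resolution video
adjusted = apply_preferences(replicated, {"draw": {"width": 1920}})
```

  • This contrasts with conventional sharing, where a recipient scaling a fixed-resolution video to full screen may see pixellation.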
  • the input module 201 , the processor module 204 , the transceiver module 206 , the driver module 208 and the output module 210 can be analogous to the input portion 220 , the processor portion 214 , the transceiver portion 212 , the driver portion 216 and the output portion 218 respectively.
  • the system 200 will be discussed in further detail with reference to FIG. 3 hereinafter.
  • FIG. 3 shows the system 200 in further detail, in accordance with an embodiment of the disclosure.
  • the transmitter device 200 a is shown in further detail, in accordance with an embodiment of the disclosure.
  • the driver module 208 can include an audio application programming interface (API) portion 302 , an audio driver portion 304 , a graphics API portion 306 and/or a graphics driver portion 308 .
  • the driver module 208 can include one or both of an audio API portion 302 and a graphics API portion 306 .
  • the driver module 208 can further include one or both of an audio driver portion 304 and a graphics driver portion 308 .
  • the output module 210 can include one or both of an audio processing unit (APU) portion 310 and a graphics processing unit (GPU) portion 312 .
  • the output module 210 can further include one or both of an audio reproduction portion 314 and a display portion 316 .
  • the audio API portion 302 can be coupled to the audio driver portion 304 and the graphics API portion 306 can be coupled to the graphics driver portion 308 .
  • the audio driver portion 304 can be coupled to the APU portion 310 and the graphics driver portion 308 can be coupled to the GPU portion 312 .
  • the APU portion 310 can be coupled to the audio reproduction portion 314 and the GPU portion 312 can be coupled to the display portion 316 .
  • the audio API portion 302 and graphics API portion 306 can be associated with an audio library and a graphics library respectively.
  • the audio library can include a collection of audio files such as mp3 based audio files or a collection of audio streams.
  • the graphics library can include a collection of graphics files/pictures files/clips.
  • the audio library and the graphics library can each be regarded as a standard library having a standard collection (e.g., of audio files and/or graphics files/pictures files/clips).
  • one or more assets can be generated based on input signals (i.e., from the input module 201 ) communicated to the processor module 204 . Therefore, an asset can, for example, correspond to a customized audio file and/or a customized graphics file not available in the audio library and/or graphics library.
  • the generated asset(s) can be communicated from the processor module 204 to one or both of the transceiver module 206 for further communication to the receiver device 200 b and driver module 208 for addition to the standard library.
  • the driver module 208 can be configured to receive and process the control signals to produce driver signals.
  • the audio API portion 302 and the audio driver portion 304 in combination, can be configured to receive and process the control signals to produce audio driver signals.
  • the graphics API portion 306 and the graphics driver portion 308 in combination, can be configured to receive and process the control signals to produce graphics driver signals.
  • Based on the control signals, appropriate selection(s) from one or both of the audio library and the graphics library can be made.
  • Based on such selection(s), one or both of audio and graphics driver signals can be generated.
  • Specifically, the audio driver signals can be based on one or more audio files from the audio library and the graphics driver signals can be based on one or more graphics files/clips from the graphics library. Therefore, the application signals can effectively be considered to be commands for making appropriate selection(s) from the audio library and/or the graphics library.
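  • The selection step can be pictured as a small dispatcher. This is a hypothetical sketch; the library contents and signal fields are made up for illustration:

```python
# Hypothetical sketch: control signals act as selection commands into
# standard audio and graphics libraries, yielding driver signals.
AUDIO_LIBRARY = {"bgm": "background.mp3", "strike": "strike.mp3"}
GRAPHICS_LIBRARY = {"alley": "alley.png", "ball": "ball.png"}

def to_driver_signals(control_signals):
    """Split control signals into audio and graphics driver signals."""
    audio, graphics = [], []
    for sig in control_signals:
        if sig["type"] == "audio":
            audio.append(AUDIO_LIBRARY[sig["select"]])
        elif sig["type"] == "graphics":
            graphics.append(GRAPHICS_LIBRARY[sig["select"]])
    return audio, graphics

audio, graphics = to_driver_signals([
    {"type": "audio", "select": "strike"},
    {"type": "graphics", "select": "ball"},
])
```

  • Since both sides hold the same standard library, the same selection commands yield the same driver signals at the transmitter and at the receiver.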
  • the output module 210 can be configured to receive and process one or both of the audio and graphics driver signals to produce output signals. Therefore, output signals can include one or both of audio output signals and graphics output signals.
  • the APU portion 310 and audio reproduction portion 314 can, in combination, be configured to receive and process the audio driver signals to produce audio output signals.
  • the GPU portion 312 and the display portion 316 can, in combination, be configured to receive and process the graphics driver signals to produce graphics output signals.
  • the output signals can, for example, correspond to a media stream which can be suitable for user perception.
  • the media stream can, for example, include audio output signals which can be audibly perceived and/or graphics output signals which can be visually perceived.
  • the audio output signals can correspond to the aforementioned audio based signals and the graphics output signals can correspond to the aforementioned graphics based signals.
  • the audio reproduction portion 314 can, for example, correspond to one or more speaker units.
  • the display portion 316 can, for example, correspond to a display unit.
  • the audio reproduction portion 314 can include a left speaker unit and a right speaker unit.
  • the left and right speaker units can be located at the left side of the display unit and at the right side of the display unit respectively.
  • the display unit can, for example, be a touch screen based display or a stereoscopic liquid crystal display (LCD).
  • the input module 201 , the processor module 204 , the transceiver module 206 , the driver module 208 and the output module 210 can be analogous to the input portion 220 , the processor portion 214 , the transceiver portion 212 , the driver portion 216 and the output portion 218 respectively.
  • the foregoing discussion pertaining to the driver module 208 and the output module 210 can analogously apply to the driver portion 216 and the output portion 218 respectively.
  • the applications module 202 may, for example, be running an application software or application program.
  • the application software/program being run can correspond to, for example, a game based application. Therefore, the application software/program can be associable with one or both of visually and audibly perceivable output (i.e., via the output module 210 ).
  • the system 200 will be discussed in further detail hereinafter in the context of the applications module 202 running, for example, a game based application.
  • the game based application can correspond to an electronic game which can be played by a user using, for example, the transmitter device 200 a.
  • in a game, there will typically be one or more game characters and one or more game environments (i.e., scene setting) which can be visually perceived via the display portion 316.
  • a gamer can play the game in accordance with the storyline or game rules.
  • there may be a need for the gamer to move one or more game characters in a game environment so as to achieve a certain objective.
  • the movable game character(s) can be moved in accordance with gamer control to achieve a certain objective in the game.
  • the game can include accompanying game audio such as background music, soundtracks and/or sound effects which can be audibly perceived via the audio reproduction portion 314 .
  • the game characters can include a bowler, a bowling ball and a plurality of bowling pins.
  • the game environment can be a bowling alley.
  • the movable game characters can be the bowler and the bowling ball.
  • the stationary game characters can be the bowling pins.
  • the game objective can be to knock down the bowling pins using the bowling ball and the game rules can correspond to real life bowling rules.
  • the bowler and the bowling ball can be moved in a manner so as to knock down as many bowling pins as possible.
  • the game audio can be the sound effect of a bowling ball knocking bowling pins down as the bowling ball contacts the bowling pins (i.e., collision).
  • running the game based application, the applications module 202 can generate game data, which can also be referred to as application data.
  • game data can correspond to the aforementioned application signals.
  • game data can, for example, be associated with one or more game characters and/or one or more game environments.
  • the game data can, for example, be further associated with game audio.
  • game data can include/be associated with audio accompaniment data.
  • Audio accompaniment data can be associated with sound effects data, background music data and/or soundtrack data in relation, respectively, to the aforementioned sound effects, background music and/or soundtracks.
  • Audio accompaniment data can further be associated with timing data. Timing data can relate to a specific instance/specific instances in the game when a certain audio file/certain audio files/audio stream(s) is/are played/accessed.
  • game data can include/be associated with object data corresponding to the aforementioned game character(s).
  • the object data can be associated with several objects. Of the several objects, there could be one or more objects of interest. The remaining objects (i.e., aside the one or more objects of interest) can be considered secondary objects.
  • object data can be associated with one or both of object(s) of interest and secondary object(s).
  • an object of interest can be the bowling ball and the secondary objects can be the bowling pins.
  • game data can include/be associated with scene data corresponding to the aforementioned game environment(s).
  • scene data can be associated with visually perceivable background/backdrop/scene depicting the scene setting relevant to the game.
  • the game environment can relate to, for example, the bowling alley and/or a bowling lane in the bowling alley.
  • the object data can be associated with object description(s) and scene data can be associated with scene description(s).
  • Each of the object description(s) and scene description(s) can, for example, be associated with vertex data, shape data, texture data and color data or any combination thereof.
  • Vertex data can be used as a basis for identification of movement and/or location as will be discussed later in further detail.
  • Texture data can be associated with appearance and/or perceived tactile quality of a surface.
  • texture data can be associated with, for example, surface type of the game character(s) (e.g., the bowling ball) and/or other objects in the game environment (e.g., the bowling lane).
  • texture data can be associated with whether the surface type of a game character(s) or an object in the game environment is reflective, shiny or non-reflective (e.g., the bowling ball has a glittering type surface/the bowling lane has a matt wood type surface).
  • Color data can be associated with visually perceivable color.
  • color data can be associated with color of the game character(s) (e.g., the bowling ball) and/or other objects in the game environment (e.g., the bowling lane).
  • the color data can be indicative that the bowling ball is yellow in color and/or the bowling lane is brown in color.
  • Shape data can be associated with perceived outline/form. Specifically, shape data can be associated with/indicative of, for example, shape of the game character(s) (e.g., geometric shape of the bowling ball, bowling pins) and/or shapes of other objects in the game environment.
  • shape data can be associated with/indicative of, for example, shape of the game character(s) (e.g., geometric shape of the bowling ball, bowling pins) and/or shapes of other objects in the game environment.
  • driver signals can be generated based on the control signals and the control signals can be generated based on the game data
  • the game data can effectively be considered to be commands for making appropriate selection(s) from the audio library and/or the graphics library.
  • an audio file can be selected from the audio library based on audio accompaniment data. Therefore based, effectively, on audio accompaniment data, audio driver signals can be generated.
  • a graphics file can be selected from the graphics library based on object data and/or scene data. Therefore based, effectively, on object data and/or scene data, graphics driver signals can be generated.
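The command-style selection described in the bullets above can be sketched as follows. The library contents, the (kind, name) command format and all function names are illustrative assumptions for this sketch, not part of the disclosed system:

```python
# Sketch: application signals ("world context") act as compact commands
# selecting pre-stored assets from local audio/graphics libraries, so
# only the command, not the asset itself, needs to be communicated.

AUDIO_LIBRARY = {
    "thud": "sfx/thud.wav",
    "rolling": "sfx/rolling.wav",
    "collision": "sfx/collision.wav",
}

GRAPHICS_LIBRARY = {
    "bowling_ball": "gfx/ball.png",
    "bowling_pin": "gfx/pin.png",
    "bowling_lane": "gfx/lane.png",
}

def resolve_command(command):
    """Map a few-byte (kind, name) command to the locally stored asset."""
    kind, name = command
    library = AUDIO_LIBRARY if kind == "audio" else GRAPHICS_LIBRARY
    return library[name]
```

Resolving a command such as `("audio", "thud")` yields the locally stored asset, from which the corresponding driver signals could then be generated.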
  • input signals can be communicated from the input module 201 to the applications module 202 .
  • the input signals can be based on the aforementioned gamer control.
  • application signals can be generated based on the input signals. Since application signals can be generated by the applications module 202 based on the input signals, it can be appreciated that the input signals can effectively affect visual and/or audio output at the output module 210 (i.e., affect visual and/or audible perception of the media stream).
  • input signals i.e., gamer control
  • input signals can be communicated from the input module 201 to the applications module 202 for, for example, moving an object of interest.
  • the applications module 202 can produce game data corresponding to, for example, movement of the object of interest.
  • Control signals generated by the processor module 204 can thus be based at least on the movement of the object of interest.
  • an appropriate selection can be made from the graphics library to produce corresponding graphics driver signals.
  • output signals corresponding to a media stream showing (i.e., visually perceivable) an object of interest moving can be produced. Therefore visual perception at the output module 210 can be affected depending on the input signals.
  • vertex data can be used as a basis for identification of movement and/or location.
  • the processor module 204 can be configured to process the vertex data of the object of interest in a manner so as to identify location of the object of interest as it moves.
  • the processor module 204 can be configured to process the vertex data of the object of interest so as to identify the location of the object of interest on the display unit (i.e., onscreen).
  • the processor module 204 can be configured to process the vertex data of the object of interest so as to identify the initial location of the object of interest, the location(s) of the object of interest as it moves and the end location of the object of interest after it stops moving (i.e., comes to rest).
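One hedged way to identify location from vertex data, as described above, is to take the centroid of an object's vertices at each frame. The (x, y) vertex representation and function names below are assumptions for illustration:

```python
def object_location(vertices):
    """Estimate an object's on-screen location as the centroid of its
    vertex data, given as a list of (x, y) pairs."""
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n,
            sum(y for _, y in vertices) / n)

def track_locations(frames):
    """Given per-frame vertex lists for a moving object, return its
    initial location, intermediate locations and end (rest) location."""
    locations = [object_location(frame) for frame in frames]
    return locations[0], locations[1:-1], locations[-1]
```

Tracking the centroid frame by frame yields the initial location, the locations while moving, and the end location once the object comes to rest.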
  • since game data can, for example, be associated with one or more game characters, one or more game environments and/or game audio, visual and/or audio perception can be affected by the input signals in other ways apart from the above example of movement of the object of interest.
  • for example, the appearance of the game environment and/or the appearance of an object of interest/secondary object can be altered (e.g., by modifying vertex data, texture data and/or color data based on the input signals).
  • audio accompaniment data such as sound effects data, background music data, soundtrack data and/or timing data can be altered based on input signals (i.e., audio perception can be affected by the input signals).
  • input signals can be communicated for selecting one or both of object(s) of interest and the secondary object(s), and the selection can be one or both of audibly and visually perceived at the output module 210 .
  • Other examples are also useful.
  • input signals can effectively affect audio output at the output module 210 .
  • the manner in which input signals can effectively affect audio output at the output module 210 will now be discussed in further detail.
  • the processor module 204 can be configured to process audio accompaniment data based on location of the object of interest as it moves.
  • the processor module 204 can, for example, be configured to process timing data and sound effects data based on location of the object of interest as it moves.
  • timing data and sound effects data can be processed by the processor module 204 such that a “thud” sound effect can be audibly perceived as the bowling ball is dropped at the start point, a “rolling” sound effect can be audibly perceived as the bowling ball rolls towards the bowling pins and a “collision” sound effect can be audibly perceived as the bowling ball collides with the bowling pins.
  • the “thud” sound effect, the “rolling” sound effect and the “collision” sound effect are examples of sound effects data.
  • the start point can be visually perceived to be near the left side of the display portion 316 and the end point can be visually perceived to be near the right side of the display portion 316 . Therefore the timing data can be processed such that the “thud” sound effect, “rolling” sound effect and “collision” sound effect are timed such that the “thud” sound effect can be substantially audibly perceived only at the left side of the display portion 316 (i.e., via the left speaker unit) as the bowler is visually perceived to drop the bowling ball, the “rolling” sound effect can be substantially audibly perceived to vary in loudness as the bowling ball is visually perceived to roll from the left side to right side of the display portion 316 (i.e., initially loudest at the left side of the display portion 316 at the start point, gradually reducing loudness at the left side of the display portion 316 as the bowling ball rolls towards the right side of the display portion 316 , gradually increasing loudness at the right side of the display portion 316 as the bowling ball approaches the right side of the display portion 316 ) and the “collision” sound effect can be substantially audibly perceived only at the right side of the display portion 316 (i.e., via the right speaker unit) as the bowling ball is visually perceived to collide with the bowling pins.
  • the processor module 204 can, in one embodiment, be configured to process the audio accompaniment data (associable with timing data and sound effect(s) data) in a manner so as to time sound effect(s) in accordance with visual perception of the object(s) of interest.
  • the “thud” sound effect can be timed such that it is heard when it can be visually perceived that the bowler has dropped the bowling ball and the “collision” sound effect can be timed such that it is heard when it can be visually perceived that the bowling ball collides with the bowling pins.
  • the processor module 204 can, in another embodiment, be configured to process the audio accompaniment data in a manner so as to position the sound effect(s) in accordance with visual perception of the object(s) of interest.
  • the sound effect(s) can be associated with a location in the game environment (e.g., bowling alley).
  • the “thud” sound effect can be associated with a location at the start point of the game environment (e.g., location of the bowler) and the “collision” sound effect can be associated with a location at the end point of the game environment (e.g., location of the bowling pins).
  • the “thud” sound effect can be audibly perceived by a gamer to be emitted from a location which is substantially at the left side of the display portion 316 and the “collision” sound effect can be audibly perceived by a gamer to be emitted from a location which is substantially at the right side of the display portion 316 .
  • the processor module 204 can be configured to process the audio accompaniment data in a manner so as to allow audio positioning based on object(s) of interest.
  • the processor module 204 can, in yet another embodiment, be configured to process the audio accompaniment data in a manner so as to vary audio characteristic(s) (e.g., loudness) of the sound effect(s) in accordance with visual perception of the object(s) of interest.
  • the audio characteristic of a sound effect can be loudness of the sound effect.
  • the loudness of the “rolling” sound effect at the right/left side of the display portion 316 can be varied in accordance with rolling movement of the bowling ball.
  • the processor module 204 can be configured to process the audio accompaniment in a manner so as to time sound effect(s) in accordance with visual perception of the object(s) of interest, so as to position the sound effect(s) in accordance with visual perception of the object(s) of interest and/or so as to vary audio characteristic(s) of the sound effect(s) in accordance with visual perception of the object(s) of interest.
  • the processor module 204 can be configured to process the audio accompaniment data in a manner so that sound effect(s) can be audibly perceived in accordance with visual perception of the object(s) of interest.
  • the timing of the sound effects (e.g., “thud,” “rolling,” and “collision”), the audio characteristic(s) (e.g., loudness) and the position of the sound effects can be based on visually perceived location/activities (e.g., drop at the start point, rolling from the start point to the end point and collision at the end point) of the object of interest (e.g., bowling ball).
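The position-dependent loudness described above can be illustrated with a simple stereo pan. The linear pan law and the 640-pixel screen width are assumptions for this sketch, not the disclosed processing:

```python
def stereo_gains(x, screen_width=640.0):
    """Left/right speaker gains for a sound emitted at horizontal
    on-screen coordinate x, using a simple linear pan law: full left at
    the left edge, full right at the right edge, equal at the centre."""
    pan = max(0.0, min(1.0, x / screen_width))
    return (1.0 - pan, pan)

def pan_rolling_effect(ball_xs, screen_width=640.0):
    """Per-frame (left, right) gains for the 'rolling' effect as the
    ball is visually perceived to roll across the screen."""
    return [stereo_gains(x, screen_width) for x in ball_xs]
```

As the ball's x coordinate increases, the left gain falls and the right gain rises, matching the described perception of the "rolling" effect moving from the left speaker unit to the right.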
  • the processor module 204 can be configured to process the audio accompaniment data so that a “glittering”/reverb sound effect/background music/soundtrack can be produced corresponding to the texture data which indicates that the object in the game environment is shiny.
  • since audio accompaniment data can be processed by the processor module 204 (e.g., audio positioning based on object(s) of interest), 3D based audio and/or audio modifications can be made possible.
  • game data can be associated with object data, scene data and/or audio accompaniment data (i.e., any of the object data, scene data and audio accompaniment data or any combination thereof).
  • object data can be associated with object description(s)
  • scene data can be associated with scene description(s).
  • each of the object data, scene data and audio accompaniment data can effectively be a basis for providing a description of the game world (which also can be referred to as application world) which can, for example, be visually and audibly perceived via the output module 210 .
  • the applications module 202 can be considered to be capable of (i.e., configurable for) generating a description (e.g., a description of the game world where the application being run is a game based application) based on the application software or application program (i.e., an application) being run by the applications module 202 and/or input signals communicated from the input module 201 .
  • the description generated by the applications module 202 can be communicated from the applications module 202 in the form of the aforementioned application signals.
  • media stream generated at the transmitter device 200 a can be based on application signals (e.g., game data) communicated from the applications module 202
  • the generated media stream can be associated with a description which can be one or both of visually (i.e., a visual based description) and audibly (i.e., an audio based description) perceived.
  • the description associated with the generated media stream can, effectively, be associated with/based on the description generated by the applications module 202 .
  • description (communicable in the form of application signals) generated by the applications module 202 can be based on one or both of the application being run and the input signals.
  • the object data can be based upon to provide a visual based description of, for example, an object of interest (shape, color, texture etc.).
  • the scene data can be based upon to provide a visual based description of the game environment (e.g., bowling alley).
  • the audio accompaniment data can be based upon to provide an audio based description of, for example, movement of an object of interest (e.g., visually perceived movement of the bowling ball from one end of the display portion 316 to another end of the display portion 316 ).
  • game data (i.e., corresponding to application signals)/world context can be based upon to provide a visual and/or audio based description of the game world.
  • game data can correspond to the application signals and the application signals can effectively be considered to be commands for making appropriate selection(s) from the audio library and/or the graphics library. Therefore, the aforementioned commands can correspond to/be referred to as “world context”.
  • game data and/or generated asset(s) can be communicated from the transceiver module 206 to the transceiver portion 212 .
  • the received game data (i.e., received world context) and/or received generated asset(s) can be processed at the receiver device 200 b in a manner analogous to the manner in which game data and/or generated asset(s) can be processed at the transmitter device 200 a.
  • the foregoing discussion pertaining to the processing of game data and/or generated asset(s) at the transmitter device 200 a analogously applies to the processing of received game data (i.e., received world context) and/or generated asset(s) at the receiver device 200 b.
  • the received game data can be a basis for producing a replicated media stream (i.e., of the generated media stream at the transmitter device 200 a ) at the receiver device 200 b.
  • the replicated media stream can be associated with a description (e.g., one or both of visual based description and audio based description).
  • the description of the replicated media stream can correspond to the description of the media stream at the transmitter device 200 a.
  • input signals can be generated by, for example, a user operating the input portion 220 in a manner so as to manipulate/modify/change any portion/part of the replicated media stream per user preference.
  • input signals can be communicated from the input portion 220 to the processor portion 214 .
  • the processor portion 214 can be configured to process the input signals and the received application signals (i.e., received game data/received world context) and, optionally, received asset(s) to produce control signals.
  • based on the input signals, one or both of the visual based description (e.g., object description(s) and/or scene description(s)) and the audio based description (i.e., audio accompaniment data) associated with the replicated media stream can be manipulated/modified/changed.
  • vertex data, texture data, shape data and/or color data can be manipulated in a manner so as to change the appearance of, for example, the object of interest/game environment.
  • depth information can be added and/or modified. This is particularly useful for two dimensional (2D) objects in the game. Specifically, if the bowling ball (i.e., object of interest) appears to be 2D in the bowling game, it can be useful to include depth information so that the bowling ball can be visually perceived as a 3D object (i.e., 3D bowling ball instead of the original 2D bowling ball in the game). In this regard, artificial 3D objects can be created and/or depth perception can be enhanced.
  • shadow information can be added and/or modified.
  • shadows can be added to, for example, the object(s) of interest or the original shadow information of the object(s) of interest can be modified.
  • Shadows can, for example, be computed based on shape data (i.e., geometry of, for example, the object(s) of interest) and/or pre-defined light sources.
  • the scene description(s) could include lighting data to indicate one or more light sources in the game environment and shadow information can be computed based on lighting data and shape data of the object(s) of interest.
  • shape data can indicate that the geometric shape of the bowling ball (i.e., object of interest) is spherical and lighting data can indicate that there are some light sources (e.g., ceiling lights, spotlights) in the bowling alley (i.e., game environment). Therefore, shadow information can be computed so that the angle/size etc. of the shadow of the bowling ball can change as it rolls along the bowling lane and based on whether it is rolling towards/away from a light source.
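The shadow computation described above can be sketched with a 2D point-light projection. The single point light, flat ground plane and function names are simplifying assumptions, not the disclosed algorithm:

```python
def projected_shadow(ball_center, ball_radius, light_pos, ground_y=0.0):
    """Project a spherical object's shadow onto a ground line from a
    point light in 2D. Returns (shadow_center_x, shadow_half_width):
    the shadow shifts and scales as the ball rolls relative to the light."""
    lx, ly = light_pos
    bx, by = ball_center
    # Parametrise the ray light -> ball centre and intersect the ground.
    t = (ly - ground_y) / (ly - by)      # magnification factor
    shadow_x = lx + t * (bx - lx)
    return shadow_x, ball_radius * t
```

Recomputing this per frame would change the shadow's position and size as the ball rolls along the lane toward or away from a light source, as the bullet above describes.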
  • artificial shadows and/or original shadows of, for example, an object of interest can be created and/or modified to enhance visual perception and/or enhance depth perception.
  • lighting effects can be added/modified/customized.
  • the scene description(s) can include lighting data to indicate one or more light sources in the game environment.
  • the light sources generally are indicative of portions of the game environment that appear to be bright.
  • the game environment can include portions that are dark (e.g., dark corners) where light sources are not available. Therefore, lighting effects can be customized or added so that dark corners in the game environment can become illuminated. Lighting effects can also be modified so as to increase or reduce brightness in bright portions of the game environment.
  • customized visuals (which can also be referred to as the aforementioned assets) can be added.
  • scene data can be manipulated in a manner so as to include additional customized visuals.
  • the customized visuals can, for example, be visually perceived to be integrated with the game environment.
  • Visual cues can relate to visual aids to help a gamer (who may be a beginner) to play the game.
  • visual cues can be in the form of arrows or projected rolling paths of the bowling ball.
  • a visual cue can be augmented on-screen (e.g., visually perceivable game environment) to show a gamer how to play the game more effectively.
  • Maps can relate to game maps showing an overview of the game environment. With a game map, a gamer may be able to better appreciate the game and navigate game characters in a more efficient manner while playing the game.
  • Advertisements can relate to, for example, visual banners advertising product(s) and/or service(s) of, for example, a sponsor (e.g., of the game application).
  • the color of an object of interest can be changed per user preference.
  • the texture of an object of interest can be changed per user preference (e.g., the surface of the bowling ball, which may originally appear to be shiny, can be replaced with a “sandy” look or “charcoal” look).
  • audio positioning and/or 3D based audio can be changed based on a user's position relative to the receiver device 200 b.
  • the audio characteristics (e.g., pitch, tone, loudness) of the game audio (e.g., background music, soundtracks and/or sound effects etc.) can be changed per user preference.
  • a user operating the receiver device 200 b can be allowed to manipulate one or more portions of the replicated media stream per user preference. More specifically, where the media stream is, for example, a video, a user may be allowed to alter/change/manipulate, for example:
  • the object of interest (e.g., bowling ball);
  • the secondary objects (e.g., bowling pins);
  • the color of the object of interest (e.g., from the original color (i.e., blue) to red in color);
  • the color of the secondary objects (i.e., the bowling pins).
  • manipulation of one or more portions of the replicated media stream per user preference can be by way of replacing one or more original portions of the replicated media stream with corresponding new portions.
  • the remaining portions (i.e., those not subjected to replacement per user preference) of the replicated media stream can remain unchanged.
  • for example, the replicated media stream can be in relation to object data associated with the object of interest being the bowling ball and sound effects data associated with a “glittering”/reverb sound effect for a shiny surface (e.g., the bowling ball has a shiny surface).
  • Manipulation of one or more portions of the replicated media stream per user preference can be by way of replacing the bowling ball (original object of interest) with, for example, a bowling pin and the “glittering” (original sound effect) sound effect with another sound effect such as a bird tweeting.
  • one or more original portions (e.g., bowling ball/glittering sound effect) of the replicated media stream can be replaced with corresponding new portions (e.g., bowling pin/bird tweeting sound effect) per user preference.
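The per-user replacement described above can be sketched as an override table applied only when rendering the local (replicated) stream, leaving the received description untouched. The dictionary keys and values are illustrative assumptions:

```python
# Sketch: the receiver keeps a table of per-user overrides keyed by the
# description element being replaced. Unreplaced portions stay identical
# to the received description, and the received data itself is unchanged.

def apply_overrides(description, overrides):
    """Return a rendering description with user replacements applied."""
    return {key: overrides.get(key, value) for key, value in description.items()}

received = {"object_of_interest": "bowling_ball",
            "surface_sfx": "glittering",
            "environment": "bowling_alley"}
user_prefs = {"object_of_interest": "bowling_pin",
              "surface_sfx": "bird_tweeting"}
local_view = apply_overrides(received, user_prefs)
```

Here the bowling ball and the "glittering" effect are replaced per user preference, while the environment portion remains unchanged.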
  • for example, where the visually perceivable color of the object of interest is blue in the generated media stream, the visually perceivable color of the object of interest in the replicated media stream (i.e., at the receiver device 200 b ) can be changed per user preference (e.g., from blue to green).
  • the change in terms of the visually perceivable color of the object of interest in the replicated media stream (at the receiver device 200 b ) should not affect the visually perceivable color of the object of interest in the generated media stream (at the transmitter device 200 a ).
  • the visually perceivable color of the object of interest in the generated media stream at the transmitter device 200 a should remain as blue (i.e., original color) even though there is a change (i.e., from blue to green) in terms of visually perceivable color for the object of interest in the replicated media stream at the receiver device 200 b.
  • each of the generated media stream (i.e., at the transmitter device 200 a ) and the replicated media stream (i.e., at the receiver device 200 b ) can include one or more details which can be audio based (i.e., audio based description such as sound effect(s) etc.) and/or visual based (i.e., visual based description such as object(s) of interest, secondary object(s) etc.).
  • the details of the replicated media stream at the receiver device 200 b should be similar to, if not substantially the same as, the details of the generated media stream at the transmitter device 200 a.
  • one or more details of the replicated media stream can be changed/altered/manipulated (e.g., per earlier discussed examples regarding manipulation/modification/change in one or both of visual based description and audio based description).
  • one or more specific detail(s) (i.e., one or more portions) of the replicated media stream can be changed/altered/manipulated per user preference while the remaining details (i.e., remaining portions) can remain unchanged (i.e., similar/substantially identical to corresponding details of generated media stream).
  • the graphics file can correspond to, for example, a visually perceivable 640×480 image of a line which is about 300K pixels in terms of data size.
  • in contrast, for the system 200 , a command (i.e., world context) can be communicated instead, and the communicated command may only be a few bytes in terms of data size. Therefore, for the system 200 , there is a need to communicate only a few bytes of data (i.e., world context) instead of 300K pixels (as in conventional information sharing techniques). This facilitates efficient communication (e.g., a substantial reduction in required communication bandwidth and/or an increase in communication speed).
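The payload contrast above can be made concrete with rough arithmetic. The raw 24-bit RGB encoding and the `"draw:line"` command string are illustrative assumptions, not the system's actual formats:

```python
# Sketch contrasting payload sizes: sending rendered pixels versus sending
# the world-context command that reproduces the same visual locally.

def pixels_bytes(width, height, bytes_per_pixel=3):
    """Approximate payload for communicating a rendered image as raw pixels."""
    return width * height * bytes_per_pixel

def command_bytes(command):
    """Approximate payload for communicating the equivalent command string."""
    return len(command.encode("utf-8"))
```

A 640×480 raw RGB image is about 900 KB (307,200 pixels), while a short command string is under a dozen bytes, illustrating the bandwidth saving the bullet describes.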
  • a communicated (i.e., shared) video may be stored (e.g., at the recipient side) in order for the communicated video to be properly rendered.
  • conventionally, substantial resources (e.g., storage space) may be required for such storage. For the system 200 , since only world context need be communicated/stored, such resource requirements (e.g., storage space) can be substantially reduced.
  • rendering at the receiver device 200 b side can be of arbitrary resolution without pixellation.
  • for the system 200 , since a user can be allowed to manipulate one or more portions (i.e., one or more details) of the replicated media stream per user preference, user interaction with the replicated media stream (e.g., video) in a user friendly manner can be facilitated.
  • FIG. 4 shows a process flow diagram in relation to a method 400 in association with the system 200 in accordance with one embodiment of the disclosure.
  • FIG. 4 shows a method 400 for replicating a media stream generated by the transmitter device 200 a.
  • the media stream can, for example, be replicated at the receiver device 200 b.
  • the media stream being replicated can be capable of user interaction in addition to being one or both of visually and audibly perceivable.
  • the replicated media stream can be one or both of visually and audibly perceivable at the receiver device 200 b.
  • the method 400 can include a communication step 402 .
  • the communication step 402 can include communicating world context generated at the transmitter device 200 a.
  • the generated world context can be communicated from the transmitter device 200 a.
  • world context can be generated at the transmitter device 200 a and communicated to the receiver device 200 b from the transmitter device 200 a.
  • the generated world context can be associated with description generated by the applications module 202 at the transmitter device 200 a.
  • Description generated by the applications module 202 can be based on one or both of an application being run (i.e., by the applications module 202 ) and input signals (i.e., communicated from the input module 201 to the applications module 202 ).
  • description generated by the applications module 202 can be communicated from the applications module 202 in the form of the aforementioned application signals.
  • the generated world context can be based upon to generate the media stream. Additionally, the generated media stream can be associated with a description which can be one or both of visually perceived (i.e., visual based description) and audibly perceived (i.e., audio based description). Description of the generated media stream can be associated with/based on description generated by the applications module 202 based on the application being run and/or the input signals.
  • the method can further include a processing step 404 .
  • the processing step 404 can include processing the received world context (e.g., by the processor portion 214 ) in a manner so as to replicate the media stream generated at the transmitter device 200 a.
  • world context communicated from the transmitter device 200 a can be received at the receiver device 200 b and processed in a manner so as to replicate the media stream.
  • the replicated media stream at the receiver device 200 b can be associated with a description (i.e., one or both of audio based description and visual based description) corresponding to the description of the media stream generated at the transmitter device 200 a.
  • the details of the replicated media stream at the receiver device 200 b, as one or both of visually and audibly perceived, should be similar to, if not substantially the same as, the details of the generated media stream at the transmitter device 200 a.
  • the received world context is capable of being further processed in a manner so as to change the description associable with the replicated media stream. In this manner, user interaction can be facilitated.
  • received world context can be processed in a manner so as to allow the aforementioned manipulation of one or more portions of the replicated media stream per user preference. With manipulation per user preference (i.e., based on input signals communicated from the input portion 220 ), it is appreciable that one or more details of the replicated media stream can be changed/altered/manipulated (e.g., per earlier discussed examples regarding manipulation/modification/change in one or both of visual based description and audio based description).
  • one or more specific detail(s) (i.e., one or more portions) of the replicated media stream can be changed/altered/manipulated per user preference while the remaining details (i.e., remaining portions) can remain unchanged (i.e., similar/substantially identical to corresponding details of generated media stream).
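As an illustration of the partial manipulation described above, the following Python sketch (field names are hypothetical, not part of the disclosure) changes only the user-overridden details of a description while the remaining details stay identical to those of the generated media stream:

```python
# Hypothetical sketch: apply user-preference overrides to selected
# portions of a replicated media stream's description, leaving the
# remaining portions identical to the generated media stream.

def apply_user_preferences(description: dict, overrides: dict) -> dict:
    """Return a new description with only the overridden portions changed."""
    manipulated = dict(description)          # copy of the generated description
    for key, value in overrides.items():
        if key in manipulated:               # only touch known portions
            manipulated[key] = value
    return manipulated

generated = {"ball_color": "red", "lane_texture": "wood", "pin_count": 10}
replicated = apply_user_preferences(generated, {"ball_color": "blue"})
# "ball_color" changes per user preference; the other details remain unchanged.
```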
  • FIG. 5 shows a variation of the system 200 in accordance with an embodiment of the disclosure.
  • the system 200 can, as an option, include a recipient device 500 .
  • the recipient device 500 can be coupled to one or both of the transmitter device 200 a and the receiver device 200 b.
  • one or both of the generated media stream (from the transmitter device 200 a ) and the replicated media stream (from the receiver device 200 b ) can be communicated to the recipient device 500 .
  • the recipient device 500 can include an input device (not shown) and an output device (not shown).
  • the input device can be analogous to the aforementioned input module 201 and input portion 220 .
  • the input device can be used to generate input signals which can be communicated from the recipient device 500 to one or both of the transmitter device 200 a and the receiver device 200 b.
  • the output device can be analogous to the output module 210 and the output portion 218 .
  • the input signals communicated from the recipient device 500 can be used to manipulate the generated media stream and/or manipulate the replicated media stream.
  • the generated media stream and/or replicated media stream can be manipulated by input signals communicated from the recipient device in a manner analogous to the manipulation of replicated media stream at the receiver device 200 b based on input signals generated via the input portion 220 per earlier discussion.
  • replicated media stream can be further communicated from the receiver device 200 b to the recipient device 500 and input signals can be communicated from the recipient device 500 to the receiver device 200 b.
  • the replicated media stream can be one or both of visually and audibly perceived at the recipient device 500 via the output device.
  • Input signals can be communicated from the recipient device 500 to manipulate the replicated media stream (at the receiver device 200 b ) in a manner analogous to manipulation of the replicated media stream at the receiver device 200 b based on input signals generated via the input portion 220 per earlier discussion. Therefore, a manipulated replicated media stream can be generated based on processing by the processor portion 214 of the received world context and input signals communicated from the recipient device 500 .
  • the manipulated replicated media stream can be communicated from the receiver device 200 b to the recipient device 500 .
  • the manipulated replicated media stream can be one or both of visually and audibly perceived at the recipient device 500 via the output device.
  • the earlier mentioned method 400 of FIG. 4 can, as an option, further include (not shown) further communicating the replicated media stream from the receiver device 200 b to the recipient device 500 so that the replicated media stream can be one or both of visually and audibly perceivable at the recipient device 500 .
  • the method 400 can, as an option, yet further include (not shown) generating and communicating input signals from the recipient device 500 to the receiver device 200 b. Appreciably, the input signals communicated from the recipient device 500 can be used to change the description associable with the replicated media stream.
  • At least one virtual camera model (not shown) can be defined.
  • the virtual camera model can be configured to view a primary scene.
  • the primary scene can, for example, be a game scene (e.g., scene data) showing movement of the object of interest.
  • the virtual camera model can be initially positioned to view the primary scene.
  • the virtual camera model, for example, can be positioned to view a primary scene where the bowling ball (i.e., object of interest) is rolled across the bowling lane. It is appreciable that the virtual camera model can be further configured to view a secondary scene instead of the primary scene. In this regard, the position of the virtual camera model can be changed from the initial position so as to view the secondary scene.
  • the secondary scene can, for example, be associated with a secondary object.
  • the virtual camera model can, for example, be positioned to view a secondary scene where the bowling pins (i.e., secondary object(s)) are located. This is useful where a view from a different perspective is desired. More specifically, a user of the receiver device 200 b may wish to only observe how the bowling ball collides with the bowling pins as opposed to observing the entire process of the bowling ball rolling across the bowling lane.
  • visual based description can be in relation to perspective description(s).
  • Perspective description(s) can relate to, for example, change of view (i.e., perspective) from the primary scene to the secondary scene.
  • the processor portion 214 can be configured to receive and process game data in a manner so as to change the position of the virtual camera model. Changing the position of the virtual camera model can be regarded as a change in perspective description.
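The repositioning of a virtual camera model from the primary scene to the secondary scene can be sketched minimally as follows (the coordinates and attribute names are illustrative assumptions, not taken from the disclosure):

```python
# Hypothetical sketch of a virtual camera model repositioned from a
# primary scene (bowling ball rolling down the lane) to a secondary
# scene (the bowling pins). Coordinates are illustrative only.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    position: tuple = (0.0, 2.0, -5.0)   # initial position viewing the primary scene
    target: str = "primary"              # scene currently in view

    def view_secondary(self, new_position=(0.0, 2.0, 18.0)):
        """Change position so the camera views the secondary scene instead."""
        self.position = new_position
        self.target = "secondary"        # change in perspective description

camera = VirtualCamera()
camera.view_secondary()                  # user prefers to watch only the pins
```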
  • translation by one or both of the processor module 204 and the processor portion 214 may be possible.
  • commands (i.e., world context) from the transmitter device 200 a can be based on/in the form of an Open Graphics Library based instruction set.
  • the Open Graphics Library based instruction set can be translated to another instruction set such as Web Graphics Library based instruction set for, for example, rendering in a web (i.e., internet) browser at the receiver device 200 b (e.g., where the output portion 218 corresponds to a web browser).
  • Translation can be performed, either manually or dynamically, by one or both of the processor module 204 and the processor portion 214 .
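A minimal sketch of such a translation step, assuming a small hypothetical subset of commands (a real translator would need to cover the full Open Graphics Library and Web Graphics Library instruction sets):

```python
# Hypothetical sketch: translate a small set of OpenGL-style commands
# into WebGL-style (JavaScript) call names for rendering in a web browser.
# The mapping table is illustrative, not a complete instruction set.

OPENGL_TO_WEBGL = {
    "glClear":      "gl.clear",
    "glDrawArrays": "gl.drawArrays",
    "glUniform1f":  "gl.uniform1f",
}

def translate(commands):
    """Translate each (name, args) command, passing unknown ones through."""
    out = []
    for name, args in commands:
        webgl_name = OPENGL_TO_WEBGL.get(name, name)
        out.append((webgl_name, args))
    return out

stream = [("glClear", ("COLOR_BUFFER_BIT",)), ("glDrawArrays", ("TRIANGLES", 0, 3))]
translated = translate(stream)
```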
  • Synchronization can be in relation to time-stamping the generated commands (i.e., world context) so as to synchronize the aforementioned graphics based signals and accompanying audio based signals.
  • graphics based signals and accompanying audio based signals communicated from the transmitter device 200 a can be synchronized at the receiver device 200 b. This is useful for rendering commands (i.e., world context) at an appropriate frame-rate that can be supported at the receiver device 200 b.
  • Synchronization can be performed, either manually or dynamically, by one or both of the processor module 204 and the processor portion 214 .
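Time-stamp based synchronization can be sketched as pairing each graphics command with the audio signal nearest in time (the tolerance value and tuple layout below are assumptions made for illustration):

```python
# Hypothetical sketch: pair time-stamped graphics commands with the
# accompanying audio signals at the receiver, so both can be rendered
# together at a frame-rate the receiver supports.

def synchronize(graphics, audio, tolerance=0.02):
    """Pair each graphics command with the first audio frame within tolerance.

    graphics, audio: lists of (timestamp_seconds, payload) tuples.
    Returns a list of (graphics_payload, audio_payload_or_None) pairs.
    """
    pairs = []
    for g_ts, g_payload in graphics:
        match = None
        for a_ts, a_payload in audio:
            if abs(a_ts - g_ts) <= tolerance:
                match = a_payload
                break
        pairs.append((g_payload, match))
    return pairs

gfx = [(0.000, "frame0"), (0.033, "frame1")]
aud = [(0.001, "audio0"), (0.034, "audio1")]
synced = synchronize(gfx, aud)
```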
  • Encoding/compression may be possible. Encoding/compression can be in relation to reducing the amount of data (per frame) communicated from the transmitter device 200 a to the receiver device 200 b. For example, in terms of encoding, an encoding scheme such as differential coding can be used. Alternatively, in terms of compression, instructions can be encoded into compact commands for rendering. Encoding/compression can be performed, either manually or dynamically, by one or both of the processor module 204 and the processor portion 214 .
  • the system 200 can further include an intermediary server (not shown).
  • the intermediary server can be referred to as a proxy or simply referred to as a server.
  • the transmitter device 200 a can be coupled to the receiver device 200 b via the intermediary server.
  • commands (i.e., world context) can be communicated from the transmitter device 200 a to the intermediary server.
  • the intermediary server can include a processor (not shown).
  • input signals can be generated using, for example, the input portion 220 and can be communicated from the receiver device 200 b to the intermediary server.
  • the input portion 220 can be coupled (not shown) to the transceiver portion 212 for transmitting the input signals from the receiver device 200 b. Therefore, it is possible for a user to manipulate one or more portions of the replicated media stream per user preference in the manner analogous per description with reference to input signals being communicated from the input portion 220 to the processor portion 214 for processing.
  • processing at the intermediary server is useful where it is desired to avoid imposing a processing burden on, for example, the processor portion 214 at the receiver device 200 b side.
  • the input signals can be communicated to the intermediary server's processor so that the intermediary server's processor can process the received world context based on the input signals.
  • world context can be communicated from the transmitter device 200 a to the intermediary server and input signals can be communicated from the receiver device 200 b to the intermediary server.
  • the intermediary server's processor can be configured to process the received world context based on the input signals to generate processed world context signals.
  • the processed world context signals can correspond to the aforementioned control signals.
  • the processed world context signals can be communicated from the intermediary server to, for example, the receiver device 200 b for further processing by the driver portion 216 .
  • the driver portion 216 can be configured to process the processed world context signals in a manner analogous to control signals communicated from the processor portion 214 .
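The intermediary server's processing can be sketched as combining the received world context with input signals from the receiver device to produce processed world context signals (the field names below are hypothetical, for illustration only):

```python
# Hypothetical sketch: an intermediary server (proxy) processes received
# world context based on input signals from the receiver device, producing
# processed world context signals (corresponding to control signals) for
# the receiver's driver portion. This offloads work from the receiver.

def process_world_context(world_context: dict, input_signals: dict) -> dict:
    """Apply receiver-side input signals to the received world context."""
    processed = dict(world_context)
    if "camera" in input_signals:            # e.g., switch to the secondary scene
        processed["camera"] = input_signals["camera"]
    if "volume" in input_signals:            # e.g., change audio based description
        processed["volume"] = input_signals["volume"]
    return processed

context = {"camera": "primary", "volume": 0.8, "scene": "bowling"}
signals = {"camera": "secondary"}
control = process_world_context(context, signals)   # processed world context
```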

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Stereophonic System (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • Controls And Circuits For Display Device (AREA)
US14/294,898 2014-01-03 2014-06-03 System suitable for efficient communication of media stream and a method in association therewith Abandoned US20150195628A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
TW103143806A TW201531324A (zh) 2014-01-03 2014-12-16 System suitable for efficient communication of media stream and related method thereof
PCT/SG2014/000616 WO2015102532A1 (fr) 2014-01-03 2014-12-24 System suitable for efficient communication of media stream and associated method
US16/432,191 US11410358B2 (en) 2014-01-03 2019-06-05 System suitable for efficient communication of media stream and a method in association therewith

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG2014000889A SG2014000889A (en) 2014-01-03 2014-01-03 A system suitable for one or both of audio processing and graphics processing and a method of processing in association therewith
SG201400088-9 2014-01-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/432,191 Continuation US11410358B2 (en) 2014-01-03 2019-06-05 System suitable for efficient communication of media stream and a method in association therewith

Publications (1)

Publication Number Publication Date
US20150195628A1 true US20150195628A1 (en) 2015-07-09

Family

ID=54196682

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/294,898 Abandoned US20150195628A1 (en) 2014-01-03 2014-06-03 System suitable for efficient communication of media stream and a method in association therewith
US15/109,628 Abandoned US20160335788A1 (en) 2014-01-03 2014-12-24 A system suitable for one or both of audio processing and graphics processing and a method of processing in association therewith
US16/198,368 Active US10991140B2 (en) 2014-01-03 2018-11-21 System suitable for one or both of audio processing and graphics processing and a method of processing in association therewith
US16/432,191 Active US11410358B2 (en) 2014-01-03 2019-06-05 System suitable for efficient communication of media stream and a method in association therewith

Family Applications After (3)

Application Number Title Priority Date Filing Date
US15/109,628 Abandoned US20160335788A1 (en) 2014-01-03 2014-12-24 A system suitable for one or both of audio processing and graphics processing and a method of processing in association therewith
US16/198,368 Active US10991140B2 (en) 2014-01-03 2018-11-21 System suitable for one or both of audio processing and graphics processing and a method of processing in association therewith
US16/432,191 Active US11410358B2 (en) 2014-01-03 2019-06-05 System suitable for efficient communication of media stream and a method in association therewith

Country Status (4)

Country Link
US (4) US20150195628A1 (fr)
SG (1) SG2014000889A (fr)
TW (2) TW201531324A (fr)
WO (1) WO2015102533A1 (fr)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004049607A1 (fr) * 2002-11-12 2004-06-10 Koninklijke Philips Electronics N.V. Distribution de donnees a l'aide d'un reseau de diffusion et sans fil
US20100227685A1 (en) * 2006-02-17 2010-09-09 Konami Digital Entertainment Co., Ltd. Game server device, game service method, information recording medium, and program
US20130172079A1 (en) * 2011-12-28 2013-07-04 Eugene Ivanov Client-Server Gaming
US20130344960A1 (en) * 2007-12-15 2013-12-26 Sony Computer Entertainment America Llc Massive Multi-Player Online (MMO) Games Server and Methods for Executing the Same

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3215306B2 (ja) 1995-09-19 2001-10-02 Namco Ltd. Image synthesis method and apparatus
IL119928A (en) * 1996-12-29 2001-01-11 Univ Ramot Model-based view extrapolation for interactive virtual reality systems
US20020082077A1 (en) * 2000-12-26 2002-06-27 Johnson Douglas R. Interactive video game system with characters that evolve physical and cognitive traits
US7092554B2 (en) * 2001-05-01 2006-08-15 Eastman Kodak Company Method for detecting eye and mouth positions in a digital image
JP4409956B2 (ja) * 2002-03-01 2010-02-03 T5 Labs Limited Centralized interactive graphical application server
US8019121B2 (en) * 2002-07-27 2011-09-13 Sony Computer Entertainment Inc. Method and system for processing intensity from input devices for interfacing with a computer program
DE60222890T2 2002-08-12 2008-02-07 Alcatel Lucent Method and devices for implementing highly interactive entertainment services using media streaming technology, enabling the remote provision of virtual reality services
US8133115B2 (en) 2003-10-22 2012-03-13 Sony Computer Entertainment America Llc System and method for recording and displaying a graphical path in a video game
JP3949703B1 (ja) * 2006-03-29 2007-07-25 Konami Digital Entertainment Co., Ltd. Image generation device, character appearance changing method, and program
US20090054117A1 (en) * 2007-08-20 2009-02-26 James Beser Independently-defined alteration of output from software executable using later-integrated code
JP5116161B2 (ja) 2008-10-17 2013-01-09 Sammy Corporation Image generation device, game machine, and image generation program
US9302182B2 (en) * 2012-05-23 2016-04-05 Side-Kick Ltd Method and apparatus for converting computer games between platforms using different modalities
US8905838B2 (en) * 2012-06-26 2014-12-09 Empire Technology Development Llc Detecting game play-style convergence and changing games
US9129430B2 (en) * 2013-06-25 2015-09-08 Microsoft Technology Licensing, Llc Indicating out-of-view augmented reality images


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10176644B2 (en) * 2015-06-07 2019-01-08 Apple Inc. Automatic rendering of 3D sound
US11423629B2 (en) 2015-06-07 2022-08-23 Apple Inc. Automatic rendering of 3D sound
US11670271B2 (en) * 2016-12-30 2023-06-06 Spotify Ab System and method for providing a video with lyrics overlay for use in a social messaging environment

Also Published As

Publication number Publication date
WO2015102533A1 (fr) 2015-07-09
US20190096110A1 (en) 2019-03-28
TW201531324A (zh) 2015-08-16
US20160335788A1 (en) 2016-11-17
SG2014000889A (en) 2015-08-28
TW201531884A (zh) 2015-08-16
US10991140B2 (en) 2021-04-27
US20190306217A1 (en) 2019-10-03
US11410358B2 (en) 2022-08-09

Similar Documents

Publication Publication Date Title
US11962741B2 (en) Methods and system for generating and displaying 3D videos in a virtual, augmented, or mixed reality environment
US11985360B2 (en) Immersive event production and distribution
US9370718B2 (en) System and method for delivering media over network
US9480907B2 (en) Immersive display with peripheral illusions
JP7048595B2 (ja) ビデオコンテンツの同期の方法および装置
WO2019167632A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
CN103561293A (zh) 用于增强视频的系统和方法
US10859852B2 (en) Real-time video processing for pyramid holographic projections
US11410358B2 (en) System suitable for efficient communication of media stream and a method in association therewith
KR20160096019A (ko) 게임 플레이 동영상에 광고 컨텐츠를 추가 적용하여 제공하는 서비스 방법
CN107533184A (zh) 用于增强式佩珀尔幽灵幻像的三维图像源
US11770252B2 (en) System and method for generating a pepper's ghost artifice in a virtual three-dimensional environment
WO2015102532A1 (fr) 2015-07-09 System suitable for efficient communication of media stream and associated method
US8375311B2 (en) System and method for determining placement of a virtual object according to a real-time performance
JP2016166928A (ja) 演出装置、演出方法、プログラム、ならびにアミューズメントシステム
TWI706292B (zh) 虛擬劇場演播系統
KR20220077014A (ko) 가상현실(vr) 기술 기반의 360도 돔 영상관 상영 방법
CN106997770A (zh) 影音同步控制方法、影音同步控制系统及相关的电子装置
KR101895281B1 (ko) 증강현실 환경에서의 막대형 물체를 캡처하기 위한 장치 및 그 방법
JP2006259818A (ja) 画像処理装置および画像処理方法
KR101486959B1 (ko) 몰입감을 향상시키는 노래방 시스템
US11694230B2 (en) Apparatus, system, and method of providing a three dimensional virtual local presence

Legal Events

Date Code Title Description
AS Assignment

Owner name: CREATIVE TECHNOLOGY LTD, SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, TECK CHEE;NATHAN, DARRAN;CHUNG, SHIN YEE;AND OTHERS;REEL/FRAME:034270/0680

Effective date: 20140721

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION