WO2017148413A1 - Content presentation method, user equipment, and system - Google Patents

Content presentation method, user equipment, and system

Info

Publication number
WO2017148413A1
WO2017148413A1 PCT/CN2017/075437 CN2017075437W WO2017148413A1 WO 2017148413 A1 WO2017148413 A1 WO 2017148413A1 CN 2017075437 W CN2017075437 W CN 2017075437W WO 2017148413 A1 WO2017148413 A1 WO 2017148413A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
application
user equipment
audio
video
Prior art date
Application number
PCT/CN2017/075437
Other languages
English (en)
French (fr)
Inventor
朱钰璋
江雷
王俊宏
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to US15/774,818 priority Critical patent/US11179634B2/en
Publication of WO2017148413A1 publication Critical patent/WO2017148413A1/zh
Priority to US17/500,478 priority patent/US11707676B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44521Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35Details of game servers
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85Providing additional services to players
    • A63F13/86Watching games played by other players
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/60Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
    • H04N5/602Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for digital sound signals
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/48Indexing scheme relating to G06F9/48
    • G06F2209/482Application
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/45Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/68Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • H04N9/69Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction

Definitions

  • the present application relates to the field of three-dimensional (3D) technology, and in particular to a content presentation method, user equipment, and system.
  • interactive applications involving 3D scenes have become very common in the prior art.
  • the 3D application system usually includes a user equipment and a 3D application server, and the user equipment can acquire data of the interactive application from the 3D application server and display the interactive application.
  • the embodiment of the present invention provides a content presentation method that can present video content intuitively in a 3D application without requiring the user to open an additional small window, which not only improves the presentation quality of the video content but also improves the efficiency of communication between the user, the interactive application, and the video content.
  • the embodiment of the present application also provides a corresponding user equipment and system.
  • the first aspect of the present application provides a method for content presentation, where the method is applied to a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server, and the method includes:
  • the user equipment starts the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
  • the user equipment acquires audio and video data from the content providing server according to the content source address, and renders the audio and video data to obtain video content and audio content;
  • the user equipment plays the audio content in the 3D application and presents the video content through the virtual screen.
  • a response unit configured to start the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
  • a receiving unit configured to receive, after the response unit starts the 3D application, a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
  • an acquiring unit configured to acquire audio and video data from the content providing server according to the content source address received by the receiving unit;
  • a rendering unit configured to render the audio and video data acquired by the acquiring unit, to obtain video content and audio content;
  • a playing unit configured to play, in the 3D application, the audio content rendered by the rendering unit;
  • a display unit configured to display, on the virtual screen, the video content rendered by the rendering unit.
  • a third aspect of the present application provides a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server;
  • the user equipment is the user equipment described in the second aspect.
  • Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window that the user must open, the content presentation method provided by the embodiment of the present application can present the video content on the virtual screen of the 3D application, which improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
  • FIG. 1 is a diagram showing an example of a scenario of a 3D application in an embodiment of the present application
  • FIG. 2 is a schematic diagram of an embodiment of a 3D application system in an embodiment of the present application
  • FIG. 3A is a schematic diagram of an embodiment of a method for presenting content in an embodiment of the present application.
  • FIG. 3B is a schematic diagram showing an example of a method for presenting content in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another embodiment of a 3D application system in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another embodiment of a method for presenting content in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an embodiment of cross-process image rendering in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an embodiment of a dirty area update in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an example webpage in the embodiment of the present application.
  • FIG. 9 is a schematic diagram of an example webpage placed on a virtual screen in the embodiment of the present application.
  • FIG. 10 is a schematic diagram of an example webpage placed on a virtual screen after inverse gamma correction in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of an embodiment of cross-process audio rendering in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an embodiment of a user equipment in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application.
  • the embodiment of the present invention provides a content presentation method that can present video content intuitively in a 3D application without requiring the user to open an additional small window, which not only improves the presentation quality of the video content but also improves the efficiency of communication between the user, the interactive application, and the video content.
  • the embodiment of the present application also provides a corresponding user equipment and system. The details are described below separately.
  • the 3D application system in the embodiment of the present application can be understood as a 3D game system.
  • 3D game: a stereoscopic electronic game built on three-dimensional computer graphics, including but not limited to multiplayer online 3D games, single-player 3D games, and virtual reality game systems built on 3D game systems.
  • The term applies universally across platforms: 3D games on game console platforms, mobile game platforms, and PC game platforms are all included.
  • Virtual community: a virtual community environment in a 3D game, that is, a game environment built on three-dimensional computer graphics.
  • the virtual community may include simulated objects corresponding to the players in the game, and the virtual community in the present application includes a virtual screen, which may be a large virtual screen similar to one placed at an outdoor venue.
  • Game anchor: an individual who reports on and commentates games on electronic media such as the Internet.
  • Game live streaming: broadcasting over the Internet while the game is in progress.
  • In the prior art, taking League of Legends (LOL) as an example, the game client usually has a built-in browser through which players in the game client can watch live video in real time, and simple interaction can take place.
  • However, this product does not include the concept of a virtual community, and game players cannot directly sense each other's presence.
  • the prior-art solution is more like watching a match in front of a TV, whereas the solution provided by the present application mainly creates the atmosphere of watching the match at the venue itself.
  • the embodiment of the present application introduces a solution that combines a 3D immersive virtual community with video live-streamed on the Internet.
  • the solution provided by the embodiment of the present application enables the player to watch, within the immersive virtual community, video that is being live-streamed on the Internet.
  • the live video on the virtual screen of the virtual community may be a game anchor's broadcast video or other game video that is being live-streamed.
  • the virtual community or 3D application scenario examples involved in the embodiment of the present application may be understood by referring to FIG. 1 .
  • FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present application.
  • the 3D application system includes a 3D application server, a content providing server, and a plurality of user devices.
  • Each user device can correspond to one player, and each user device is installed with a client of the 3D application.
  • the simulated object in the 3D application can be the player's virtual identity in the 3D application.
  • During startup, the 3D application requests from the 3D application server the address of the content currently being broadcast on the virtual screen, that is, the content source address in the embodiment of the present application.
  • After determining the content source address according to the content currently being broadcast, the 3D application server sends it to the user equipment.
  • After receiving the content source address, the user equipment acquires audio and video data from the content providing server according to the content source address, where the audio and video data is the audio and video data of the content being broadcast.
  • After acquiring the audio and video data of the content being broadcast, the user equipment renders the audio and video data to obtain the corresponding audio content and video content, plays the audio content in the 3D application, and displays the video content on the virtual screen.
  • FIG. 3A is a schematic diagram of an embodiment of a method for presenting content in an embodiment of the present application.
  • the method of presenting live video in a 3D application can be understood by referring to the process of FIG. 3A:
  • the user equipment starts the 3D application in response to a startup instruction of the 3D application.
  • the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content.
  • the user equipment receives the content source address sent by the 3D application server, and sends the content source address to the content providing server.
  • the content source address is an address of content currently being broadcasted by the 3D application server.
  • the user equipment acquires audio and video data from the content providing server according to the content source address.
  • the audio and video data is audio and video data of a game or other video content currently being played on the game server.
  • the user equipment plays the audio content in the 3D application, and displays the video content through the virtual screen.
  • Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window that the user must open, the content presentation method provided by the embodiment of the present application can present the video content on the virtual screen of the 3D application, which improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
  • the audio and video data includes audio data and video data
  • the rendering of the audio and video data to obtain video content and audio content includes:
  • rendering, by the user equipment, the audio data through a webpage process to obtain the audio content, and rendering the video data through the webpage process to obtain the video content;
  • the playing of the audio content in the 3D application and the displaying of the video content on the virtual screen include:
  • playing, by the user equipment, the audio content through a 3D application process, and displaying the video content on the virtual screen through the 3D application process.
  • the audio data and the video data are rendered by the webpage process and then played by the 3D application process, so that when webpage-rendered data is needed, the 3D application process extracts the rendered page through cross-process communication.
  • the 3D application process and the webpage process can be separated, which increases the stability of the 3D application process.
  • the video image data includes a plurality of image frames
  • the rendering of the video data through the webpage process to obtain the video content includes:
  • when the user equipment renders the (N+1)th image frame through the webpage process, it determines the difference content between the (N+1)th image frame and the Nth image frame, and renders only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
  • when an image frame is rendered, repeated content is not rendered again; only the difference content between two consecutive frames is rendered, which reduces GPU bandwidth consumption and improves program execution efficiency and user experience.
  • the method further includes:
  • the user equipment performs an inverse Gamma correction on the texture of the video content
  • the displaying of the video content on the virtual screen through a 3D application process includes:
  • the user equipment displays the inverse-gamma-corrected video content on the virtual screen through a 3D application process.
  • the textures commonly used are already gamma-corrected, and if gamma correction were not performed, incorrect computation results would be produced. Here, the image data submitted from another thread has already been corrected, and feeding it directly into the high-end rendering pipeline will produce a color cast. Therefore, an inverse gamma correction is performed on the texture of the video content.
  • the method further includes:
  • the user equipment introduces the audio content into the coordinate system of the interactive application in which the simulated object is located, and determines the sound intensity of the audio content at different coordinate points;
  • the playing, by the user equipment, of the audio content through a 3D application process includes: playing the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point.
  • The audio playback intensity differs at different coordinate points, so incorporating the audio into the coordinate system produces a stereo effect.
  • the method further includes:
  • the user equipment acquires an interaction content between an anchor of the video content played on the virtual screen and the simulated object;
  • the method further includes:
  • the user equipment displays the interactive content through the virtual screen.
  • the interaction between the anchor and the simulated object can be displayed through the virtual screen, thereby improving the user's interaction with the anchor.
  • the method further includes:
  • the interactive content of the simulated object and other simulated objects is displayed at the location of the simulated object.
  • Players in 3D applications each have their own game-character representation in a relatively realistic leisure zone. Characters can talk to each other and make gestures. Players in the same leisure zone can watch the same content and share the same topics. As shown in FIG. 3B, a simulated object's dialogue with other simulated objects, such as "What would you like to drink?", can be displayed near that simulated object.
  • In a 3D application scenario, it is particularly suitable to play a game anchor's live broadcast of a game on the large screen.
  • Such a scenario involving an anchor can be understood by referring to the schematic diagram of another embodiment of the 3D application system shown in FIG. 4.
  • the 3D application system further includes a user equipment used by the anchor. The anchor live-streams the game through this user equipment, and the live game video stream is uploaded to the content providing server. Because the anchor has registered in advance on the game server, the source address of the content currently being live-streamed is stored in the game server. Players can also interact with the anchor, so during the live stream there may be interaction content between the anchor and the simulated objects; the user equipment can therefore obtain the live video stream and the interaction content from the content providing server.
  • After the user equipment obtains the live video stream and the interaction content, the audio and video are rendered to obtain the corresponding audio content and video content, the audio content is played in the 3D application, and the video content and the interaction content are displayed on the virtual screen.
  • FIG. 5 is a schematic diagram of another embodiment of a method for presenting content in an embodiment of the present application.
  • An anchor broadcasts a video on the Internet, and the anchor submits the live video stream to the content providing server through the user equipment.
  • the program starts to initialize the 3D rendering engine within the 3D application.
  • the program starts to automatically request the address of the anchor video source currently being broadcasted.
  • the 3D application server sends a video source address to the audio and video rendering module.
  • the audio and video rendering module requests a data stream of the live video from the content providing server.
  • the live video server returns a video data stream to the audio and video rendering module.
  • the audio and video rendering module uses the audio and video data to render the audio.
  • the audio and video rendering module uses the audio and video data to render a single video frame image.
  • the rendering engine takes the rendered still-frame data, renders it into the 3D world, plays the audio content, and presents the video image to the user.
  • The image rendering process of step 209 can be understood as follows:
  • Because live video on the Internet usually uses the Flash technology framework, and the Flash plug-in used carries considerable uncertainty, another independent process is used in this application to render web pages, in order to prevent the Flash plug-in from affecting the stability of the game. The game is only responsible for extracting the page to be rendered through cross-process communication when the webpage-rendered data is needed. Such processing separates the game process from the webpage rendering process, which increases the stability of the game process.
  • the process of cross-process image rendering can be understood by referring to FIG. 6: it is divided into a game rendering process and a webpage rendering process, and the rendering flow of the two processes may include the following steps:
  • Step 20915: check whether there is an Internet video stream to render; if yes, perform step 20923; if no, perform step 20917.
  • Checking for dirty pages means checking whether there is a content difference between two image frames; if there is, the differing content is the dirty page.
  • the process of dirty area update can also be understood by referring to FIG. 7.
  • the dirty region of the webpage image cached in the central processing unit (CPU) is transferred, through the update process, into the dirty region of the video-memory texture in the graphics processor.
  • the above method for updating dirty regions can reduce GPU bandwidth consumption, improve program execution efficiency, and improve user experience.
  • Step 211 will be described in detail below.
  • Modern graphics engines often have multiple rendering pipelines to handle different image quality requirements.
  • the textures commonly used are already gamma-corrected, as shown in FIG. 8; if gamma correction were not performed, incorrect computation results would be produced.
  • the image data that the game process receives from the separate webpage process has already been gamma-corrected, and feeding it directly into the high-end rendering pipeline produces a color cast: as shown in FIG. 9, the picture in FIG. 9 is noticeably darker than the picture in FIG. 8, that is, a color cast has occurred.
  • What the browser component renders is a texture that has already been gamma-corrected; an inverse gamma correction back to linear color space must be performed first, which yields the image without a color cast shown in FIG. 10.
  • Through the above steps, the presentation of the video image in the 3D application is complete. The output and processing of the audio information is described next, focusing on step 207.
  • Referring to FIG. 11, the stereo audio rendering process for cross-process audio in the embodiment of the present application is described:
  • Step 20714: check whether there is a webpage sound requirement; if yes, perform step 20723; if no, perform step 20716.
  • This application provides a set of generic sound interfaces suitable for common systems such as Windows XP and Windows 7. Sound can be exported as a data stream through this interface, and the obtained data stream can be placed into the 3D coordinate system within the 3D game, so that the sound heard by the player's simulated object differs in intensity depending on where in the game it stands. Through the above method, a stereo effect is achieved.
  • The application implements playing an Internet video stream within a 3D game and combines game entertainment with Internet video, so that the player can watch favorite video programs while playing the game; it removes the annoyance of frequently switching between a traditional browser and the game.
  • the user equipment 30 provided by the embodiment of the present application is applied to a 3D application system, where the 3D application system further includes a 3D application server and a content providing server, where the user equipment 30 includes:
  • the response unit 301 is configured to start the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
  • the receiving unit 302 is configured to receive, after the response unit 301 starts the 3D application, a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
  • the obtaining unit 303 is configured to acquire audio and video data from the content providing server according to the content source address received by the receiving unit 302;
  • a rendering unit 304 configured to render the audio and video data acquired by the acquiring unit 303, to obtain video content and audio content;
  • the playing unit 305 is configured to play the audio content rendered by the rendering unit 304 in the 3D application;
  • the display unit 306 is configured to display the video content rendered by the rendering unit 304 through the virtual screen.
  • the response unit 301 starts the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content; after the response unit 301 starts the 3D application, the receiving unit 302 receives the content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting; the acquiring unit 303 acquires audio and video data from the content providing server according to the content source address received by the receiving unit 302; the rendering unit 304 renders the acquired audio and video data to obtain video content and audio content; the playing unit 305 plays the rendered audio content in the 3D application; and the display unit 306 displays the rendered video content on the virtual screen.
  • the rendering unit 304 is configured to: when the audio and video data includes audio data and video data, render the audio data through a webpage process to obtain the audio content, and render the video data through the webpage process to obtain the video content;
  • the playing unit 305 is configured to play the audio content through a 3D application process
  • the display unit 306 is configured to display the video content on the virtual screen by using a 3D application process.
  • the rendering unit 304 is configured to: when the video image data includes multiple image frames and the (N+1)th image frame is rendered through the webpage process, determine the difference content between the (N+1)th image frame and the Nth image frame, and render only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
  • the user equipment further includes a correction unit 307.
  • the correction unit 307 is configured to perform an inverse gamma correction on the texture of the video content before the display unit 306 displays the video content.
  • the display unit 306 is configured to display, on the virtual screen through a 3D application process, the video content inverse-gamma-corrected by the correction unit 307.
  • On the basis of any one of the first to third optional embodiments of the user equipment above, in a fourth optional embodiment of the user equipment provided by the embodiment of the present application, the user equipment further includes a determining unit 308.
  • the determining unit 308 is configured to introduce the audio content into the coordinate system of the interactive application in which the simulated object is located, and determine the sound intensity of the audio content at different coordinate points;
  • the playing unit 305 is configured to play the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point as determined by the determining unit 308.
  • On the basis of the embodiment corresponding to FIG. 12 above or any one of the first to third optional embodiments of the user equipment, in a fifth optional embodiment of the user equipment provided by the embodiment of the present application:
  • the obtaining unit 303 is further configured to acquire an interaction content between an anchor of the video content played on the virtual screen and the simulated object;
  • the display unit 306 is further configured to display the interactive content by using the virtual screen.
  • On the basis of the embodiment corresponding to FIG. 12 above or any one of the first to third optional embodiments of the user equipment, in a sixth optional embodiment of the user equipment provided by the embodiment of the present application:
  • the display unit 306 is further configured to display, at the position of the simulated object, the interaction content between the simulated object and other simulated objects.
  • the above user equipment 30 can be understood by referring to the related description in the parts of FIG. 1 to FIG. 11 , and no further description is made herein.
  • FIG. 15 is a schematic structural diagram of a user equipment 30 according to an embodiment of the present application.
  • the user equipment 30 is applied to a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server. The user equipment 30 includes a central processing unit (CPU) 3101, a graphics processing unit (GPU) 3102, a transceiver 340, a memory 350, and an input/output (I/O) device 330. The input/output (I/O) device 330 may be a keyboard or a mouse, and the graphics processor 3102 is used for graphics rendering.
  • the memory 350 can include read-only memory and random access memory, and provides operation instructions and data to the processor 310.
  • a portion of the memory 350 may also include non-volatile random access memory (NVRAM).
  • the memory 350 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
  • In the embodiment of the present application, the operation instructions stored in the memory 350 are invoked (the operation instructions can be stored in an operating system):
  • the input/output device 330 is configured to receive a startup instruction of a 3D application
  • the central processing unit 3101 is configured to start the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
  • the transceiver 340 is configured to receive a content source address sent by the 3D application server, where the content source address is an address of a content that the 3D application server is currently broadcasting;
  • the central processing unit 3101 is configured to acquire audio and video data from the content providing server according to the content source address;
  • the graphics processor 3102 is configured to render the audio and video data to obtain video content and audio content;
  • the input/output device 330 is configured to play the audio content in the 3D application and display the video content through the virtual screen.
  • Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window, the user equipment provided by the embodiment of the present application can present the video content on the virtual screen of the 3D application without requiring the user to open an additional small window, which not only improves the presentation quality of the video content but also improves the efficiency of communication between the user, the interactive application, and the video content.
  • the central processing unit 3101 controls the operation of the user equipment 30.
  • the memory 350 can include read-only memory and random access memory, and provides instructions and data to the central processing unit 3101. A portion of the memory 350 may also include non-volatile random access memory (NVRAM).
  • the various components of the user equipment 30 are coupled together by a bus system 320.
  • the bus system 320 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 320 in the figure.
  • Processor 310 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 310 or an instruction in a form of software.
  • the processor 310 described above may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 350, and the processor 310 reads the information in the memory 350 and performs the steps of the above method in combination with its hardware.
  • the graphics processor 3102 is configured to render the audio data by using a webpage process to obtain the audio content, and the video data is rendered by the webpage process to obtain the video content.
  • the input/output device 330 is configured to play the audio content through a 3D application process and display the video content on the virtual screen through a 3D application process.
  • the graphics processor 3102 is configured to: when rendering the (N+1)th image frame through the webpage process, determine the difference content between the (N+1)th image frame and the Nth image frame, and render only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
  • the central processing unit 3101 is configured to perform an inverse Gamma correction on the texture of the video content.
  • the input/output device 330 is configured to display the inverse-gamma-corrected video content on the virtual screen through a 3D application process.
  • the central processing unit 3101 is configured to introduce the audio content into a coordinate system of an interactive application where the simulation object is located, and determine a sound intensity of the audio content at different coordinate points;
  • the input/output device 330 is configured to play the audio content at the different coordinate points in accordance with the sound intensity corresponding to the coordinate point.
  • the central processing unit 3101 is configured to acquire an interaction content between an anchor of the video content played on the virtual screen and the simulated object;
  • An input/output device 330 is configured to present the interactive content through the virtual screen.
  • the input/output device 330 is configured to display the interactive content of the simulated object and other simulated objects at the location of the simulated object.
  • the above user equipment 30 can be understood by referring to the related description in the parts of FIG. 1 to FIG. 11 , and no further description is made herein.
  • All or some of the steps of the methods in the above embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application discloses a content presentation method, including: a user equipment starts a 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content; receives a content source address sent by a 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting; acquires audio and video data from a content providing server according to the content source address, and renders the audio and video data to obtain video content and audio content; and plays the audio content in the 3D application while presenting the video content on the virtual screen. The content presentation method provided by the embodiments of the present application can present video content intuitively within a 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.

Description

Content presentation method, user equipment, and system
This application claims priority to Chinese Patent Application No. 201610120288.5, filed with the Chinese Patent Office on March 3, 2016 and entitled "Content presentation method, user equipment, and system", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of three-dimensional (3D) technology, and in particular to a content presentation method, user equipment, and system.
Background
Interactive applications involving 3D scenes are already very common in the prior art. A 3D application system usually includes a user equipment and a 3D application server; the user equipment can acquire data of the interactive application from the 3D application server and display the interactive application.
In the prior art, when a user equipment displays a 3D interactive application, it can only switch videos if another application or video is to be displayed, which is a very cumbersome operation. Some interactive applications have a built-in browser that lets the user open a small window in a corner when the 3D interactive application is started, so that the user can watch other content in the small window while watching the 3D interactive application. However, this kind of presentation is only a flat presentation in a small window, and the visual effect is poor.
Summary
The embodiments of the present application provide a content presentation method that can present video content intuitively within a 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content. The embodiments of the present application also provide a corresponding user equipment and system.
A first aspect of the present application provides a content presentation method, applied to a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server, and the method includes:
starting, by the user equipment in response to a startup instruction for a 3D application, the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
receiving, by the user equipment, a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
acquiring, by the user equipment, audio and video data from the content providing server according to the content source address, and rendering the audio and video data to obtain video content and audio content; and
playing, by the user equipment, the audio content in the 3D application, and presenting the video content on the virtual screen.
A second aspect of the present application provides a user equipment, applied to a 3D application system, where the 3D application system further includes a 3D application server and a content providing server, and the user equipment includes:
a response unit configured to start a 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
a receiving unit configured to receive, after the response unit starts the 3D application, a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
an acquiring unit configured to acquire audio and video data from the content providing server according to the content source address received by the receiving unit;
a rendering unit configured to render the audio and video data acquired by the acquiring unit, to obtain video content and audio content;
a playing unit configured to play, in the 3D application, the audio content rendered by the rendering unit; and
a display unit configured to present, on the virtual screen, the video content rendered by the rendering unit.
A third aspect of the present application provides a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server; and
the user equipment is the user equipment described in the second aspect above.
Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window, the content presentation method provided by the embodiments of the present application can present video content on the virtual screen of the 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person skilled in the art may still derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is a diagram of an example scenario of a 3D application in an embodiment of the present application;
FIG. 2 is a schematic diagram of an embodiment of a 3D application system in an embodiment of the present application;
FIG. 3A is a schematic diagram of an embodiment of a content presentation method in an embodiment of the present application;
FIG. 3B is a schematic diagram of an example of a content presentation method in an embodiment of the present application;
FIG. 4 is a schematic diagram of another embodiment of a 3D application system in an embodiment of the present application;
FIG. 5 is a schematic diagram of another embodiment of a content presentation method in an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of cross-process image rendering in an embodiment of the present application;
FIG. 7 is a schematic diagram of an embodiment of a dirty-region update in an embodiment of the present application;
FIG. 8 is a schematic diagram of an example webpage in an embodiment of the present application;
FIG. 9 is a schematic diagram of an example webpage placed on a virtual screen in an embodiment of the present application;
FIG. 10 is a schematic diagram of an example webpage placed on a virtual screen after inverse gamma correction in an embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of cross-process audio rendering in an embodiment of the present application;
FIG. 12 is a schematic diagram of an embodiment of a user equipment in an embodiment of the present application;
FIG. 13 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application;
FIG. 14 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application;
FIG. 15 is a schematic diagram of another embodiment of a user equipment in an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a content presentation method that can present video content intuitively within a 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content. The embodiments of the present application also provide a corresponding user equipment and system. Detailed descriptions are given below.
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
For ease of understanding, the terms involved in the present application are briefly introduced first:
The 3D application system in the embodiments of the present application can be understood as a 3D game system.
3D game: a stereoscopic electronic game built on three-dimensional computer graphics, including but not limited to multiplayer online 3D games, single-player 3D games, and virtual reality game systems built on 3D game systems. The term is platform-agnostic: 3D games on game console platforms, mobile game platforms, and personal computer game platforms are all included.
Virtual community: a virtual community environment in a 3D game, that is, a game environment built on three-dimensional computer graphics. The virtual community may include simulated objects corresponding to the players in the game, and the virtual community in the present application includes a virtual screen, which may be a large virtual screen similar to one placed at an outdoor venue.
Game anchor: an individual who reports on and commentates games on electronic media such as the Internet.
Game live streaming: broadcasting over the Internet while a game is in progress.
In the prior art, taking League of Legends (LOL) as an example, the game client usually has a built-in browser through which players in the game client can watch live video in real time, and simple interaction can take place. However, this product does not include the concept of a virtual community, and game players cannot directly sense each other's presence. The prior-art solution is more like watching a match in front of a TV, whereas the solution provided by the present application mainly creates the atmosphere of watching the match at the venue itself.
The embodiments of the present application introduce a solution that combines a 3D immersive virtual community with video live-streamed on the Internet. The solution provided by the embodiments of the present application enables players to watch, within the immersive virtual community, video that is currently being live-streamed on the Internet.
The live video on the virtual screen of the virtual community may be a game anchor's broadcast video or other game video that is being live-streamed. The virtual community or 3D application scenario involved in the embodiments of the present application can be understood with reference to the example in FIG. 1.
FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present application.
As shown in FIG. 2, the 3D application system includes a 3D application server, a content providing server, and multiple user equipments. Each user equipment may correspond to one player, and a client of the 3D application is installed on each user equipment.
A player clicks the client of the 3D application on the user equipment, and the user equipment starts the 3D application in response to the startup instruction for the 3D application. The 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content, and the simulated object in the 3D application may be the player's virtual identity in the 3D application.
During startup, the 3D application requests from the 3D application server the address of the content currently being broadcast on the virtual screen, that is, the content source address in the embodiments of the present application. After determining the content source address according to the content currently being broadcast, the 3D application server sends it to the user equipment.
After receiving the content source address, the user equipment acquires audio and video data from the content providing server according to the content source address; the audio and video data is the audio and video data of the content being broadcast.
After acquiring the audio and video data of the content being broadcast, the user equipment renders the audio and video data to obtain the corresponding audio content and video content, plays the audio content in the 3D application, and presents the video content on the virtual screen.
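For illustration only (this sketch is not part of the original disclosure), the exchange just described can be outlined in C++ as follows. The function names, the example address, and the placeholder payload are all hypothetical stand-ins; any real HTTP/RTMP client and decoder would take their place.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-ins for the real transports; any HTTP/RTMP client would do.
std::string request_content_source_address(const std::string& app_server) {
    // The 3D application server replies with the address of the content it is
    // currently broadcasting on the virtual screen (the "content source address").
    return "rtmp://content-provider.example/live/current";
}

std::vector<std::uint8_t> fetch_av_stream(const std::string& source_address) {
    // The content providing server returns the audio/video data of the
    // content being broadcast; a fixed placeholder payload is used here.
    return std::vector<std::uint8_t>(1024, 0);
}

int main() {
    // During 3D application startup: obtain the content source address.
    const std::string addr = request_content_source_address("app-server.example");

    // Acquire the audio/video data of the content being broadcast.
    const std::vector<std::uint8_t> av_data = fetch_av_stream(addr);

    // Rendering into audio content and video frames, playing the audio in the
    // 3D application, and presenting frames on the virtual screen is elided.
    std::cout << "fetched " << av_data.size() << " bytes from " << addr << "\n";
}
```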
FIG. 3A is a schematic diagram of an embodiment of a content presentation method in an embodiment of the present application.
The method for presenting live video in a 3D application can be understood with reference to the process of FIG. 3A:
101. The user equipment starts the 3D application in response to a startup instruction for the 3D application.
The 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content.
102. The user equipment receives the content source address sent by the 3D application server, and sends the content source address to the content providing server.
The content source address is the address of the content that the 3D application server is currently broadcasting.
103. The user equipment acquires audio and video data from the content providing server according to the content source address.
The audio and video data is the audio and video data of the game or other video content currently being played on the game server.
104. The audio and video data is rendered to obtain video content and audio content.
105. The user equipment plays the audio content in the 3D application, and presents the video content on the virtual screen.
Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window, the content presentation method provided by this embodiment of the present application can present video content on the virtual screen of the 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
Optionally, the audio and video data includes audio data and video data, and the rendering of the audio and video data to obtain video content and audio content includes:
rendering, by the user equipment, the audio data through a webpage process to obtain the audio content, and rendering the video data through the webpage process to obtain the video content; and
the playing, by the user equipment, of the audio content in the 3D application and the presenting of the video content on the virtual screen include:
playing, by the user equipment, the audio content through a 3D application process, and presenting the video content on the virtual screen through the 3D application process.
In this embodiment of the present application, the audio data and the video data are rendered by the webpage process and then played by the 3D application process, so that when webpage-rendered data is needed, the 3D application process extracts the page rendered by the webpage process through cross-process communication. Such processing separates the 3D application process from the webpage process, which increases the stability of the 3D application process.
Further, the video image data includes multiple image frames, and the rendering of the video data through the webpage process to obtain the video content includes:
when rendering the (N+1)th image frame through the webpage process, determining, by the user equipment, the difference content between the (N+1)th image frame and the Nth image frame, and rendering only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
In this embodiment of the present application, repeated content is not rendered again during image-frame rendering; only the difference content between two consecutive frames is rendered, which reduces GPU bandwidth consumption and improves program execution efficiency and user experience.
Further, before the video content is presented on the virtual screen through the 3D application process, the method further includes:
performing, by the user equipment, an inverse gamma correction on the texture of the video content; and
the presenting of the video content on the virtual screen through the 3D application process includes:
presenting, by the user equipment, the inverse-gamma-corrected video content on the virtual screen through the 3D application process.
In this embodiment of the present application, the textures commonly used are already gamma-corrected, and if gamma correction were not performed, incorrect computation results would be produced. Here, the image data submitted from the other thread has already been corrected, and feeding it directly into the high-end rendering pipeline would produce a color cast. Therefore, an inverse gamma correction is performed on the texture of the video content.
Further, before the user equipment plays the audio content through the 3D application process, the method further includes:
introducing, by the user equipment, the audio content into the coordinate system of the interactive application in which the simulated object is located, and determining the sound intensity of the audio content at different coordinate points; and
the playing, by the user equipment, of the audio content through the 3D application process includes:
playing, by the user equipment, the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point.
The audio playback intensity differs at different coordinate points, so incorporating the audio into the coordinate system can produce a stereo effect.
Further, when the user equipment acquires the audio and video data from the content providing server according to the content source address, the method further includes:
acquiring, by the user equipment, the interaction content between the anchor of the video content played on the virtual screen and the simulated object; and
when the video content is presented on the virtual screen, the method further includes:
presenting, by the user equipment, the interaction content on the virtual screen.
The interaction content between the anchor and the simulated object can be presented on the virtual screen, which improves the interactivity between the user and the anchor.
Further, the method further includes:
displaying, at the position of the simulated object, the interaction content between the simulated object and other simulated objects.
Players in the 3D application each have their own game-character representation in a relatively realistic leisure zone. The characters can talk to each other and make gestures. Players in the same leisure zone can watch the same content and share the same topics of conversation. As shown in FIG. 3B, a simulated object's dialogue with other simulated objects, such as "What would you like to drink?", can be displayed near that simulated object.
In a 3D application scenario, it is particularly suitable to play a game anchor's live broadcast of a game on the large screen. Such a scenario involving an anchor can be understood with reference to the schematic diagram of another embodiment of the 3D application system shown in FIG. 4.
As shown in FIG. 4, the 3D application system further includes a user equipment used by an anchor. The anchor live-streams the game through this user equipment, and the live game video stream is uploaded to the content providing server. The anchor has registered in advance on the game server, so the source address of the content currently being live-streamed is stored in the game server. Players can also interact with the anchor, so during the live stream there may also be interaction content between the anchor and the simulated objects; the user equipment can therefore obtain the live video stream and the interaction content from the content providing server.
After obtaining the live video stream and the interaction content, the user equipment renders the audio and video to obtain the corresponding audio content and video content, plays the audio content in the 3D application, and presents the video content and the interaction content on the virtual screen.
The content presentation method in a scenario involving an anchor's live stream is described below with reference to FIG. 5.
FIG. 5 is a schematic diagram of another embodiment of a content presentation method in an embodiment of the present application.
201. An anchor is live-streaming video on the Internet, and the anchor submits the live video stream to the content providing server through a user equipment.
202. After the user opens the 3D application, the program starts to initialize the 3D rendering engine within the 3D application.
203. The program automatically requests the source address of the anchor video currently being live-streamed.
204. The 3D application server delivers the video source address to the audio/video rendering module.
205. The audio/video rendering module requests the data stream of the live video from the content providing server.
206. The live video server returns the video data stream to the audio/video rendering module.
207. The audio/video rendering module uses the audio/video data to render the audio.
208. The audio data is submitted to the audio engine within the 3D application.
209. The audio/video rendering module uses the audio/video data to render a single video frame image.
210. The image data is submitted to the 3D rendering engine within the 3D application.
211. The rendering engine takes the rendered still-frame data, renders it into the 3D world, plays the audio content, and presents the video image to the user.
Regarding the image rendering process of step 209, it should be understood that:
Because video live-streamed on the Internet usually uses the Flash technology framework, and the Flash plug-in used carries considerable uncertainty, another independent process is used in the present application to render the webpage in order to prevent the Flash plug-in from affecting game stability. The game is only responsible for extracting the page to be rendered through cross-process communication when the webpage-rendered data is needed. Such processing separates the game process from the webpage rendering process, which increases the stability of the game process.
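A minimal sketch of this process separation follows. Real cross-process communication would use OS-level IPC such as shared memory plus events or pipes; here two threads stand in for the two processes purely so the example is self-contained and runnable, and every identifier is a hypothetical stand-in rather than an API from the disclosure.

```cpp
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for the cross-process handshake: the game side posts a render
// request, the page side produces pixels, the game side extracts them.
struct PageChannel {
    std::mutex m;
    std::condition_variable cv;
    bool frame_requested = false;
    bool frame_ready = false;
    std::vector<std::uint8_t> rgba; // last page image produced by the "web process"
};

void web_render_process(PageChannel& ch) {
    std::unique_lock<std::mutex> lk(ch.m);
    ch.cv.wait(lk, [&] { return ch.frame_requested; }); // wait for game's message
    ch.rgba.assign(640 * 360 * 4, 0x80);                 // render page (stub: flat gray)
    ch.frame_requested = false;
    ch.frame_ready = true;
    ch.cv.notify_all();
}

int main() {
    PageChannel ch;
    std::thread web(web_render_process, std::ref(ch));

    // Game-loop side: only when webpage pixels are needed does the game post a
    // request and pull the result. In the real multi-process version, a crash
    // of the page renderer would not take the game process down with it.
    {
        std::lock_guard<std::mutex> lk(ch.m);
        ch.frame_requested = true;
    }
    ch.cv.notify_all();

    std::unique_lock<std::mutex> lk(ch.m);
    ch.cv.wait(lk, [&] { return ch.frame_ready; });
    std::cout << "page frame extracted: " << ch.rgba.size() << " bytes\n";

    lk.unlock();
    web.join();
}
```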
The cross-process image rendering process can be understood with reference to FIG. 6. It is divided into two processes, a game rendering process and a webpage rendering process, and the rendering flow of the two processes may include the following steps:
20911. Game rendering process.
20921. Webpage rendering process.
20922. Rendering initialization.
20912. Enter the game main loop.
20913. Rendering starts.
20914. Game content rendering.
20915. Check whether there is an Internet video stream to render; if yes, perform step 20923; if no, perform step 20917.
20923. Wait for the game process's rendering message.
20924. Check whether a dirty page currently exists; if yes, perform steps 20925 and 20926; if no, perform step 20927.
Checking for dirty pages means checking whether there is a content difference between two image frames; if there is, the differing content is the dirty page.
20925. Update the dirty page.
20926. Format the updated page into a format usable by the game.
The dirty-region update process can also be understood with reference to FIG. 7: the dirty region of the webpage image cached in the central processing unit (CPU) is transferred, through the update process, into the dirty region of the video-memory texture in the graphics processor.
20927. If no dirty page exists, return directly.
20928. Complete the rendering of the webpage.
20916. Merge the game content and the webpage content.
20917. Complete rendering.
The game main loop can then be entered again, repeating the above cycle.
The above dirty-region update method can reduce GPU bandwidth consumption, improve program execution efficiency, and improve user experience.
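One possible shape of such a dirty-region pass is sketched below, assuming 32-bit RGBA frames of equal size. It computes the bounding rectangle of the pixels that changed between frame N and frame N+1, so that only that sub-rectangle needs to be uploaded to the GPU texture (for example via glTexSubImage2D); the data layout and the upload call are assumptions, since the disclosure does not specify them.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

struct Rect { int x0, y0, x1, y1; };

// Returns the bounding rectangle of all pixels that changed between two
// frames, or nothing if the frames are identical (step 20927: no dirty page).
std::optional<Rect> dirty_bounds(const std::vector<std::uint32_t>& prev,
                                 const std::vector<std::uint32_t>& next,
                                 int width, int height) {
    Rect r{width, height, -1, -1};
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (prev[y * width + x] != next[y * width + x]) {
                r.x0 = std::min(r.x0, x); r.y0 = std::min(r.y0, y);
                r.x1 = std::max(r.x1, x); r.y1 = std::max(r.y1, y);
            }
    if (r.x1 < 0) return std::nullopt;
    return r;
}

int main() {
    const int w = 64, h = 64;
    std::vector<std::uint32_t> a(w * h, 0xFF000000), b = a;
    b[10 * w + 20] = 0xFFFFFFFF; // one changed pixel in frame N+1

    if (auto r = dirty_bounds(a, b, w, h)) {
        // Only this sub-rectangle would be uploaded to the GPU texture,
        // instead of re-uploading the whole frame every time.
        std::cout << "dirty rect: (" << r->x0 << "," << r->y0 << ")-("
                  << r->x1 << "," << r->y1 << ")\n";
    }
}
```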
Through the above steps, the image data of the Internet video rendering can already be obtained, but at this point the data cannot be used directly in the final rendering. Step 211 is described in detail below.
Modern graphics engines often have multiple rendering pipelines to handle different image-quality requirements.
For example, in a high-end rendering pipeline, the textures commonly used are already gamma-corrected, as shown in FIG. 8; if gamma correction were not applied, incorrect computation results would be produced. Here, the image data that the game process receives from the separate webpage process has already been gamma-corrected, and feeding it directly into the high-end rendering pipeline produces a color cast: as shown in FIG. 9, the picture in FIG. 9 is noticeably darker than the picture in FIG. 8, that is, a color cast has occurred.
What the browser component renders is a texture that has already been gamma-corrected, so an inverse gamma correction back to linear color space must be performed first; this yields the picture without a color cast shown in FIG. 10.
As for compatibility across high-end, mid-range, and low-end machines: because the live-broadcast screen does not need to receive lighting, only one simple inverse gamma correction is needed, and the rendering cost is low, so high-end, mid-range, and low-end machines can all use the above processing flow.
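For concreteness, a tiny sketch of the inverse correction follows. The exact transfer curve is not stated in the disclosure; a pure power curve with gamma = 2.2 is assumed here (the piecewise sRGB function is a common alternative).

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>

// Decode a gamma-encoded 8-bit channel back to linear light. The pure 2.2
// power curve is an assumption standing in for whatever curve the engine uses.
float inverse_gamma(std::uint8_t encoded, float gamma = 2.2f) {
    float c = encoded / 255.0f;
    return std::pow(c, gamma); // linear value in [0, 1]
}

int main() {
    // Mid-gray in a gamma-corrected texture (~128) is much darker in linear
    // space, which is why skipping this step produces the color cast of FIG. 9.
    std::cout << "encoded 128 -> linear " << inverse_gamma(128) << "\n"; // ~0.22
}
```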
Through the above steps, the presentation of the video image in the 3D application is complete. The output and processing of the audio information is described next, focusing on step 207.
Referring to FIG. 11, the stereo audio rendering process for cross-process audio in this embodiment of the present application is described:
20711. Game rendering process.
20721. Webpage rendering process.
20712. Game main loop.
20722. Webpage sound interface initialization.
20713. In-game sound content preparation.
20714. Check whether there is a webpage sound requirement; if yes, perform step 20723; if no, perform step 20716.
20723. Wait for the game process to read the sound data.
20724. Extract the current sound data stream.
20725. Convert the data stream format into generic audio data.
20715. Place the webpage audio source into the game's 3D coordinate system.
20716. Sound synthesis.
The game main loop can then be entered again, repeating the above cycle.
The game ends.
Through analysis of the webpage rendering engine's sound interface, the present application provides a set of generic sound interfaces suitable for common systems such as Windows XP and Windows 7. Sound can be exported as a data stream through this interface, and the obtained data stream can be placed into the 3D coordinate system within the 3D game, so that the sound heard by the player's simulated object differs in intensity depending on where in the game it stands. Through the above method, a stereo effect is achieved.
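The sketch below illustrates one way the per-position sound intensity could be derived once the webpage audio source has been placed into the game's 3D coordinate system. The inverse-distance falloff is an assumption; the disclosure only requires that the perceived intensity differ with the simulated object's position.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

// Simple inverse-distance attenuation: the farther the player's simulated
// object stands from the virtual screen's sound source, the weaker the sound.
float gain_at(const Vec3& listener, const Vec3& source, float ref_dist = 1.0f) {
    float dx = listener.x - source.x, dy = listener.y - source.y,
          dz = listener.z - source.z;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    return ref_dist / std::max(d, ref_dist); // clamp so gain <= 1
}

int main() {
    Vec3 screen{0, 5, 0}; // audio source placed at the virtual screen
    std::cout << "near: " << gain_at({0, 4, 0}, screen)   // ~1.0
              << "  far: " << gain_at({30, 5, 0}, screen) // ~0.03
              << "\n";
}
```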
The present application implements the function of playing an Internet video stream within a 3D game and combines game entertainment with Internet video, so that players can watch their favorite video programs while playing the game. It removes the annoyance of players frequently switching between a traditional browser and the game, and provides the intuitive experience of watching a match together, greatly enhancing the player's game experience.
Referring to FIG. 12, the user equipment 30 provided by this embodiment of the present application is applied to a 3D application system, where the 3D application system further includes a 3D application server and a content providing server, and the user equipment 30 includes:
a response unit 301 configured to start a 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
a receiving unit 302 configured to receive, after the response unit 301 starts the 3D application, a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
an acquiring unit 303 configured to acquire audio and video data from the content providing server according to the content source address received by the receiving unit 302;
a rendering unit 304 configured to render the audio and video data acquired by the acquiring unit 303, to obtain video content and audio content;
a playing unit 305 configured to play, in the 3D application, the audio content rendered by the rendering unit 304; and
a display unit 306 configured to present, on the virtual screen, the video content rendered by the rendering unit 304.
In this embodiment of the present application, the response unit 301 starts the 3D application in response to a startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content; after the response unit 301 starts the 3D application, the receiving unit 302 receives the content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting; the acquiring unit 303 acquires audio and video data from the content providing server according to the content source address received by the receiving unit 302; the rendering unit 304 renders the audio and video data acquired by the acquiring unit 303 to obtain video content and audio content; the playing unit 305 plays, in the 3D application, the audio content rendered by the rendering unit 304; and the display unit 306 presents, on the virtual screen, the video content rendered by the rendering unit 304. Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window, the user equipment provided by this embodiment of the present application can present video content on the virtual screen of the 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
Optionally, on the basis of the embodiment corresponding to FIG. 12 above, in a first optional embodiment of the user equipment provided by this embodiment of the present application,
the rendering unit 304 is configured to: when the audio and video data includes audio data and video data, render the audio data through a webpage process to obtain the audio content, and render the video data through the webpage process to obtain the video content;
the playing unit 305 is configured to play the audio content through a 3D application process; and
the display unit 306 is configured to present the video content on the virtual screen through the 3D application process.
Optionally, on the basis of the first optional embodiment of the user equipment above, in a second optional embodiment of the user equipment provided by this embodiment of the present application,
the rendering unit 304 is configured to: when the video image data includes multiple image frames and the (N+1)th image frame is rendered through the webpage process, determine the difference content between the (N+1)th image frame and the Nth image frame, and render only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
Optionally, on the basis of the first optional embodiment of the user equipment above, referring to FIG. 13, in a third optional embodiment of the user equipment provided by this embodiment of the present application, the user equipment further includes a correction unit,
the correction unit 307 is configured to perform an inverse gamma correction on the texture of the video content before the display unit 306 presents the video content; and
the display unit 306 is configured to present, on the virtual screen through the 3D application process, the video content inverse-gamma-corrected by the correction unit 307.
Optionally, on the basis of any one of the first to third optional embodiments of the user equipment above, referring to FIG. 14, in a fourth optional embodiment of the user equipment provided by this embodiment of the present application, the user equipment further includes a determining unit 308,
the determining unit 308 is configured to introduce the audio content into the coordinate system of the interactive application in which the simulated object is located, and determine the sound intensity of the audio content at different coordinate points; and
the playing unit 305 is configured to play the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point as determined by the determining unit 308.
Optionally, on the basis of the embodiment corresponding to FIG. 12 above or any one of the first to third optional embodiments of the user equipment, in a fifth optional embodiment of the user equipment provided by this embodiment of the present application,
the acquiring unit 303 is further configured to acquire the interaction content between the anchor of the video content played on the virtual screen and the simulated object; and
the display unit 306 is further configured to present the interaction content on the virtual screen.
Optionally, on the basis of the embodiment corresponding to FIG. 12 above or any one of the first to third optional embodiments of the user equipment, in a sixth optional embodiment of the user equipment provided by this embodiment of the present application,
the display unit 306 is further configured to display, at the position of the simulated object, the interaction content between the simulated object and other simulated objects.
The user equipment 30 above can be understood with reference to the related descriptions in FIG. 1 to FIG. 11, and details are not repeated here.
FIG. 15 is a schematic structural diagram of a user equipment 30 according to an embodiment of the present application. The user equipment 30 is applied to a 3D application system, where the 3D application system includes a user equipment, a 3D application server, and a content providing server. The user equipment 30 includes a central processing unit (CPU) 3101, a graphics processing unit (GPU) 3102, a transceiver 340, a memory 350, and an input/output (I/O) device 330. The input/output (I/O) device 330 may be a keyboard or a mouse, and the graphics processor 3102 is used for graphics rendering. The memory 350 may include a read-only memory and a random access memory and provides operation instructions and data to the processor 310. A portion of the memory 350 may also include a non-volatile random access memory (NVRAM).
In some implementations, the memory 350 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In this embodiment of the present application, by invoking the operation instructions stored in the memory 350 (these operation instructions may be stored in the operating system):
the input/output device 330 is configured to receive a startup instruction for a 3D application;
the central processing unit 3101 is configured to start the 3D application in response to the startup instruction for the 3D application, where the 3D application includes a simulated object and a virtual screen on which the simulated object can watch video content;
the transceiver 340 is configured to receive a content source address sent by the 3D application server, where the content source address is the address of the content that the 3D application server is currently broadcasting;
the central processing unit 3101 is configured to acquire audio and video data from the content providing server according to the content source address;
the graphics processor 3102 is configured to render the audio and video data to obtain video content and audio content; and
the input/output device 330 is configured to play the audio content in the 3D application and present the video content on the virtual screen.
Compared with the prior art, in which other video content can be displayed alongside a 3D application only through a small window, the user equipment provided by this embodiment of the present application can present video content on the virtual screen of the 3D application without requiring the user to open an additional small window, which both improves the presentation quality of the video content and improves the efficiency of communication between the user, the interactive application, and the video content.
The central processing unit 3101 controls the operation of the user equipment 30. The memory 350 may include a read-only memory and a random access memory and provides instructions and data to the central processing unit 3101. A portion of the memory 350 may also include a non-volatile random access memory (NVRAM). In a specific application, the components of the user equipment 30 are coupled together through a bus system 320; in addition to a data bus, the bus system 320 may further include a power bus, a control bus, a status signal bus, and the like. For clarity of description, however, the various buses are all labeled as the bus system 320 in the figure.
The methods disclosed in the above embodiments of the present application may be applied to the processor 310 or implemented by the processor 310. The processor 310 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 310 or by instructions in the form of software. The processor 310 above may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 350, and the processor 310 reads the information in the memory 350 and completes the steps of the above methods in combination with its hardware.
Optionally, the graphics processor 3102 is configured to render the audio data through a webpage process to obtain the audio content, and render the video data through the webpage process to obtain the video content; and
the input/output device 330 is configured to play the audio content through a 3D application process and present the video content on the virtual screen through the 3D application process.
Optionally, the graphics processor 3102 is configured to: when rendering the (N+1)th image frame through the webpage process, determine the difference content between the (N+1)th image frame and the Nth image frame, and render only the difference content when rendering the (N+1)th image frame, where N is an integer greater than 0.
Optionally, the central processing unit 3101 is configured to perform an inverse gamma correction on the texture of the video content; and
the input/output device 330 is configured to present the inverse-gamma-corrected video content on the virtual screen through the 3D application process.
Optionally, the central processing unit 3101 is configured to introduce the audio content into the coordinate system of the interactive application in which the simulated object is located, and determine the sound intensity of the audio content at different coordinate points; and
the input/output device 330 is configured to play the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point.
Optionally, the central processing unit 3101 is configured to acquire the interaction content between the anchor of the video content played on the virtual screen and the simulated object; and
the input/output device 330 is configured to present the interaction content on the virtual screen.
Optionally, the input/output device 330 is configured to display, at the position of the simulated object, the interaction content between the simulated object and other simulated objects.
The user equipment 30 above can be understood with reference to the related descriptions in FIG. 1 to FIG. 11, and details are not repeated here.
A person of ordinary skill in the art can understand that all or some of the steps of the methods in the above embodiments may be completed by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, or an optical disc.
The content presentation method, user equipment, and system provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (15)

  1. A content presentation method, applied to a three-dimensional (3D) application system, wherein the 3D application system comprises a user equipment, a 3D application server, and a content providing server, and the method comprises:
    starting, by the user equipment in response to a startup instruction for a 3D application, the 3D application, wherein the 3D application comprises a simulated object and a virtual screen on which the simulated object can watch video content;
    receiving, by the user equipment, a content source address sent by the 3D application server, wherein the content source address is the address of the content that the 3D application server is currently broadcasting;
    acquiring, by the user equipment, audio and video data from the content providing server according to the content source address, and rendering the audio and video data to obtain video content and audio content; and
    playing, by the user equipment, the audio content in the 3D application, and presenting the video content on the virtual screen.
  2. The method according to claim 1, wherein the audio and video data comprises audio data and video data, and the rendering the audio and video data to obtain video content and audio content comprises:
    rendering, by the user equipment, the audio data through a webpage process to obtain the audio content, and rendering the video data through the webpage process to obtain the video content, wherein the webpage process is independent of a 3D application process; and
    the playing, by the user equipment, the audio content in the 3D application and presenting the video content on the virtual screen comprises:
    playing, by the user equipment, the audio content through the 3D application process, and presenting the video content on the virtual screen through the 3D application process.
  3. The method according to claim 2, wherein the video image data comprises a plurality of image frames, and the rendering the video data through the webpage process to obtain the video content comprises:
    when rendering the (N+1)th image frame through the webpage process, determining, by the user equipment, the difference content between the (N+1)th image frame and the Nth image frame, and rendering only the difference content when rendering the (N+1)th image frame, wherein N is an integer greater than 0.
  4. The method according to claim 2, wherein before the presenting the video content on the virtual screen through the 3D application process, the method further comprises:
    performing, by the user equipment, an inverse gamma correction on the texture of the video content; and
    the presenting the video content on the virtual screen through the 3D application process comprises:
    presenting, by the user equipment, the inverse-gamma-corrected video content on the virtual screen through the 3D application process.
  5. The method according to any one of claims 2 to 4, wherein before the user equipment plays the audio content through the 3D application process, the method further comprises:
    introducing, by the user equipment, the audio content into a coordinate system of an interactive application in which the simulated object is located, and determining the sound intensity of the audio content at different coordinate points; and
    the playing, by the user equipment, the audio content through the 3D application process comprises:
    playing, by the user equipment, the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point.
  6. The method according to any one of claims 1 to 4, wherein when the user equipment acquires the audio and video data from the content providing server according to the content source address, the method further comprises:
    acquiring, by the user equipment, interaction content between an anchor of the video content played on the virtual screen and the simulated object; and
    when the video content is presented on the virtual screen, the method further comprises:
    presenting, by the user equipment, the interaction content on the virtual screen.
  7. The method according to any one of claims 1 to 4, further comprising:
    displaying, at the position of the simulated object, interaction content between the simulated object and other simulated objects.
  8. A user equipment, applied to a 3D application system, wherein the 3D application system further comprises a 3D application server and a content providing server, and the user equipment comprises:
    a response unit configured to start a 3D application in response to a startup instruction for the 3D application, wherein the 3D application comprises a simulated object and a virtual screen on which the simulated object can watch video content;
    a receiving unit configured to receive, after the response unit starts the 3D application, a content source address sent by the 3D application server, wherein the content source address is the address of the content that the 3D application server is currently broadcasting;
    an acquiring unit configured to acquire audio and video data from the content providing server according to the content source address received by the receiving unit;
    a rendering unit configured to render the audio and video data acquired by the acquiring unit, to obtain video content and audio content;
    a playing unit configured to play, in the 3D application, the audio content rendered by the rendering unit; and
    a display unit configured to present, on the virtual screen, the video content rendered by the rendering unit.
  9. The user equipment according to claim 8, wherein
    the rendering unit is configured to: when the audio and video data comprises audio data and video data, render the audio data through a webpage process to obtain the audio content, and render the video data through the webpage process to obtain the video content, wherein the webpage process is independent of a 3D application process;
    the playing unit is configured to play the audio content through the 3D application process; and
    the display unit is configured to present the video content on the virtual screen through the 3D application process.
  10. The user equipment according to claim 9, wherein
    the rendering unit is configured to: when the video image data comprises a plurality of image frames and the (N+1)th image frame is rendered through the webpage process, determine the difference content between the (N+1)th image frame and the Nth image frame, and render only the difference content when rendering the (N+1)th image frame, wherein N is an integer greater than 0.
  11. The user equipment according to claim 9, further comprising a correction unit, wherein
    the correction unit is configured to perform an inverse gamma correction on the texture of the video content before the display unit presents the video content; and
    the display unit is configured to present, on the virtual screen through the 3D application process, the video content inverse-gamma-corrected by the correction unit.
  12. The user equipment according to any one of claims 9 to 11, further comprising a determining unit, wherein
    the determining unit is configured to introduce the audio content into a coordinate system of an interactive application in which the simulated object is located, and determine the sound intensity of the audio content at different coordinate points; and
    the playing unit is configured to play the audio content at the different coordinate points according to the sound intensity corresponding to each coordinate point as determined by the determining unit.
  13. The user equipment according to any one of claims 8 to 11, wherein
    the acquiring unit is further configured to acquire interaction content between an anchor of the video content played on the virtual screen and the simulated object; and
    the display unit is further configured to present the interaction content on the virtual screen.
  14. The user equipment according to any one of claims 8 to 11, wherein
    the display unit is further configured to display, at the position of the simulated object, interaction content between the simulated object and other simulated objects.
  15. A 3D application system, wherein the 3D application system comprises a user equipment, a 3D application server, and a content providing server; and
    the user equipment is the user equipment according to any one of claims 8 to 14.
PCT/CN2017/075437 2016-03-03 2017-03-02 Content presentation method, user equipment, and system WO2017148413A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/774,818 US11179634B2 (en) 2016-03-03 2017-03-02 Content presenting method, user equipment and system
US17/500,478 US11707676B2 (en) 2016-03-03 2021-10-13 Content presenting method, user equipment and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610120288.5 2016-03-03
CN201610120288.5A CN105740029B (zh) 2016-03-03 2016-03-03 Content presentation method, user equipment, and system

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/774,818 A-371-Of-International US11179634B2 (en) 2016-03-03 2017-03-02 Content presenting method, user equipment and system
US17/500,478 Continuation US11707676B2 (en) 2016-03-03 2021-10-13 Content presenting method, user equipment and system

Publications (1)

Publication Number Publication Date
WO2017148413A1 true WO2017148413A1 (zh) 2017-09-08

Family

ID=56249046

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075437 WO2017148413A1 (zh) 2016-03-03 2017-03-02 Content presentation method, user equipment, and system

Country Status (3)

Country Link
US (2) US11179634B2 (zh)
CN (1) CN105740029B (zh)
WO (1) WO2017148413A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351305A * 2020-10-15 2021-02-09 深圳Tcl新技术有限公司 Method for displaying network content, display device, and computer-readable storage medium
CN115499673A * 2022-08-30 2022-12-20 深圳市思为软件技术有限公司 Live streaming method and apparatus

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740029B (zh) 2016-03-03 2019-07-05 腾讯科技(深圳)有限公司 Content presenting method, user equipment and system
CN107529067A (zh) * 2016-08-29 2017-12-29 腾讯科技(深圳)有限公司 Video recommendation method and apparatus
CN107977272A (zh) * 2016-10-25 2018-05-01 腾讯科技(深圳)有限公司 Application running method and apparatus
CN106878820B (zh) * 2016-12-09 2020-10-16 北京小米移动软件有限公司 Live-streaming interaction method and apparatus
US10572280B2 (en) * 2017-02-17 2020-02-25 Google Llc Mobile application activity detector
CN109874043B (zh) * 2017-12-01 2021-07-27 腾讯科技(深圳)有限公司 Video stream sending method, playing method and apparatus
US10818093B2 (en) * 2018-05-25 2020-10-27 Tiff's Treats Holdings, Inc. Apparatus, method, and system for presentation of multimedia content including augmented reality content
CN109388467B (zh) * 2018-09-30 2022-12-02 阿波罗智联(北京)科技有限公司 Map information display method and apparatus, computer device, and storage medium
CN110286960B (zh) * 2019-06-27 2022-07-22 北京金山安全软件有限公司 Image file loading method and apparatus, electronic device, and storage medium
CN110392273B (zh) * 2019-07-16 2023-08-08 北京达佳互联信息技术有限公司 Audio and video processing method and apparatus, electronic device, and storage medium
CN113656638B (zh) * 2021-08-16 2024-05-07 咪咕数字传媒有限公司 User information processing method, apparatus and device for watching live streams
CN114567732A (zh) * 2022-02-23 2022-05-31 咪咕数字传媒有限公司 Data display method and apparatus, electronic device, and computer storage medium
CN115237248B (zh) * 2022-06-20 2024-10-15 北京有竹居网络技术有限公司 Virtual object display method, apparatus, device, storage medium, and program product

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090021513A1 (en) * 2007-07-18 2009-01-22 Pixblitz Studios Inc. Method of Customizing 3D Computer-Generated Scenes
CN102176197A * 2011-03-23 2011-09-07 上海那里网络科技有限公司 Method for real-time interaction using a virtual avatar and real-time video
CN103096134A * 2013-02-08 2013-05-08 广州博冠信息科技有限公司 Data processing method and device based on live video streaming and games
CN103517145A * 2012-06-19 2014-01-15 鸿富锦精密工业(深圳)有限公司 Video playing method and system in a virtual environment
CN104740874A * 2015-03-26 2015-07-01 广州博冠信息科技有限公司 Method and system for playing video in a two-dimensional game scene
CN105187939A * 2015-09-21 2015-12-23 合一网络技术(北京)有限公司 Method and apparatus for playing video in a web game
CN105610868A * 2016-03-03 2016-05-25 腾讯科技(深圳)有限公司 Information interaction method, device, and system
CN105740029A * 2016-03-03 2016-07-06 腾讯科技(深圳)有限公司 Content presenting method, user equipment and system

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR930003757A (ko) * 1991-07-31 1993-02-24 오오가 노리오 Video signal transmission apparatus and method
JPH1063470A (ja) * 1996-06-12 1998-03-06 Nintendo Co Ltd Sound generating device linked to image display
US5850455A (en) * 1996-06-18 1998-12-15 Extreme Audio Reality, Inc. Discrete dynamic positioning of audio signals in a 360° environment
CN1119763C (zh) * 1998-03-13 2003-08-27 西门子共同研究公司 Apparatus and method for collaborative dynamic video annotation
JP2002199500A (ja) * 2000-12-25 2002-07-12 Sony Corp Virtual sound image localization processing apparatus, virtual sound image localization processing method, and recording medium
CN100341029C (zh) * 2004-07-02 2007-10-03 四川华控图形科技有限公司 Geometric correction method for curved-surface projection in a simulated scene
US7933328B2 (en) * 2005-02-02 2011-04-26 Broadcom Corporation Rate control for digital video compression processing
US20080307473A1 (en) * 2007-06-06 2008-12-11 Virtual Worlds Ppv, Llc Virtual worlds pay-per-view
US8600808B2 (en) * 2007-06-07 2013-12-03 Qurio Holdings, Inc. Methods and systems of presenting advertisements in consumer-defined environments
US8112490B2 (en) * 2008-05-15 2012-02-07 Upton Kevin S System and method for providing a virtual environment with shared video on demand
JP2010122826A (ja) * 2008-11-18 2010-06-03 Sony Computer Entertainment Inc Online conversation system, online conversation server, online conversation control method, and program
CA2775828C (en) * 2009-09-29 2016-03-29 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Audio signal decoder, audio signal encoder, method for providing an upmix signal representation, method for providing a downmix signal representation, computer program and bitstream using a common inter-object-correlation parameter value
US8550914B2 (en) * 2011-04-08 2013-10-08 Disney Enterprises, Inc. Recording audio in order to affect gameplay experience
US8943396B2 (en) * 2011-07-18 2015-01-27 At&T Intellectual Property I, Lp Method and apparatus for multi-experience adaptation of media content
CN103389793B (zh) * 2012-05-07 2016-09-21 深圳泰山在线科技有限公司 Human-computer interaction method and system
JP6147486B2 (ja) * 2012-11-05 2017-06-14 任天堂株式会社 Game system, game processing control method, game device, and game program
US20140195594A1 (en) * 2013-01-04 2014-07-10 Nvidia Corporation Method and system for distributed processing, rendering, and displaying of content
US9218468B1 (en) * 2013-12-16 2015-12-22 Matthew B. Rappaport Systems and methods for verifying attributes of users of online systems
US9466278B2 (en) * 2014-05-08 2016-10-11 High Fidelity, Inc. Systems and methods for providing immersive audio experiences in computer-generated virtual environments
US9645648B2 (en) * 2014-09-18 2017-05-09 Mary A. Spio Audio computer system for interacting within a virtual reality environment
GB201508074D0 (en) * 2015-05-12 2015-06-24 Apical Ltd People detection
EP3145220A1 (en) * 2015-09-21 2017-03-22 Dolby Laboratories Licensing Corporation Rendering virtual audio sources using loudspeaker map deformation

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090021513A1 (en) * 2007-07-18 2009-01-22 Pixblitz Studios Inc. Method of Customizing 3D Computer-Generated Scenes
CN102176197A * 2011-03-23 2011-09-07 上海那里网络科技有限公司 Method for real-time interaction using a virtual avatar and real-time video
CN103517145A * 2012-06-19 2014-01-15 鸿富锦精密工业(深圳)有限公司 Video playing method and system in a virtual environment
CN103096134A * 2013-02-08 2013-05-08 广州博冠信息科技有限公司 Data processing method and device based on live video streaming and games
CN104740874A * 2015-03-26 2015-07-01 广州博冠信息科技有限公司 Method and system for playing video in a two-dimensional game scene
CN105187939A * 2015-09-21 2015-12-23 合一网络技术(北京)有限公司 Method and apparatus for playing video in a web game
CN105610868A * 2016-03-03 2016-05-25 腾讯科技(深圳)有限公司 Information interaction method, device, and system
CN105740029A * 2016-03-03 2016-07-06 腾讯科技(深圳)有限公司 Content presenting method, user equipment and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351305A * 2020-10-15 2021-02-09 深圳Tcl新技术有限公司 Method for displaying network content, display device, and computer-readable storage medium
CN112351305B * 2020-10-15 2024-04-30 深圳Tcl新技术有限公司 Method for displaying network content, display device, and computer-readable storage medium
CN115499673A * 2022-08-30 2022-12-20 深圳市思为软件技术有限公司 Live streaming method and apparatus
CN115499673B * 2022-08-30 2023-10-20 深圳市思为软件技术有限公司 Live streaming method and apparatus

Also Published As

Publication number Publication date
US20180318713A1 (en) 2018-11-08
US11179634B2 (en) 2021-11-23
CN105740029B (zh) 2019-07-05
US11707676B2 (en) 2023-07-25
CN105740029A (zh) 2016-07-06
US20220072422A1 (en) 2022-03-10

Similar Documents

Publication Publication Date Title
WO2017148413A1 (zh) Content presenting method, user equipment and system
US9990029B2 (en) Interface object and motion controller for augmented reality
US10861222B2 (en) Information interaction method, device, and system
CN108156520B (zh) Video playing method and apparatus, electronic device, and storage medium
US20170195650A1 (en) Method and system for multi point same screen broadcast of video
CN111314720A (zh) Method and apparatus for controlling co-hosting in a live stream, electronic device, and computer-readable medium
US20070150612A1 (en) Method and system of providing multimedia content
US20140087877A1 (en) Compositing interactive video game graphics with pre-recorded background video content
US20140195912A1 (en) Method and system for simultaneous display of video content
JP2017056193A (ja) Remote rendering server with broadcaster
US8878997B2 (en) Electronic displays having paired canvases
WO2016188276A1 (zh) Video playing method, client, and computer storage medium
WO2020133372A1 (zh) Video subtitle processing method and broadcast directing system
US20230107414A1 (en) Method for controlling virtual object
US20170171277A1 (en) Method and electronic device for multimedia recommendation based on android platform
TWI637772B (zh) System and method for delivering media over a network
CN112188264A (zh) Method and terminal for multi-window video playback based on Android
CN111836110A (zh) Match video display method and apparatus, electronic device, and storage medium
WO2020258907A1 (zh) Virtual item generation method, apparatus and device
CN105635834A (zh) Method and apparatus for displaying match results
WO2024061243A1 (en) Live stream interactive method, device, apparatus and storage medium
WO2017185645A1 (zh) Vertical full-screen playing method and apparatus, and mobile playing terminal thereof
CN115604500A (zh) Live room page display method and apparatus, electronic device, and storage medium
CN112929685B (zh) Interaction method and apparatus for a VR live room, electronic device, and storage medium
US9384276B1 (en) Reducing latency for remotely executed applications

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 15774818

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17759264

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17759264

Country of ref document: EP

Kind code of ref document: A1