WO2017148410A1 - Method, Device and System for Information Interaction - Google Patents

Method, Device and System for Information Interaction (一种信息交互的方法、设备及系统)

Info

Publication number
WO2017148410A1
Authority
WO
WIPO (PCT)
Prior art keywords
user equipment
interaction information
information
interaction
community
Prior art date
Application number
PCT/CN2017/075433
Other languages
English (en)
French (fr)
Inventor
王利
张伟
夏小伟
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to KR1020187015519A (KR102098669B1)
Priority to US15/774,377 (US10861222B2)
Priority to JP2018528068A (JP6727669B2)
Publication of WO2017148410A1

Classifications

    • A63F 13/85, 13/86 — Providing additional services to players; watching games played by other players
    • A63F 13/30, 13/35 — Interconnection arrangements between game servers and game devices; details of game servers
    • A63F 13/60, 13/69 — Generating or modifying game content, e.g. enabling or updating specific game elements such as unlocking hidden features, items, levels or versions
    • A63F 13/70, 13/79, 13/795 — Game security or game management aspects involving player-related data, e.g. for finding other players, building a team, or providing a buddy list
    • G06T 15/00, 15/02, 15/08 — 3D image rendering; non-photorealistic rendering; volume rendering
    • G06T 15/10, 15/20, 15/205 — Geometric effects; perspective computation; image-based rendering
    • G06F 3/048, 3/0481, 3/04815 — GUI interaction techniques, including interaction with a metaphor-based environment or an interaction object displayed as three-dimensional
    • H04L 67/00, 67/01, 67/131 — Network arrangements or protocols for supporting network services; protocols for games, networked simulations or virtual reality
    • H04N 21/2187 — Live feed
    • H04N 21/44012 — Rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/47205 — End-user interface for manipulating displayed content
    • H04N 21/4781, 21/4788 — Supplemental services: games; communicating with other users, e.g. chatting
    • H04N 21/816, 21/8166, 21/8173 — Monomedia components involving special video data (e.g. 3D video) or executable data (e.g. end-user applications such as a Web browser or game)

Definitions

  • The present invention relates to the field of 3D technologies, and in particular to a method, device and system for information interaction.
  • Interactive applications involving three-dimensional (3D) scenes have become very popular in the prior art.
  • A 3D application system usually includes a user equipment and a 3D application server, and the user equipment can acquire data of the interactive application from the 3D application server and display the interactive application.
  • Anchor video using the Internet live broadcast mode has also become very popular.
  • However, an anchor in the Internet live broadcast mode can only communicate with users on the corresponding live broadcast platform, and cannot be integrated into a 3D application.
  • Embodiments of the present invention provide a method for information interaction, in which an anchor can interact with a viewer in a 3D community in a 3D application, thereby increasing the diversity of interaction.
  • the embodiments of the present invention also provide corresponding user equipment, servers, and systems.
  • A first aspect of the present invention provides a method for information interaction, applied to a 3D application system that includes a 3D application server, a video source server, a first user equipment, and a second user equipment, where the 3D community in the 3D application displayed by the second user equipment includes a simulated object and a virtual screen, and the method includes:
  • the second user equipment receiving interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to an interaction request uploaded by the first user equipment to the video source server; and
  • the second user equipment rendering an object corresponding to the interaction information in the 3D community according to the interaction information.
  • A second aspect of the present invention provides a method for information interaction, applied to a 3D application system that includes a 3D application server, a video source server, a first user equipment, and a second user equipment, where the 3D community in the 3D application displayed by the second user equipment includes a simulated object and a virtual screen, and the method includes:
  • the 3D application server receiving an information interaction request from the video source server, where the information interaction request is uploaded to the video source server by the first user equipment;
  • the 3D application server generating interaction information according to the information interaction request; and
  • the 3D application server sending the interaction information to the second user equipment, where the interaction information is used by the second user equipment to render an object corresponding to the interaction information in the 3D community.
  • A third aspect of the present invention provides a user equipment, where the user equipment is the second user equipment in a 3D application system, the 3D application system further includes a 3D application server, a video source server, and a first user equipment, and the 3D community in the 3D application displayed by the user equipment includes a simulated object and a virtual screen; the user equipment includes:
  • an acquiring unit configured to acquire video content uploaded by the first user equipment from the video source server, and display the video content on the virtual screen;
  • a receiving unit configured to receive interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to an interaction request that the first user equipment uploads to the video source server; and
  • a rendering unit configured to render an object corresponding to the interaction information in the 3D community according to the interaction information received by the receiving unit.
  • A fourth aspect of the present invention provides a server, where the server is applied to a 3D application system,
  • the 3D application system further includes a video source server, a first user device, and a second user device, and the 3D community in the 3D application displayed by the second user device includes a simulated object and a virtual screen; the 3D application server includes:
  • an acquiring unit configured to acquire video content uploaded by the first user equipment from the video source server, and send the video content to the second user equipment so that the video content is presented on the virtual screen of the second user equipment;
  • a receiving unit configured to receive an information interaction request from the video source server, where the information interaction request is uploaded to the video source server by the first user equipment;
  • a generating unit configured to generate interaction information according to the information interaction request received by the receiving unit; and
  • a sending unit configured to send, to the second user equipment, the interaction information generated by the generating unit, where the interaction information is used by the second user equipment to render an object corresponding to the interaction information in the 3D community.
  • A fifth aspect of the present invention provides a 3D application system, including: a 3D application server, a video source server, a first user device, and a second user device, where the 3D community in the 3D application displayed by the second user device includes a simulated object and a virtual screen;
  • the second user equipment is the user equipment described in the foregoing third aspect; and
  • the 3D application server is the 3D application server described in the foregoing fourth aspect.
  • The information interaction method provided by the embodiments of the present invention enables the anchor to interact with the audience in the 3D community of the 3D application, thereby increasing the diversity of interaction.
  • FIG. 1 is a schematic diagram of an example of an anchor controlling the 3D community environment in an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a process of presenting video content of a live broadcast on a virtual screen in an embodiment of the present invention
  • FIG. 4 is a schematic diagram of an embodiment of a method for information interaction in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an example of an anchor delivering a gift to a 3D community in an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of another embodiment of a method for information interaction in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another embodiment of a method for information interaction in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an embodiment of a user equipment according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an embodiment of a 3D application server according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another embodiment of a 3D application server according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of another embodiment of a user equipment according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of another embodiment of a 3D application server according to an embodiment of the present invention.
  • Embodiments of the present invention provide a method for information interaction, in which an anchor can interact with a viewer in a 3D community in a 3D application, thereby increasing the diversity of interaction.
  • the embodiments of the present invention also provide corresponding user equipment, servers, and systems. The details are described below separately.
  • The 3D application system in the embodiments of the present invention can be understood as a 3D game system.
  • 3D games: 3D electronic games based on 3D computer graphics, including but not limited to online 3D games for multiple players, standalone 3D games for a single player, and virtual reality games based on 3D game systems.
  • The system has universally applicable properties across platforms: 3D games on the game console platform, the mobile game platform, and the PC game platform are all included.
  • 3D community: a virtual community environment in a 3D game, i.e., a game environment produced based on 3D computer graphics.
  • The 3D community may include simulated objects corresponding to the players in the game, and the 3D community in the present application includes a virtual screen, preferably a large virtual screen similar to an outdoor display.
  • Game anchor: an individual who reports on and commentates games through electronic media such as the Internet.
  • The invention uses the technology of directly displaying Internet live broadcast video in the 3D community and establishes a communication mechanism on top of it, which allows the anchor of the Internet live broadcast video to produce richer interaction behaviors with the viewers in the 3D community.
  • In addition to direct video and audio display, the interactive methods include controlling the weather in the 3D community and launching fireworks. For example, at Christmas time the anchor can control the weather, make it snow in the 3D community, and set off fireworks in the community, which greatly increases the festive atmosphere and enhances the audience's sense of participation.
  • As shown in FIG. 1, through a similar communication mechanism the anchor can control the weather of the 3D community, and can also switch the 3D community to night, light lanterns, and so on, greatly increasing the interaction between the anchor and the audience.
  • FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present invention.
  • The 3D application system includes: a 3D application server, a video source server, a first user equipment used by the anchor, and a second user equipment used by the player; there may be more than one second user equipment.
  • The anchor broadcasts the game through the first user equipment, and the live game video stream is uploaded to the video source server; because the anchor is pre-registered on the game server, the source address of the content currently being broadcast is stored in the game server.
  • The player can also interact with the anchor, so during the live broadcast there may be interaction between the anchor and the simulated objects; the user equipment can therefore obtain both the live video stream and the interactive content from the content providing server.
  • The audio and video are rendered to obtain the corresponding audio content and video content; the audio content is played in the 3D application, and the video content and interactive content are presented through the virtual screen.
  • The first user equipment sends an interaction request to the video source server, the video source server forwards the interaction request to the 3D application server, and the 3D application server generates interaction information according to the interaction request and transmits the interaction information to the second user equipment.
  • the second user equipment renders an object corresponding to the interaction information in the 3D community according to the interaction information.
  • the object corresponding to the interaction information in the embodiment of the present invention may include a simulated person object, and may also include an environment object, and the environment object may include objects such as weather, lanterns, and fireworks.
  • An anchor is broadcasting a video on the Internet, and the anchor submits the live video stream to the video source server through the first user equipment.
  • After the user opens the 3D application on the second user device, the program initializes the 3D rendering engine in the 3D application.
  • The program then automatically requests the address of the anchor video source currently being broadcast.
  • the 3D application server sends a video source address to the audio and video rendering module.
  • the audio and video rendering module requests a data stream of the live video from the content providing server.
  • the live video server returns a video data stream to the audio and video rendering module.
  • the audio and video rendering module uses audio and video data to render audio.
  • the audio and video rendering module submits audio data to an audio engine within the 3D application.
  • the audio and video rendering module uses audio and video data to render a video single frame image.
  • the audio and video rendering module submits image data to a 3D rendering engine within the 3D application.
  • The rendering engine uses the rendered single-frame data to present the 3D video image to the user while the audio content is played.
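The playback steps above can be sketched as a minimal loop. The stream format, function names, and payloads below are illustrative assumptions, not part of the patent:

```python
# Sketch of the client-side pipeline: request the live source address,
# pull the data stream, split it into audio and video frames, and hand
# each to the corresponding engine.

def request_source_address(app_server):
    # The 3D application server returns the anchor's current source address.
    return app_server["live_source_address"]

def fetch_stream(address):
    # Stand-in for pulling the live data stream from the video source server.
    return [("audio", "a0"), ("video", "f0"), ("audio", "a1"), ("video", "f1")]

def render_loop(app_server, audio_engine, render_engine):
    address = request_source_address(app_server)
    for kind, payload in fetch_stream(address):
        if kind == "audio":
            audio_engine.append(payload)   # submit audio data to the audio engine
        else:
            render_engine.append(payload)  # submit a single-frame image to the 3D engine

audio_engine, render_engine = [], []
render_loop({"live_source_address": "rtmp://example/live"}, audio_engine, render_engine)
print(audio_engine, render_engine)  # ['a0', 'a1'] ['f0', 'f1']
```

In the real system the decode step would produce PCM audio and texture data rather than strings; the split-and-submit structure is the point of the sketch.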
  • An embodiment of the interaction between the anchor and the viewer, that is, the information interaction in the embodiment of the present invention, can be understood with reference to FIG. 4.
  • the first user equipment reports an interaction request to the video source server.
  • the video source server forwards the interaction request to the 3D application server.
  • the 3D application server generates interaction information according to the interaction request.
  • the 3D application server sends the interaction information to the second user equipment.
  • the second user equipment renders an object corresponding to the interaction information in the 3D community according to the interaction information.
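The five steps above form a small relay protocol. It can be sketched as follows; all class and field names (`VideoSourceServer`, the `fireworks` request, and so on) are illustrative assumptions:

```python
# Minimal sketch of the interaction-request relay: first user equipment
# -> video source server -> 3D application server -> second user equipment.

class AppServer3D:
    """Generates interaction information and pushes it to viewer devices."""
    def __init__(self):
        self.second_user_devices = []

    def handle_interaction_request(self, request):
        info = {"type": request["type"], "params": request.get("params", {})}
        for device in self.second_user_devices:
            device.receive_interaction_info(info)
        return info

class VideoSourceServer:
    """Receives interaction requests from the anchor's device and forwards them."""
    def __init__(self, app_server):
        self.app_server = app_server

    def upload_interaction_request(self, request):
        return self.app_server.handle_interaction_request(request)

class SecondUserDevice:
    def __init__(self):
        self.rendered = []

    def receive_interaction_info(self, info):
        # Render an object corresponding to the interaction information
        # in the 3D community (here: just record its type).
        self.rendered.append(info["type"])

app = AppServer3D()
viewer = SecondUserDevice()
app.second_user_devices.append(viewer)
source = VideoSourceServer(app)

# The anchor's (first) user equipment uploads a "fireworks" request.
source.upload_interaction_request({"type": "fireworks"})
print(viewer.rendered)  # ['fireworks']
```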
  • the interaction request may be a target simulation object generation request
  • the 3D application server generates interaction information of the target simulation object according to the target simulation object generation request.
  • the second user equipment receives the interaction information sent by the 3D application server for rendering the target simulation object.
  • the second user equipment renders the target simulation object in the 3D community according to the interaction information for rendering a target simulation object.
  • the target simulation object is used to send a value package to a simulated object in the 3D community.
  • the method further includes:
  • the 3D application server obtains a value packet sending request from the video source server, where the value packet sending request is uploaded to the video source server by the first user equipment;
  • the 3D application server generates interaction information of the value package according to the value packet sending request
  • the second user equipment renders the value packet on a movement trajectory of the target simulation object according to the value package information.
  • When the value packet is obtained by a specific simulation object, the method further includes:
  • the second user equipment receiving a notification message, sent by the 3D application server, indicating that the value packet has been obtained, where the specific simulation object is a simulation object corresponding to the second user equipment.
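The value-packet flow above can be sketched as two small functions: packets are placed along the target simulation object's movement trajectory, and a device whose own simulation object reaches a packet obtains it and is notified. The names and the drop rule (every second waypoint) are assumptions for illustration:

```python
# Illustrative sketch of rendering value packets on the movement
# trajectory of the target simulation object and claiming them.

def drop_points(trajectory, interval):
    """Choose every `interval`-th waypoint of the route as a drop position."""
    return trajectory[::interval]

def claim(packet_positions, my_position):
    """Return a notification if this device's simulation object stands on a packet."""
    if my_position in packet_positions:
        return {"notification": "value packet obtained", "at": my_position}
    return None

route = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
packets = drop_points(route, 2)
print(packets)               # [(0, 0), (2, 0), (4, 0)]
print(claim(packets, (2, 0)))
```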
  • the information interaction request is an environment object rendering request
  • the 3D application server generates interaction information of the environment object according to the environment object rendering request
  • the 3D application server sends the interaction information of the environment object to the second user equipment, where the interaction information of the environment object is used by the second user equipment to render the environment object in the 3D community;
  • the second user equipment renders the environment object in the 3D community according to the interaction information for rendering an environment object.
  • FIG. 5 is a schematic diagram of the anchor dispatching a messenger in the 3D community.
  • The anchor sends a messenger representing the anchor into the 3D community; the messenger is the target simulation object, which can walk in the 3D community and distribute gifts to the simulated objects in the 3D community.
  • The interaction process by which the messenger distributes gifts can be understood with reference to the description of FIG. 6.
  • another embodiment of the information interaction provided by the embodiment of the present invention includes:
  • The anchor logs in to the community management page through the first user equipment.
  • The 3D application server authenticates the anchor's identity.
  • The authentication result is returned.
  • the anchor selects the 3D community through the first user equipment.
  • the 3D application server allocates the selected 3D community to the anchor.
  • Steps 301-305 may be implemented by integrating the corresponding 3D community page management, 3D community portal, and 3D community assignment functions on the 3D application server.
  • The functions of the foregoing steps 301-305 may also be performed by three independent servers: a management page server, a 3D community portal server, and a 3D community server, where the management page server is responsible for presenting the management page interface to the user, the 3D community portal server is responsible for interacting with the management page server and for storing data such as anchor information, and the 3D community server receives the interaction request from the anchor and forwards it to the 3D application server.
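Steps 301-305 (login, authentication, community selection, and assignment) can be sketched as follows. The credential store, community registry, and all identifiers are illustrative assumptions:

```python
# Sketch of anchor authentication and 3D community assignment.

REGISTERED_ANCHORS = {"anchor_1": "secret"}  # assumed pre-registration store
COMMUNITY_OWNERS = {}                        # community id -> anchor id

def authenticate(anchor_id, password):
    """Step 302/303: authenticate the anchor and return the result."""
    return REGISTERED_ANCHORS.get(anchor_id) == password

def assign_community(anchor_id, community_id):
    """Step 304/305: assign the selected 3D community if it is free."""
    if community_id in COMMUNITY_OWNERS:
        return False  # community already assigned to another anchor
    COMMUNITY_OWNERS[community_id] = anchor_id
    return True

if authenticate("anchor_1", "secret"):
    ok = assign_community("anchor_1", "community_7")
    print(ok, COMMUNITY_OWNERS)
```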
  • the anchor sends a messenger request to the 3D application server by using the first user equipment.
  • the 3D application server generates messenger interaction information according to the messenger request.
  • the 3D application server broadcasts the messenger interaction information to the second user equipment.
  • the second user equipment renders the messenger in the 3D community according to the messenger interaction information.
  • the 3D application server sends a messenger mobile route to the second user equipment.
  • the second user equipment renders the moving messenger according to the messenger moving route.
  • the 3D application server sends the item information to the second user equipment.
  • the messenger can also drop other gifts.
  • the value package in this application can include props and other gifts.
  • the second user equipment renders the messenger drop item according to the item information.
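Steps 306-313 above reduce to the 3D application server broadcasting three kinds of messages (the messenger, its moving route, and the dropped items) to every second user equipment, each of which renders what it receives. A minimal sketch, with all names and payloads as illustrative assumptions:

```python
# Sketch of the messenger broadcast flow: every viewer device receives
# and renders the same sequence of messenger events.

class Viewer:
    def __init__(self):
        self.scene = []  # stand-in for the rendered 3D community state

    def on_broadcast(self, message):
        self.scene.append((message["kind"], message["data"]))

def broadcast(viewers, kind, data):
    for v in viewers:
        v.on_broadcast({"kind": kind, "data": data})

viewers = [Viewer(), Viewer()]
broadcast(viewers, "messenger", "santa")       # render the messenger
broadcast(viewers, "route", [(0, 0), (5, 5)])  # move it along the route
broadcast(viewers, "item", "gift_box")         # render the dropped item
print(viewers[0].scene)
```

Broadcasting rather than unicasting is what keeps every viewer's 3D community in the same state.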
  • the present application implements an automated, immersive, and interesting gift giving system, which greatly enhances the user interaction with the anchor in the virtual community.
  • another embodiment of the information interaction provided by the embodiment of the present invention includes:
  • The anchor logs in to the community management page through the first user equipment.
  • The 3D application server authenticates the anchor's identity.
  • the anchor selects the 3D community through the first user equipment.
  • the 3D application server allocates the selected 3D community to the anchor.
  • Steps 401-405 are the same as steps 301-305 in the foregoing embodiment.
  • the anchor sends a weather setting request to the 3D application server by using the first user equipment.
  • the 3D application server generates weather interaction information according to the weather setting request.
  • the 3D application server broadcasts weather interaction information to the second user equipment.
  • the second user equipment filters the weather parameters according to the weather interaction information.
  • the 3D rendering engine first applies a linear transition to the relevant coefficients in the weather parameters, so that the change does not feel abrupt to users in the 3D community.
  • the second user equipment sets a time when the weather changes.
  • the second user equipment calculates atmospheric scattering.
  • the atmospheric scattering is calculated according to the set time; because the color of the sky is determined by atmospheric scattering, the Mie-Rayleigh algorithm is used to render the sky color.
  • the second user equipment determines a rendering result of the weather.
  • the second user equipment sets the particle type (rain or snow).
  • the second user equipment calculates a special effect.
  • the second user equipment determines a final rendering result.
  • the second user equipment displays the final rendering result.
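The linear transition of weather coefficients mentioned above can be sketched as follows. The parameter names and the [0, 1] blend factor are illustrative assumptions; the embodiment only states that the relevant coefficients are transitioned linearly so the change is not abrupt.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def transition_weather(current, target, t):
    """Blend each weather coefficient linearly toward its target value so
    that the weather set by the anchor fades in gradually rather than
    switching abruptly for users in the 3D community."""
    return {name: lerp(current[name], target[name], t) for name in current}

# Assumed example coefficients: clear sky fading toward heavy rain.
current = {"cloud_cover": 0.1, "fog_density": 0.0, "rain_intensity": 0.0}
target = {"cloud_cover": 0.9, "fog_density": 0.4, "rain_intensity": 1.0}
halfway = transition_weather(current, target, 0.5)
```

Each frame, the engine would advance `t` according to the transition time set in the next step, re-evaluating the blended coefficients before computing scattering and particle effects.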
  • the user equipment provided by this embodiment of the present invention is the second user equipment in a 3D application system, where the 3D application system includes a 3D application server, a video source server, a first user equipment and the second user equipment;
  • the first user equipment is configured to respond to interaction operations of the anchor;
  • the 3D community in the 3D application displayed by the second user equipment includes simulation objects and a virtual screen on which the simulation objects view video content;
  • the user equipment 50 includes:
  • the obtaining unit 501 is configured to acquire video content uploaded by the first user equipment from a video source server, and display the video content on the virtual screen.
  • the receiving unit 502 is configured to: after the acquiring unit 501 acquires the video content, receive the interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server;
  • the rendering unit 503 is configured to render an object corresponding to the interaction information in the 3D community according to the interaction information received by the receiving unit 502.
  • the acquiring unit 501 acquires the video content of the anchor live broadcast uploaded by the first user equipment from the video source server, and displays the video content on the virtual screen;
  • after the acquiring unit 501 acquires the video content, the receiving unit 502 receives the interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server;
  • the rendering unit 503 renders an object corresponding to the interaction information in the 3D community according to the interaction information received by the receiving unit 502.
  • the user equipment provided by this embodiment of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
  • the receiving unit 502 is configured to receive interaction information that is sent by the 3D application server and used to render a target simulation object.
  • the rendering unit 503 is configured to render the target simulation object in the 3D community according to the interaction information used by the receiving unit 502 for rendering a target simulation object.
  • the target simulation object is used to send a value package to a simulated object in the 3D community.
  • the receiving unit 502 is further configured to receive the value packet information sent by the 3D application server;
  • the rendering unit 503 is configured to render the value package on a moving track of the target simulation object according to the value package information.
  • the receiving unit 502 is further configured to: when a simulation object in the 3D community acquires a value package sent by the target simulation object, receive a notification message, sent by the 3D application server, indicating that the value package has been obtained by the simulation object.
  • the receiving unit 502 is configured to receive interaction information that is sent by the 3D application server and used to render an environment object.
  • the rendering unit 503 is configured to render the environment object in the 3D community according to the interaction information used by the receiving unit 502 to render an environment object.
  • the 3D application server 60 is applied to a 3D application system, where the 3D application system further includes a video source server, a first user equipment and a second user equipment; the first user equipment is configured to respond to interaction operations of the anchor, and the 3D community in the 3D application displayed by the second user equipment includes simulation objects and a virtual screen on which the simulation objects view the video content broadcast by the anchor; the 3D application server includes:
  • the obtaining unit 601 is configured to acquire, from the video source server, the video content uploaded by the first user equipment, and send the uploaded video content to the second user equipment, so that the video content is displayed on the virtual screen of the second user equipment;
  • the receiving unit 602 is configured to receive an information interaction request from the video source server, where the information interaction request is uploaded to the video source server by the first user equipment;
  • the generating unit 603 is configured to generate interaction information according to the information interaction request received by the receiving unit 602.
  • the sending unit 604 is configured to send the interaction information generated by the generating unit 603 to the second user equipment, where the interaction information is used by the second user equipment to render, in the 3D community, the object corresponding to the interaction information.
  • the obtaining unit 601 acquires, from the video source server, the video content uploaded by the first user equipment, and sends the uploaded video content to the second user equipment so that the video content is displayed on the virtual screen of the second user equipment; the receiving unit 602 receives an information interaction request from the video source server, where the information interaction request is uploaded by the first user equipment to the video source server; the generating unit 603 generates interaction information according to the information interaction request received by the receiving unit 602; and the sending unit 604 sends the interaction information generated by the generating unit 603 to the second user equipment, where the interaction information is used by the second user equipment to render, in the 3D community, the object corresponding to the interaction information.
  • the 3D application server provided by this embodiment of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
  • the information interaction request is a target simulation object generation request; and the generating unit 603 is configured to generate the interaction information of the target simulation object according to the target simulation object generation request;
  • the sending unit 604 is configured to send the interaction information of the target simulation object to the second user equipment, so that the second user equipment renders the target simulation object in the 3D community in response to the interaction information of the target simulation object.
  • the receiving unit 602 is further configured to receive a value packet sending request from the video source server, where the value packet sending request is uploaded to the video source server by the first user equipment;
  • the generating unit 603 is further configured to generate interaction information of the value packet according to the value packet sending request;
  • the sending unit 604 is further configured to send the interaction information of the value package to the second user equipment, so that the second user equipment renders the value package on the moving track of the target simulation object in response to the interaction information of the value package.
  • the 3D application server provided by the embodiment of the present invention further includes a monitoring unit 605.
  • the monitoring unit 605 is configured to monitor a simulated object in the 3D community to obtain a value packet sent by the target simulated object;
  • the sending unit 604 is configured to: when the monitoring unit 605 detects that a simulation object in the 3D community acquires a value package sent by the target simulation object, send, to the second user equipment, a notification message indicating that the value package has been obtained by the simulation object.
  • the information interaction request is an environment object rendering request
  • the generating unit 603 is configured to generate interaction information of the environment object according to the environment object rendering request.
  • the sending unit 604 is configured to send the interaction information of the environment object to the second user equipment, so that the second user equipment renders the environment object in the 3D community in response to the interaction information of the environment object.
  • FIG. 11 is a schematic structural diagram of a user equipment 50 according to an embodiment of the present invention.
  • the user equipment 50 is the second user equipment in a 3D application system, where the 3D application system includes a 3D application server, a video source server, a first user equipment and the second user equipment; the first user equipment is configured to respond to interaction operations of the anchor; and the 3D community in the 3D application displayed by the second user equipment includes simulation objects and a virtual screen on which the simulation objects view video content.
  • the user equipment 50 in the present application includes a central processing unit (CPU) 5101, a graphics processing unit (GPU) 5102, a transceiver 540, a memory 550 and an input/output (I/O) device 530; the I/O device 530 may be a keyboard or a mouse, and the graphics processor 5102 is used for graphics rendering.
  • memory 550 can include read only memory and random access memory, and provides operational instructions and data to processor 510.
  • a portion of the memory 550 may also include non-volatile random access memory (NVRAM).
  • the memory 550 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof:
  • the operation instructions may be stored in an operating system.
  • the transceiver 540 is configured to acquire, from the video source server, the video content of the anchor's live broadcast uploaded by the first user equipment, display the video content on the virtual screen, and receive the interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server;
  • the graphics processor 5102 is configured to render, in the 3D community according to the interaction information, the object corresponding to the interaction information.
  • the user equipment provided by this embodiment of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
  • the central processing unit 5101 controls the operation of the user device 50.
  • Memory 550 can include read only memory and random access memory and provides instructions and data to central processor 5101. A portion of the memory 550 may also include non-volatile random access memory (NVRAM).
  • the specific components of the user equipment 50 are coupled together by a bus system 520 in a specific application.
  • the bus system 520 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 520 in the figure.
  • Processor 510 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 510 or an instruction in a form of software.
  • the processor 510 described above may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 550, and the processor 510 reads the information in the memory 550 and performs the steps of the above method in combination with its hardware.
  • the transceiver 540 is configured to receive, by the 3D application server, interaction information for rendering a target simulation object, where the target simulation object is used to send a value package to a simulated object in the 3D community;
  • the image processor 5102 is configured to render the target simulation object in the 3D community according to the interaction information for rendering a target simulation object.
  • the transceiver 540 is configured to receive the value packet information sent by the 3D application server.
  • the image processor 5102 is configured to render the value package on a moving track of the target simulation object according to the value package information.
  • the transceiver 540 is configured to: when a specific simulation object in the 3D community acquires a value package sent by the target simulation object, receive a notification message, sent by the 3D application server, indicating that the value package has been obtained, where the specific simulation object is the simulation object corresponding to the second user equipment.
  • the transceiver 540 is configured to receive interaction information that is sent by the 3D application server and used to render an environment object.
  • the image processor 5102 is configured to render the environment object in the 3D community according to the interaction information for rendering an environment object.
  • the above user equipment 50 can be understood by referring to the related description in the parts of FIG. 1 to FIG. 7 , and no further description is made herein.
  • FIG. 12 is a schematic structural diagram of a 3D application server 60 according to an embodiment of the present invention.
  • the 3D application server 60 is applied to a 3D application system, where the 3D application system further includes a video source server, a first user device and a second user device; the first user device is configured to respond to interaction operations of the anchor;
  • the 3D community in the 3D application presented by the second user device includes simulation objects and a virtual screen on which the simulation objects view the video content broadcast by the anchor; the 3D application server 60 includes a processor 610, a memory 650 and a transceiver 630.
  • the memory 650 can include read only memory and random access memory and provides operational instructions and data to the processor 610. A portion of the memory 650 can also include non-volatile random access memory (NVRAM).
  • the memory 650 stores the following elements, executable modules or data structures, or a subset thereof, or their extended set:
  • the operation instruction can be stored in the operating system
  • the transceiver 630 is configured to obtain an information interaction request from a video source server, where the information interaction request is uploaded to the video source server by the first user equipment;
  • the processor 610 is configured to generate interaction information according to the information interaction request.
  • the transceiver 630 is further configured to acquire video content uploaded by the first user equipment from the video source server, and send the uploaded video content to the second user equipment; and to the second The user equipment sends the interaction information, where the interaction information is used by the second user equipment to render an object corresponding to the interaction information in the 3D community.
  • the 3D application server provided by this embodiment of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
  • the processor 610 controls the operation of the 3D application server 60, which may also be referred to as a CPU (Central Processing Unit).
  • Memory 650 can include read only memory and random access memory and provides instructions and data to processor 610. A portion of the memory 650 can also include non-volatile random access memory (NVRAM).
  • the components of the 3D application server 60 are coupled together by a bus system 620 in a specific application.
  • the bus system 620 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 620 in the figure.
  • Processor 610 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 610 or an instruction in a form of software.
  • the processor 610 described above may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
  • the general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 650, and the processor 610 reads the information in the memory 650 and performs the steps of the above method in combination with its hardware.
  • the processor 610 is configured to: when the information interaction request is a target simulation object generation request, generate the interaction information of the target simulation object according to the target simulation object generation request;
  • the transceiver 630 is further configured to send the interaction information of the target simulation object to the second user equipment, where the interaction information of the target simulation object is used by the second user equipment to render the target in the 3D community. Simulate the object.
  • the transceiver 630 is further configured to obtain a value packet sending request from the video source server, where the value packet sending request is uploaded to the video source server by the first user equipment;
  • the processor 610 is further configured to generate interaction information of the value packet according to the value packet sending request.
  • the transceiver 630 is further configured to send the interaction information of the value package to the second user equipment, where the interaction information of the value package is used by the second user equipment to render the value package on the moving track of the target simulation object.
  • the processor 610 is further configured to: monitor a specific simulation object in the 3D community to obtain a value packet sent by the target simulation object;
  • the transceiver 630 is further configured to: when a specific simulation object in the 3D community acquires a value packet sent by the target simulation object, send, to the second user equipment, a notification message that the value packet has been obtained, where the specific The simulation object is a simulation object corresponding to the second user equipment.
  • the processor 610 is configured to: when the information interaction request is an environment object rendering request, generate interaction information of the environment object according to the environment object rendering request;
  • the transceiver 630 is further configured to send the interaction information of the environment object to the second user equipment, where the interaction information of the environment object is used by the second user equipment to render the environment object in the 3D community.
  • the above 3D application server 60 can be understood by referring to the related description in the parts of FIG. 1 to FIG. 7 , and no further description is made herein.
  • the program may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, or an optical disc.

Abstract

The present invention discloses an information interaction method, including: a 3D application server obtains an information interaction request from a video source server, where the information interaction request is uploaded to the video source server by the first user equipment; generates interaction information according to the information interaction request; and sends the interaction information to the second user equipment; the second user equipment receives the interaction information sent by the 3D application server and, according to the interaction information, renders in the 3D community the object corresponding to the interaction information. The information interaction method provided by the embodiments of the present invention enables an anchor to interact with viewers in the 3D community of a 3D application, thereby increasing the diversity of interaction.

Description

Information interaction method, device and system
This application claims priority to Chinese Patent Application No. 201610120316.3, filed with the Chinese Patent Office on March 3, 2016 and entitled "Information interaction method, device and system", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of 3D technologies, and in particular to an information interaction method, device and system.
Background
In the prior art, interactive applications involving three-dimensional (3D) scenes are already very common. A 3D application system usually includes a user equipment and a 3D application server; the user equipment can obtain data of the interactive application from the 3D application server and present the interactive application.
Anchor videos broadcast live over the Internet are also already very common in the prior art, but at present an Internet live-broadcast anchor can only exchange text with users on the corresponding live platform and cannot yet be integrated into a 3D application.
Summary
Embodiments of the present invention provide an information interaction method by which an anchor can interact with viewers in a 3D community of a 3D application, thereby increasing the diversity of interaction. Embodiments of the present invention further provide a corresponding user equipment, server and system.
A first aspect of the present invention provides an information interaction method, applied to a 3D application system, where the 3D application system includes a 3D application server, a video source server, a first user equipment and a second user equipment, and a 3D community in a 3D application displayed by the second user equipment includes simulation objects and a virtual screen. The method includes:
the second user equipment acquiring, from the video source server, video content uploaded by the first user equipment, and displaying the video content on the virtual screen;
the second user equipment receiving interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to an interaction request uploaded by the first user equipment to the video source server; and
the second user equipment rendering, in the 3D community according to the interaction information, an object corresponding to the interaction information.
A second aspect of the present invention provides an information interaction method, applied to a 3D application system, where the 3D application system includes a 3D application server, a video source server, a first user equipment and a second user equipment, and a 3D community in a 3D application displayed by the second user equipment includes simulation objects and a virtual screen. The method includes:
the 3D application server acquiring, from the video source server, video content uploaded by the first user equipment, and sending the uploaded video content to the second user equipment, so that the video content is displayed on the virtual screen of the second user equipment;
the 3D application server receiving an information interaction request from the video source server, where the information interaction request is uploaded by the first user equipment to the video source server;
the 3D application server generating interaction information according to the information interaction request; and
the 3D application server sending the interaction information to the second user equipment, where the interaction information is used by the second user equipment to render, in the 3D community, an object corresponding to the interaction information.
A third aspect of the present invention provides a user equipment, where the user equipment is a second user equipment in a 3D application system, the 3D application system further includes a 3D application server, a video source server and a first user equipment, and a 3D community in a 3D application displayed by the second user equipment includes simulation objects and a virtual screen. The user equipment includes:
an acquiring unit, configured to acquire, from the video source server, video content uploaded by the first user equipment, and display the video content on the virtual screen;
a receiving unit, configured to receive interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to an interaction request uploaded by the first user equipment to the video source server; and
a rendering unit, configured to render, in the 3D community according to the interaction information received by the receiving unit, an object corresponding to the interaction information.
A fourth aspect of the present invention provides a server, applied to a 3D application system, where the 3D application system further includes a video source server, a first user equipment and a second user equipment, and a 3D community in a 3D application displayed by the second user equipment includes simulation objects and a virtual screen. The 3D application server includes:
an acquiring unit, configured to acquire, from the video source server, video content uploaded by the first user equipment, and send the uploaded video content to the second user equipment, so that the video content is displayed on the virtual screen of the second user equipment;
a receiving unit, configured to receive an information interaction request from the video source server, where the information interaction request is uploaded by the first user equipment to the video source server;
a generating unit, configured to generate interaction information according to the information interaction request received by the receiving unit; and
a sending unit, configured to send the interaction information generated by the generating unit to the second user equipment, where the interaction information is used by the second user equipment to render, in the 3D community, an object corresponding to the interaction information.
A fifth aspect of the present invention provides a 3D application system, including: a 3D application server, a video source server, a first user equipment and a second user equipment, where a 3D community in a 3D application displayed by the second user equipment includes simulation objects and a virtual screen;
the second user equipment is the user equipment according to the third aspect above; and
the 3D application server is the 3D application server according to the fourth aspect above.
Compared with the prior art, in which the anchor can interact with viewers only through flat text, the information interaction method provided by the embodiments of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person skilled in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of an example in which an anchor controls the environment of a 3D community according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a process of presenting the video content of an anchor's live broadcast on a virtual screen according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of an information interaction method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an example in which an anchor distributes gifts to a 3D community according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another embodiment of an information interaction method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another embodiment of an information interaction method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an embodiment of a user equipment according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an embodiment of a 3D application server according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of another embodiment of a 3D application server according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another embodiment of a user equipment according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of another embodiment of a 3D application server according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention provide an information interaction method by which an anchor can interact with viewers in a 3D community of a 3D application, thereby increasing the diversity of interaction. Embodiments of the present invention further provide a corresponding user equipment, server and system, which are described in detail below.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
For ease of understanding, the terms involved in this application are briefly introduced as follows:
The 3D application system in the embodiments of the present invention can be understood as a 3D game system.
3D game: a stereoscopic electronic game produced on the basis of three-dimensional computer graphics, including but not limited to multiplayer online network 3D games, single-player 3D games, and virtual-reality game systems built on a 3D game system; it applies universally across platforms, covering 3D games on game console platforms, mobile game platforms and personal computer game platforms.
3D community: a virtual community environment in a 3D game, i.e. a game environment produced on the basis of three-dimensional computer graphics. The 3D community may include the simulation objects corresponding to players in the game, and the 3D community in this application includes a virtual screen, which is preferably a large virtual screen similar to one deployed at an outdoor venue.
Game anchor: an individual who reports on and commentates games on the Internet or other electronic media.
The present invention relies on the technique of directly displaying an Internet live video inside a 3D community, and builds a communication mechanism on top of it, allowing the anchor of the Internet live video and the viewers in the 3D community to engage in richer interaction behaviors: in addition to the direct presentation of video and audio, these include controlling the weather in the 3D community, launching gorgeous fireworks, and other interaction modes. For example, at Christmas the anchor can control the weather to make it snow in the 3D community while setting off fireworks in the community, which greatly enhances the festive atmosphere and thus greatly improves viewer engagement.
The above is a process in which an anchor sets off fireworks for the viewers of the 3D community. Besides fireworks, through a similar communication mechanism the anchor can control the weather of the 3D community. As shown in FIG. 1, the anchor can control the weather in the 3D community, and can also set the 3D community to night and light up colored lanterns, greatly increasing the interactivity between the anchor and the viewers.
FIG. 2 is a schematic diagram of an embodiment of a 3D application system according to an embodiment of the present invention.
As shown in FIG. 2, the 3D application system provided by this embodiment of the present invention includes: a 3D application server, a video source server, a first user equipment used by the anchor, and second user equipments used by players, of which there may be more than one. The anchor performs a live game broadcast through the first user equipment, and the live game video stream is uploaded to the video source server. The anchor has registered with the game server in advance, so the game server stores the source address of the content currently being broadcast. Players can also interact with the anchor, so during the live broadcast there may also be interaction content between the anchor and the simulation objects; therefore, the user equipment can obtain the live video stream and the interaction content from the content providing server.
After obtaining the live video stream and the interaction content, the second user equipment performs audio and video rendering to obtain the corresponding audio content and video content, plays the audio content in the 3D application, and displays the video content and the interaction content through the virtual screen.
When interacting with viewers, the anchor sends an interaction request to the video source server through the first user equipment, the video source server forwards the interaction request to the 3D application server, and the 3D application server generates interaction information according to the interaction request and sends the interaction information to the second user equipment. The second user equipment renders, in the 3D community according to the interaction information, the object corresponding to the interaction information.
In the embodiments of the present invention, the object corresponding to the interaction information may include a simulated character object or an environment object, where environment objects may include objects such as weather, colored lanterns and fireworks.
The process of presenting content on the virtual screen of the 3D community in the 3D application on the second user equipment can be understood with reference to FIG. 3.
101. An anchor is currently broadcasting a live video on the Internet, and submits the live video stream to the video stream server through the first user equipment.
102. After the user opens the 3D application on the second user equipment, the application starts initializing the 3D rendering engine in the 3D application.
103. The application automatically requests the video source address of the anchor currently broadcasting.
104. The 3D application server delivers the video source address to the audio/video rendering module.
105. The audio/video rendering module requests the data stream of the live video from the content providing server.
106. The live video server returns the video data stream to the audio/video rendering module.
107. The audio/video rendering module renders audio using the audio/video data.
108. The audio/video rendering module submits the audio data to the audio engine in the 3D application.
109. The audio/video rendering module renders a single video frame image using the audio/video data.
110. The audio/video rendering module submits the image data to the 3D rendering engine in the 3D application.
111. The rendering engine uses the rendered still-frame data to play the audio content and present the 3D video picture to the user.
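The playback pipeline of steps 105-111 can be sketched as follows. This is a minimal stand-in: the class names, the chunk layout, and the `fetch`/`submit` interfaces are assumptions, since the embodiment does not specify data formats.

```python
class StreamServer:
    """Stand-in for the live video server (steps 105-106): returns one
    combined audio-plus-frame chunk for the requested source address."""
    def fetch(self, source_addr):
        return {"audio": f"audio@{source_addr}", "frame": f"frame@{source_addr}"}

class Engine:
    """Stand-in for the audio engine and the 3D rendering engine; it only
    records what was submitted to it."""
    def __init__(self):
        self.submitted = []
    def submit(self, data):
        self.submitted.append(data)

def render_once(stream_server, audio_engine, render_engine, source_addr):
    """One pipeline iteration: request the stream chunk (105-106), hand the
    audio to the audio engine (107-108), and hand the single video frame to
    the 3D rendering engine for the virtual screen (109-111)."""
    chunk = stream_server.fetch(source_addr)
    audio_engine.submit(chunk["audio"])
    render_engine.submit(chunk["frame"])

audio_engine, render_engine = Engine(), Engine()
render_once(StreamServer(), audio_engine, render_engine, "rtmp://example/live")
```

In the real system this loop would run per frame, with the rendering engine drawing each submitted image onto the virtual screen texture inside the 3D community.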
After the video content of the 3D community is rendered, the interaction process between the anchor and the viewers, i.e. an embodiment of information interaction in an embodiment of the present invention, can be understood with reference to FIG. 4.
201. The first user equipment reports an interaction request to the video source server.
202. The video source server forwards the interaction request to the 3D application server.
203. The 3D application server generates interaction information according to the interaction request.
204. The 3D application server sends the interaction information to the second user equipment.
205. The second user equipment renders, in the 3D community according to the interaction information, the object corresponding to the interaction information.
Optionally, the interaction request may be a target simulation object generation request, and the 3D application server generates interaction information of the target simulation object according to the target simulation object generation request.
The 3D application server sends the interaction information of the target simulation object to the second user equipment, where the interaction information of the target simulation object is used by the second user equipment to render the target simulation object in the 3D community.
The second user equipment receives the interaction information, sent by the 3D application server, for rendering the target simulation object.
The second user equipment renders the target simulation object in the 3D community according to the interaction information for rendering the target simulation object.
Optionally, the target simulation object is used to send value packages to the simulation objects in the 3D community. In this case, the method further includes:
the 3D application server acquiring a value package sending request from the video source server, where the value package sending request is uploaded by the first user equipment to the video source server;
the 3D application server generating interaction information of the value package according to the value package sending request;
the 3D application server sending the interaction information of the value package to the second user equipment, where the interaction information of the value package is used by the second user equipment to render the value package on the moving track of the target simulation object;
the second user equipment receiving the value package information sent by the 3D application server; and
the second user equipment rendering the value package on the moving track of the target simulation object according to the value package information.
Optionally, the method further includes:
when the 3D application server detects that a specific simulation object in the 3D community acquires a value package sent by the target simulation object, the 3D application server sending, to the second user equipment, a notification message indicating that the value package has been obtained, where the specific simulation object is the simulation object corresponding to the second user equipment; and
when the specific simulation object in the 3D community acquires the value package sent by the target simulation object, the second user equipment receiving the notification message, sent by the 3D application server, indicating that the value package has been obtained, where the specific simulation object is the simulation object corresponding to the second user equipment.
Optionally, the information interaction request is an environment object rendering request;
the 3D application server generates interaction information of the environment object according to the environment object rendering request;
the 3D application server sends the interaction information of the environment object to the second user equipment, where the interaction information of the environment object is used by the second user equipment to render the environment object in the 3D community;
the second user equipment receives the interaction information, sent by the 3D application server, for rendering the environment object; and
the second user equipment renders the environment object in the 3D community according to the interaction information for rendering the environment object.
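The server-side generation of interaction information for the request types described above can be sketched as a simple dispatch. The dictionary layout and field names are illustrative assumptions; the embodiment fixes only the request categories (target simulation object, value package, environment object), not a data format.

```python
def generate_interaction_info(request):
    """Map an information interaction request uploaded by the first user
    equipment to the interaction information broadcast to second user
    equipments, one branch per request category in the embodiment."""
    kind = request["kind"]
    if kind == "target_simulation_object":
        return {"render": "target_simulation_object", "anchor": request["anchor"]}
    if kind == "value_package":
        return {"render": "value_package", "contents": request["contents"]}
    if kind == "environment_object":
        return {"render": "environment_object", "effect": request["effect"]}
    raise ValueError(f"unknown interaction request kind: {kind}")

snow_info = generate_interaction_info({"kind": "environment_object", "effect": "snow"})
```

Each returned structure is what the second user equipment would consume to decide which renderer (character, value package, or environment effect) handles the interaction.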
For ease of understanding, two embodiments are used below to respectively describe the process in which the anchor dispatches a messenger to distribute gifts to the simulation objects in the 3D community, and the process in which the anchor controls the weather in the 3D community.
FIG. 5 is a schematic diagram of the anchor dispatching a messenger in the 3D community. As shown in FIG. 5, the anchor dispatches a messenger representing the anchor in the 3D community; the messenger is the target simulation object, and it can walk in the 3D community and distribute gifts to the simulation objects in the 3D community.
The interaction process of the messenger distributing gifts can be understood with reference to the description of FIG. 6.
As shown in FIG. 6, another embodiment of information interaction provided by an embodiment of the present invention includes:
301. The anchor logs in to the community management page through the first user equipment.
302. The 3D application server authenticates the identity of the anchor.
It is determined whether the anchor is registered and is a legitimate anchor.
303. After the authentication passes, an identity authentication result is returned.
304. The anchor selects a 3D community through the first user equipment.
305. The 3D application server allocates the selected 3D community to the anchor.
In this application, steps 301-305 may be implemented by integrating the corresponding 3D community page management, 3D community portal and 3D community allocation functions on the 3D application server. Alternatively, the functions of steps 301-305 may be performed by three independent servers: a management page server, a 3D community portal server and a 3D community server, where the management page server is responsible for displaying the management page and presenting the interface to the user; the 3D community portal server is responsible for interacting with the management page server and for storing data such as anchor information; and the 3D community server receives interaction requests from the anchor and forwards them to the 3D application server.
306. The anchor sends a messenger request to the 3D application server through the first user equipment.
307. The 3D application server generates messenger interaction information according to the messenger request.
308. The 3D application server broadcasts the messenger interaction information to the second user equipment.
309. The second user equipment renders the messenger in the 3D community according to the messenger interaction information.
310. The 3D application server sends the messenger's moving route to the second user equipment.
311. The second user equipment renders the moving messenger according to the messenger's moving route.
312. The 3D application server sends prop information to the second user equipment.
In addition to dropping props, the messenger can also drop other gifts; the value package in this application may include props as well as other gifts.
313. The second user equipment renders the props dropped by the messenger according to the prop information.
314. The player who picks up the dropped prop is identified.
315. The prop list is updated.
316. The player is notified that the prop was picked up successfully.
Through the above description, this application implements an automated, immersive and entertaining gift-giving system, which greatly enhances user interaction with the anchor in the virtual community.
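Steps 314-316 — identifying the player who picked up a dropped prop, updating the prop list, and notifying the player — can be sketched as follows. The data structures (a prop dictionary and a list of pickup events) are assumptions made for illustration; the embodiment does not specify them.

```python
def settle_pickups(dropped_props, pickup_events):
    """Step 314: match each pickup event to a dropped prop; step 315:
    remove picked-up props from the prop list; step 316: produce a
    success notification for each player whose pickup was valid."""
    remaining = dict(dropped_props)  # prop_id -> prop description
    notifications = []
    for player, prop_id in pickup_events:
        if prop_id in remaining:     # ignore stale or duplicate pickups
            remaining.pop(prop_id)
            notifications.append((player, prop_id, "pickup_success"))
    return remaining, notifications

remaining, notes = settle_pickups(
    {"p1": "hat", "p2": "rose"},
    [("alice", "p1"), ("bob", "p3")],  # bob's target prop does not exist
)
```

Running this settlement on the server side keeps the prop list authoritative, so two players cannot both be credited with the same drop.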
The process in which the anchor controls the weather in the 3D community in this application is described below with reference to FIG. 7.
As shown in FIG. 7, another embodiment of information interaction provided by an embodiment of the present invention includes:
401. The anchor logs in to the community management page through the first user equipment.
402. The 3D application server authenticates the identity of the anchor.
It is determined whether the anchor is registered and is a legitimate anchor.
403. After the authentication passes, an identity authentication result is returned.
404. The anchor selects a 3D community through the first user equipment.
405. The 3D application server allocates the selected 3D community to the anchor.
Steps 401-405 are the same as steps 301-305 in the above embodiment.
406. The anchor sends a weather setting request to the 3D application server through the first user equipment.
407. The 3D application server generates weather interaction information according to the weather setting request.
408. The 3D application server broadcasts the weather interaction information to the second user equipment.
409. The second user equipment filters the weather parameters according to the weather interaction information.
The 3D rendering engine first applies a linear transition to the relevant coefficients in the weather parameters, so that the change does not feel abrupt to users in the 3D community.
410. The second user equipment sets the time over which the weather changes.
411. The second user equipment calculates atmospheric scattering.
The atmospheric scattering is calculated according to the set time; because the color of the sky is determined by atmospheric scattering, the Mie-Rayleigh algorithm is used here to render the sky color.
412. The second user equipment determines the rendering result of the weather.
413. The second user equipment sets the particle type (rain or snow).
Rain or snow is rendered according to the set particle type. Here, the texture file (Valve Textures File, VTF) feature of the graphics processing unit (GPU) is used, so that the positions of the rain and snow particles are computed directly on the GPU and rendered in a single batch.
414. The second user equipment calculates special effects.
415. The second user equipment determines the final rendering result.
416. The second user equipment displays the final rendering result.
Other interaction processes, such as setting off fireworks in the 3D community, are basically the same as the process in FIG. 7 above; the only difference is that the fireworks effect is rendered according to the fireworks interaction information.
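The role of atmospheric scattering in step 411 can be illustrated with a toy calculation. This shows only the Rayleigh 1/λ⁴ intuition behind the sky's blue tint; it is not the engine's full Mie-Rayleigh model, and the RGB wavelength values are conventional choices, not taken from the embodiment.

```python
def rayleigh_sky_tint(wavelengths_nm=(680.0, 550.0, 440.0)):
    """Toy Rayleigh-scattering weights: scattered intensity scales as
    1/lambda^4, so the shortest (blue) wavelength dominates the clear-sky
    color. Returns normalized (R, G, B) contributions summing to 1."""
    weights = [w ** -4 for w in wavelengths_nm]
    total = sum(weights)
    return tuple(w / total for w in weights)

r, g, b = rayleigh_sky_tint()
```

A full implementation would additionally weight these terms by the sun position derived from the set time and add a Mie term for haze, which is what shifts the sky toward red at sunrise and sunset.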
The above describes the information interaction method. The devices in the embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to FIG. 8, the user equipment provided by this embodiment of the present invention is the second user equipment in a 3D application system, where the 3D application system includes a 3D application server, a video source server, a first user equipment and the second user equipment; the first user equipment is configured to respond to interaction operations of the anchor; and the 3D community in the 3D application displayed by the second user equipment includes simulation objects and a virtual screen on which the simulation objects view video content. The user equipment 50 includes:
an acquiring unit 501, configured to acquire, from the video source server, the video content uploaded by the first user equipment, and display it on the virtual screen;
a receiving unit 502, configured to: after the acquiring unit 501 acquires the video content, receive the interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server; and
a rendering unit 503, configured to render, in the 3D community according to the interaction information received by the receiving unit 502, the object corresponding to the interaction information.
In the user equipment 50 provided by this embodiment of the present invention, the acquiring unit 501 acquires, from the video source server, the video content of the anchor's live broadcast uploaded by the first user equipment, and displays it on the virtual screen; after the acquiring unit 501 acquires the video content, the receiving unit 502 receives the interaction information sent by the 3D application server, where the interaction information is generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server; and the rendering unit 503 renders, in the 3D community according to the interaction information received by the receiving unit 502, the object corresponding to the interaction information. Compared with the prior art, in which the anchor can interact with viewers only through flat text, the user equipment provided by this embodiment of the present invention enables the anchor to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
Optionally, the receiving unit 502 is configured to receive interaction information, sent by the 3D application server, for rendering a target simulation object;
the rendering unit 503 is configured to render the target simulation object in the 3D community according to the interaction information, received by the receiving unit 502, for rendering the target simulation object.
Optionally, the target simulation object is used to send value packages to the simulation objects in the 3D community. In this case, the receiving unit 502 is further configured to receive the value package information sent by the 3D application server;
the rendering unit 503 is configured to render the value package on the moving track of the target simulation object according to the value package information.
Optionally, the receiving unit 502 is further configured to: when a simulation object in the 3D community acquires a value package sent by the target simulation object, receive a notification message, sent by the 3D application server, indicating that the value package has been obtained by the simulation object.
Optionally, the receiving unit 502 is configured to receive interaction information, sent by the 3D application server, for rendering an environment object;
the rendering unit 503 is configured to render the environment object in the 3D community according to the interaction information, received by the receiving unit 502, for rendering the environment object.
Referring to FIG. 9, the 3D application server 60 provided by this embodiment of the present invention is applied to a 3D application system. The 3D application system further includes a video source server, a first user equipment, and a second user equipment. The first user equipment is configured to respond to interaction operations of a broadcaster. The 3D community in the 3D application displayed by the second user equipment includes simulated objects and a virtual screen on which the simulated objects watch the video content live-streamed by the broadcaster. The 3D application server includes:
an obtaining unit 601, configured to obtain, from the video source server, the video content uploaded by the first user equipment, and send the uploaded video content to the second user equipment for display on the virtual screen of the second user equipment;
a receiving unit 602, configured to receive an information interaction request from the video source server, the information interaction request being uploaded by the first user equipment to the video source server;
a generating unit 603, configured to generate interaction information according to the information interaction request received by the receiving unit 602; and
a sending unit 604, configured to send, to the second user equipment, the interaction information generated by the generating unit 603, the interaction information being used by the second user equipment to render, in the 3D community, the object corresponding to the interaction information.
In this embodiment of the present invention, the obtaining unit 601 obtains, from the video source server, the video content uploaded by the first user equipment and sends it to the second user equipment for display on the virtual screen of the second user equipment; the receiving unit 602 receives the information interaction request from the video source server, the information interaction request being uploaded by the first user equipment to the video source server; the generating unit 603 generates the interaction information according to the information interaction request received by the receiving unit 602; and the sending unit 604 sends, to the second user equipment, the interaction information generated by the generating unit 603, the interaction information being used by the second user equipment to render, in the 3D community, the object corresponding to the interaction information. Compared with the prior art, in which a broadcaster can interact with viewers only through flat text, the 3D application server provided by this embodiment of the present invention enables the broadcaster to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
Optionally, the information interaction request is a target simulated object generation request; and the generating unit 603 is configured to generate interaction information of the target simulated object according to the target simulated object generation request;
the sending unit 604 is configured to send the interaction information of the target simulated object to the second user equipment, so that the second user equipment renders the target simulated object in the 3D community in response to the interaction information of the target simulated object.
Optionally, the receiving unit 602 is further configured to receive a value packet sending request from the video source server, the value packet sending request being uploaded by the first user equipment to the video source server;
the generating unit 603 is further configured to generate interaction information of the value packet according to the value packet sending request; and
the sending unit 604 is further configured to send the interaction information of the value packet to the second user equipment, so that the second user equipment renders the value packet on the movement trajectory of the target simulated object in response to the interaction information of the value packet.
Optionally, referring to FIG. 10, the 3D application server provided by this embodiment of the present invention further includes a monitoring unit 605.
The monitoring unit 605 is configured to monitor whether a simulated object in the 3D community obtains a value packet sent out by the target simulated object; and
the sending unit 604 is configured to: when the monitoring unit 605 detects that a simulated object in the 3D community obtains a value packet sent out by the target simulated object, send, to the second user equipment, a notification message indicating that the value packet has been obtained by the simulated object.
Optionally, the information interaction request is an environment object rendering request, and the generating unit 603 is configured to generate interaction information of the environment object according to the environment object rendering request; and
the sending unit 604 is configured to send the interaction information of the environment object to the second user equipment, so that the second user equipment renders the environment object in the 3D community in response to the interaction information of the environment object.
The foregoing descriptions of the user equipment and the 3D application server can be understood with reference to the related descriptions of FIG. 1 to FIG. 7, and details are not repeated here.
FIG. 11 is a schematic structural diagram of the user equipment 50 provided by this embodiment of the present invention. The user equipment 50 is the second user equipment in a 3D application system. The 3D application system includes a 3D application server, a video source server, a first user equipment, and the second user equipment. The first user equipment is configured to respond to interaction operations of a broadcaster. The 3D community in the 3D application displayed by the second user equipment includes simulated objects and a virtual screen on which the simulated objects watch video content. The user equipment 50 in this application includes a central processing unit (CPU) 5101, a graphics processing unit (GPU) 5102, a transceiver 540, a memory 550, and an input/output (I/O) device 530. The I/O device 530 may be a keyboard or a mouse. The GPU 5102 is used for graphics rendering. The memory 550 may include a read-only memory and a random access memory, and provides operation instructions and data to the processor 510. A part of the memory 550 may further include a non-volatile random access memory (NVRAM).
In some implementations, the memory 550 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In this embodiment of the present invention, by invoking the operation instructions stored in the memory 550 (the operation instructions may be stored in an operating system):
the transceiver 540 is configured to obtain, from the video source server, the video content of the broadcaster's live stream uploaded by the first user equipment, for display on the virtual screen, and to receive the interaction information sent by the 3D application server, the interaction information being generated by the 3D application server according to the interaction request uploaded by the first user equipment to the video source server; and
the graphics processing unit 5102 is configured to render, in the 3D community according to the interaction information, the object corresponding to the interaction information.
Compared with the prior art, in which a broadcaster can interact with viewers only through flat text, the user equipment provided by this embodiment of the present invention enables the broadcaster to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
The central processing unit 5101 controls the operation of the user equipment 50. The memory 550 may include a read-only memory and a random access memory, and provides instructions and data to the central processing unit 5101. A part of the memory 550 may further include an NVRAM. In a specific application, the components of the user equipment 50 are coupled together through a bus system 520. In addition to a data bus, the bus system 520 may include a power bus, a control bus, a status signal bus, and the like. For clarity, however, the various buses are all labeled as the bus system 520 in the figure.
The method disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 510. The processor 510 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the foregoing method may be completed by integrated logic circuits in hardware or by instructions in software form in the processor 510. The processor 510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 550, and the processor 510 reads the information in the memory 550 and completes the steps of the foregoing method in combination with its hardware.
Optionally, the transceiver 540 is configured to receive interaction information sent by the 3D application server for rendering a target simulated object, the target simulated object being configured to send value packets to the simulated objects in the 3D community; and
the graphics processing unit 5102 is configured to render the target simulated object in the 3D community according to the interaction information for rendering the target simulated object.
Optionally, the transceiver 540 is configured to receive value packet information sent by the 3D application server; and
the graphics processing unit 5102 is configured to render the value packet on the movement trajectory of the target simulated object according to the value packet information.
Optionally, the transceiver 540 is configured to: when a specific simulated object in the 3D community obtains a value packet sent out by the target simulated object, receive a notification message sent by the 3D application server indicating that the value packet has been obtained, the specific simulated object being the simulated object corresponding to the second user equipment.
Optionally, the transceiver 540 is configured to receive interaction information sent by the 3D application server for rendering an environment object; and
the graphics processing unit 5102 is configured to render the environment object in the 3D community according to the interaction information for rendering the environment object.
The foregoing user equipment 50 can be understood with reference to the related descriptions of FIG. 1 to FIG. 7, and details are not repeated here.
FIG. 12 is a schematic structural diagram of the 3D application server 60 provided by this embodiment of the present invention. The 3D application server 60 is applied to a 3D application system. The 3D application system further includes a video source server, a first user equipment, and a second user equipment. The first user equipment is configured to respond to interaction operations of a broadcaster. The 3D community in the 3D application displayed by the second user equipment includes simulated objects and a virtual screen on which the simulated objects watch the video content live-streamed by the broadcaster. The 3D application server 60 includes a processor 610, a memory 650, and a transceiver 630. The memory 650 may include a read-only memory and a random access memory, and provides operation instructions and data to the processor 610. A part of the memory 650 may further include a non-volatile random access memory (NVRAM).
In some implementations, the memory 650 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In this embodiment of the present invention, by invoking the operation instructions stored in the memory 650 (the operation instructions may be stored in an operating system):
the transceiver 630 is configured to obtain an information interaction request from the video source server, the information interaction request being uploaded by the first user equipment to the video source server;
the processor 610 is configured to generate interaction information according to the information interaction request; and
the transceiver 630 is further configured to obtain, from the video source server, the video content uploaded by the first user equipment; send the uploaded video content to the second user equipment; and send the interaction information to the second user equipment, the interaction information being used by the second user equipment to render, in the 3D community, the object corresponding to the interaction information.
Compared with the prior art, in which a broadcaster can interact with viewers only through flat text, the 3D application server provided by this embodiment of the present invention enables the broadcaster to interact with viewers in the 3D community of the 3D application, thereby increasing the diversity of interaction.
The processor 610 controls the operation of the 3D application server 60, and may also be referred to as a CPU (central processing unit). The memory 650 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610. A part of the memory 650 may further include an NVRAM. In a specific application, the components of the 3D application server 60 are coupled together through a bus system 620. In addition to a data bus, the bus system 620 may include a power bus, a control bus, a status signal bus, and the like. For clarity, however, the various buses are all labeled as the bus system 620 in the figure.
The method disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 610. The processor 610 may be an integrated circuit chip with signal processing capability. In an implementation process, the steps of the foregoing method may be completed by integrated logic circuits in hardware or by instructions in software form in the processor 610. The processor 610 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 650, and the processor 610 reads the information in the memory 650 and completes the steps of the foregoing method in combination with its hardware.
Optionally, the processor 610 is configured to: when the information interaction request is a target simulated object generation request, generate interaction information of the target simulated object according to the target simulated object generation request; and
the transceiver 630 is further configured to send the interaction information of the target simulated object to the second user equipment, the interaction information of the target simulated object being used by the second user equipment to render the target simulated object in the 3D community.
Optionally, the transceiver 630 is further configured to obtain a value packet sending request from the video source server, the value packet sending request being uploaded by the first user equipment to the video source server;
the processor 610 is further configured to generate interaction information of the value packet according to the value packet sending request; and
the transceiver 630 is further configured to send the interaction information of the value packet to the second user equipment, the interaction information of the value packet being used by the second user equipment to render the value packet on the movement trajectory of the target simulated object.
Optionally, the processor 610 is further configured to monitor whether a specific simulated object in the 3D community obtains a value packet sent out by the target simulated object; and
the transceiver 630 is further configured to: when it is detected that the specific simulated object in the 3D community obtains a value packet sent out by the target simulated object, send, to the second user equipment, a notification message indicating that the value packet has been obtained, the specific simulated object being the simulated object corresponding to the second user equipment.
Optionally, the processor 610 is configured to: when the information interaction request is an environment object rendering request, generate interaction information of the environment object according to the environment object rendering request; and
the transceiver 630 is further configured to send the interaction information of the environment object to the second user equipment, the interaction information of the environment object being used by the second user equipment to render the environment object in the 3D community.
The foregoing 3D application server 60 can be understood with reference to the related descriptions of FIG. 1 to FIG. 7, and details are not repeated here.
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the foregoing embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The information interaction method, user equipment, 3D application server, and system provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the foregoing embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.

Claims (21)

  1. An information interaction method, applied to a three-dimensional (3D) application system, the 3D application system comprising a 3D application server, a video source server, a first user equipment, and a second user equipment, wherein a 3D community in a 3D application displayed by the second user equipment comprises simulated objects and a virtual screen, the method comprising:
    obtaining, by the second user equipment from the video source server, video content uploaded by the first user equipment, and displaying the video content on the virtual screen;
    receiving, by the second user equipment, interaction information sent by the 3D application server, the interaction information being generated by the 3D application server according to an interaction request uploaded by the first user equipment to the video source server; and
    rendering, by the second user equipment in the 3D community according to the interaction information, an object corresponding to the interaction information.
  2. The method according to claim 1, wherein the receiving, by the second user equipment, of the interaction information sent by the 3D application server comprises:
    receiving, by the second user equipment, interaction information sent by the 3D application server for rendering a target simulated object; and
    the rendering, by the second user equipment in the 3D community according to the interaction information, of the object corresponding to the interaction information comprises:
    rendering, by the second user equipment, the target simulated object in the 3D community according to the interaction information for rendering the target simulated object.
  3. The method according to claim 2, wherein the target simulated object is configured to send value packets to the simulated objects in the 3D community, and
    the method further comprises:
    receiving, by the second user equipment, value packet information sent by the 3D application server; and
    rendering, by the second user equipment according to the value packet information, the value packet on a movement trajectory of the target simulated object.
  4. The method according to claim 3, further comprising:
    when a simulated object in the 3D community obtains the rendered value packet, receiving, by the second user equipment, a notification message sent by the 3D application server indicating that the value packet has been obtained by the simulated object.
  5. The method according to claim 1, wherein the receiving, by the second user equipment, of the interaction information sent by the 3D application server comprises:
    receiving, by the second user equipment, interaction information sent by the 3D application server for rendering an environment object; and
    the rendering, by the second user equipment in the 3D community according to the interaction information, of the object corresponding to the interaction information comprises:
    rendering, by the second user equipment, the environment object in the 3D community according to the interaction information for rendering the environment object.
  6. An information interaction method, applied to a three-dimensional (3D) application system, the 3D application system comprising a 3D application server, a video source server, a first user equipment, and a second user equipment, wherein a 3D community in a 3D application displayed by the second user equipment comprises simulated objects and a virtual screen, the method comprising:
    obtaining, by the 3D application server from the video source server, video content uploaded by the first user equipment, and sending the uploaded video content to the second user equipment for display on the virtual screen of the second user equipment;
    receiving, by the 3D application server from the video source server, an information interaction request, the information interaction request being uploaded by the first user equipment to the video source server;
    generating, by the 3D application server, interaction information according to the information interaction request; and
    sending, by the 3D application server, the interaction information to the second user equipment, the interaction information being used by the second user equipment to render, in the 3D community, an object corresponding to the interaction information.
  7. The method according to claim 6, wherein the information interaction request is a target simulated object generation request;
    the generating, by the 3D application server, of the interaction information according to the information interaction request comprises:
    generating, by the 3D application server, interaction information of the target simulated object according to the target simulated object generation request; and
    the sending, by the 3D application server, of the interaction information to the second user equipment comprises:
    sending, by the 3D application server, the interaction information of the target simulated object to the second user equipment, so that the second user equipment renders the target simulated object in the 3D community in response to the interaction information of the target simulated object.
  8. The method according to claim 7, further comprising:
    obtaining, by the 3D application server from the video source server, a value packet sending request, the value packet sending request being uploaded by the first user equipment to the video source server;
    generating, by the 3D application server, interaction information of the value packet according to the value packet sending request; and
    sending, by the 3D application server, the interaction information of the value packet to the second user equipment, so that the second user equipment renders the value packet on the movement trajectory of the target simulated object in response to the interaction information of the value packet.
  9. The method according to claim 8, further comprising:
    when the 3D application server detects that a simulated object in the 3D community obtains the value packet, sending, to the second user equipment, a notification message indicating that the value packet has been obtained by the simulated object.
  10. The method according to claim 6, wherein the information interaction request is an environment object rendering request;
    the generating, by the 3D application server, of the interaction information according to the information interaction request comprises:
    generating, by the 3D application server, interaction information of the environment object according to the environment object rendering request; and
    the sending, by the 3D application server, of the interaction information to the second user equipment comprises:
    sending, by the 3D application server, the interaction information of the environment object to the second user equipment, so that the second user equipment renders the environment object in the 3D community in response to the interaction information of the environment object.
  11. A user equipment in a three-dimensional (3D) application system, the 3D application system further comprising a 3D application server, a video source server, and a first user equipment, wherein a 3D community in a 3D application displayed by the user equipment comprises simulated objects and a virtual screen, the user equipment comprising:
    an obtaining unit, configured to obtain, from the video source server, video content uploaded by the first user equipment, and display the video content on the virtual screen;
    a receiving unit, configured to receive interaction information sent by the 3D application server, the interaction information being generated by the 3D application server according to an interaction request uploaded by the first user equipment to the video source server; and
    a rendering unit, configured to render, in the 3D community according to the interaction information received by the receiving unit, an object corresponding to the interaction information.
  12. The user equipment according to claim 11, wherein:
    the receiving unit is configured to receive interaction information sent by the 3D application server for rendering a target simulated object; and
    the rendering unit is configured to render the target simulated object in the 3D community according to the interaction information for rendering the target simulated object received by the receiving unit.
  13. The user equipment according to claim 12, wherein the target simulated object is configured to send value packets to the simulated objects in the 3D community;
    the receiving unit is further configured to receive value packet information sent by the 3D application server; and
    the rendering unit is configured to render the value packet on a movement trajectory of the target simulated object according to the value packet information.
  14. The user equipment according to claim 13, wherein:
    the receiving unit is further configured to: when a simulated object in the 3D community obtains the rendered value packet, receive a notification message sent by the 3D application server indicating that the value packet has been obtained by the simulated object.
  15. The user equipment according to claim 11, wherein:
    the receiving unit is configured to receive interaction information sent by the 3D application server for rendering an environment object; and
    the rendering unit is configured to render the environment object in the 3D community according to the interaction information for rendering the environment object received by the receiving unit.
  16. A server, applied to a three-dimensional (3D) application system, the 3D application system further comprising a video source server, a first user equipment, and a second user equipment, wherein a 3D community in a 3D application displayed by the second user equipment comprises simulated objects and a virtual screen, the server comprising:
    an obtaining unit, configured to obtain, from the video source server, video content uploaded by the first user equipment, and send the uploaded video content to the second user equipment for display on the virtual screen of the second user equipment;
    a receiving unit, configured to receive an information interaction request from the video source server, the information interaction request being uploaded by the first user equipment to the video source server;
    a generating unit, configured to generate interaction information according to the information interaction request received by the receiving unit; and
    a sending unit, configured to send, to the second user equipment, the interaction information generated by the generating unit, the interaction information being used by the second user equipment to render, in the 3D community, an object corresponding to the interaction information.
  17. The server according to claim 16, wherein the information interaction request is a target simulated object generation request;
    the generating unit is configured to generate interaction information of the target simulated object according to the target simulated object generation request; and
    the sending unit is configured to send the interaction information of the target simulated object to the second user equipment, so that the second user equipment renders the target simulated object in the 3D community in response to the interaction information of the target simulated object.
  18. The server according to claim 17, wherein:
    the receiving unit is further configured to receive a value packet sending request from the video source server, the value packet sending request being uploaded by the first user equipment to the video source server;
    the generating unit is further configured to generate interaction information of the value packet according to the value packet sending request; and
    the sending unit is further configured to send the interaction information of the value packet to the second user equipment, so that the second user equipment renders the value packet on the movement trajectory of the target simulated object in response to the interaction information of the value packet.
  19. The server according to claim 18, further comprising a monitoring unit, wherein:
    the monitoring unit is configured to monitor whether a simulated object in the 3D community obtains the value packet; and
    the sending unit is configured to: when the monitoring unit detects that a simulated object in the 3D community obtains the value packet, send, to the second user equipment, a notification message indicating that the value packet has been obtained by the simulated object.
  20. The server according to claim 16, wherein the information interaction request is an environment object rendering request;
    the generating unit is configured to generate interaction information of the environment object according to the environment object rendering request; and
    the sending unit is configured to send the interaction information of the environment object to the second user equipment, so that the second user equipment renders the environment object in the 3D community in response to the interaction information of the environment object.
  21. A three-dimensional (3D) application system, comprising a 3D application server, a video source server, a first user equipment, and a second user equipment, wherein a 3D community in a 3D application displayed by the second user equipment comprises simulated objects and a virtual screen;
    the second user equipment is the user equipment according to any one of claims 11 to 15; and
    the 3D application server is the server according to any one of claims 16 to 20.
PCT/CN2017/075433 2016-03-03 2017-03-02 一种信息交互的方法、设备及系统 WO2017148410A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020187015519A KR102098669B1 (ko) 2016-03-03 2017-03-02 정보 상호작용 방법, 장치 및 시스템
US15/774,377 US10861222B2 (en) 2016-03-03 2017-03-02 Information interaction method, device, and system
JP2018528068A JP6727669B2 (ja) 2016-03-03 2017-03-02 情報インタラクション方法、デバイス、およびシステム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610120316.3A CN105610868B (zh) 2016-03-03 2016-03-03 一种信息交互的方法、设备及系统
CN201610120316.3 2016-03-03

Publications (1)

Publication Number Publication Date
WO2017148410A1 true WO2017148410A1 (zh) 2017-09-08

Family

ID=55990406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075433 WO2017148410A1 (zh) 2016-03-03 2017-03-02 一种信息交互的方法、设备及系统

Country Status (5)

Country Link
US (1) US10861222B2 (zh)
JP (1) JP6727669B2 (zh)
KR (1) KR102098669B1 (zh)
CN (1) CN105610868B (zh)
WO (1) WO2017148410A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105610868B (zh) * 2016-03-03 2019-08-06 腾讯科技(深圳)有限公司 一种信息交互的方法、设备及系统
CN105740029B (zh) * 2016-03-03 2019-07-05 腾讯科技(深圳)有限公司 一种内容呈现的方法、用户设备及系统
CN107801083A (zh) * 2016-09-06 2018-03-13 星播网(深圳)信息有限公司 一种基于三维虚拟技术的网络实时互动直播方法及装置
CN106331818A (zh) * 2016-09-27 2017-01-11 北京赢点科技有限公司 一种跨屏互动的方法及系统
CN106658041B (zh) * 2016-12-20 2020-05-19 天脉聚源(北京)传媒科技有限公司 一种信息交互的方法及装置
CN106790066B (zh) * 2016-12-20 2020-02-07 天脉聚源(北京)传媒科技有限公司 一种继续获取信息的方法及装置
CN108243171B (zh) * 2016-12-27 2020-12-22 北京新唐思创教育科技有限公司 在线直播互动系统及方法
CN107820132B (zh) * 2017-11-21 2019-12-06 广州华多网络科技有限公司 直播互动方法、装置及系统
WO2019127349A1 (zh) * 2017-12-29 2019-07-04 腾讯科技(深圳)有限公司 一种多媒体信息分享的方法、相关装置及系统
CN109104619B (zh) * 2018-09-28 2020-10-27 联想(北京)有限公司 用于直播的图像处理方法和装置
KR102082670B1 (ko) * 2018-10-12 2020-02-28 주식회사 아프리카티비 방송 시청을 위한 가상 현실 사용자 인터페이스 제공 방법 및 장치
KR102196994B1 (ko) * 2019-04-15 2020-12-30 주식회사 아프리카티비 서드 파티의 가상 현실 콘텐츠를 제공하기 위한 장치 및 방법
KR102452584B1 (ko) * 2019-04-15 2022-10-11 주식회사 아프리카티비 서드 파티의 가상 현실 콘텐츠를 제공하기 위한 장치 및 방법
KR20200115402A (ko) 2020-09-18 2020-10-07 정영선 보건마스크 별도 부착용 안전손가락잡이와 안전보관고리 그리고 이의 제조방법
CN112492336B (zh) * 2020-11-20 2023-03-31 完美世界(北京)软件科技发展有限公司 礼物发送方法、装置、电子设备及可读介质
CN113365111A (zh) * 2021-06-03 2021-09-07 上海哔哩哔哩科技有限公司 基于直播的互动方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2383696A1 (en) * 2010-04-30 2011-11-02 LiberoVision AG Method for estimating a pose of an articulated object model
CN104025586A (zh) * 2011-10-05 2014-09-03 比特安尼梅特有限公司 分辨率增强3d视频渲染系统和方法
CN104469440A (zh) * 2014-04-16 2015-03-25 成都理想境界科技有限公司 视频播放方法、视频播放器及对应的播放设备
CN104618797A (zh) * 2015-02-06 2015-05-13 腾讯科技(北京)有限公司 信息处理方法、装置及客户端
CN105610868A (zh) * 2016-03-03 2016-05-25 腾讯科技(深圳)有限公司 一种信息交互的方法、设备及系统

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002007752A (ja) * 2000-03-10 2002-01-11 Heart Gift:Kk オンラインギフト方法
US7222150B1 (en) * 2000-08-15 2007-05-22 Ikadega, Inc. Network server card and method for handling requests received via a network interface
JP2003125361A (ja) * 2001-10-12 2003-04-25 Sony Corp 情報処理装置、情報処理方法、情報処理プログラム、及び情報処理システム
KR20050063473A (ko) * 2003-12-22 2005-06-28 김정훈 가상현실 기술을 사용한 개인 인터넷 방송 방법
JP2006025281A (ja) * 2004-07-09 2006-01-26 Hitachi Ltd 情報源選択システム、および方法
KR100701010B1 (ko) * 2005-12-09 2007-03-29 한국전자통신연구원 가상 스튜디오를 이용한 인터넷 연동형 데이터 방송 시스템및 그 방법
KR101276199B1 (ko) * 2009-08-10 2013-06-18 한국전자통신연구원 시청자 참여의 iptv 원격 방송 시스템 및 그 서비스 제공 방법
KR20110137439A (ko) * 2010-06-17 2011-12-23 주식회사 에스비에스콘텐츠허브 가상세계와 현실세계가 융합된 음악 방송 서비스 방법
JP2013220167A (ja) * 2012-04-16 2013-10-28 Konami Digital Entertainment Co Ltd ゲーム装置、ゲームシステム、ゲーム装置の制御方法、ゲームシステムの制御方法、及びプログラム
JP5462326B2 (ja) * 2012-07-17 2014-04-02 株式会社フォーラムエイト 仮想空間情報処理システム、当該システムのサーバ装置、当該サーバ装置で実行されるプログラム、及び仮想空間情報処理方法
KR20210094149A (ko) * 2012-12-21 2021-07-28 소니 인터랙티브 엔터테인먼트 아메리카 엘엘씨 레코딩된 게임플레이에 기반하여 클라우드-게이밍에 대한 제안된 미니-게임의 자동 발생
EP2750032B1 (en) * 2012-12-27 2020-04-29 Sony Computer Entertainment America LLC Methods and systems for generation and execution of miniapp of computer application served by cloud computing system
CN105338479B (zh) * 2014-06-09 2020-03-10 阿里巴巴集团控股有限公司 基于场所的信息处理方法及装置
WO2015192117A1 (en) * 2014-06-14 2015-12-17 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2383696A1 (en) * 2010-04-30 2011-11-02 LiberoVision AG Method for estimating a pose of an articulated object model
CN104025586A (zh) * 2011-10-05 2014-09-03 比特安尼梅特有限公司 分辨率增强3d视频渲染系统和方法
CN104469440A (zh) * 2014-04-16 2015-03-25 成都理想境界科技有限公司 视频播放方法、视频播放器及对应的播放设备
CN104618797A (zh) * 2015-02-06 2015-05-13 腾讯科技(北京)有限公司 信息处理方法、装置及客户端
CN105610868A (zh) * 2016-03-03 2016-05-25 腾讯科技(深圳)有限公司 一种信息交互的方法、设备及系统

Also Published As

Publication number Publication date
JP6727669B2 (ja) 2020-07-22
KR20180080273A (ko) 2018-07-11
CN105610868A (zh) 2016-05-25
KR102098669B1 (ko) 2020-04-08
US10861222B2 (en) 2020-12-08
CN105610868B (zh) 2019-08-06
US20200250881A1 (en) 2020-08-06
JP2019511756A (ja) 2019-04-25

Similar Documents

Publication Publication Date Title
WO2017148410A1 (zh) 一种信息交互的方法、设备及系统
US11077363B2 (en) Video game overlay
CN112104594B (zh) 沉浸式交互式远程参与现场娱乐
CN111467793B (zh) 利用客户端侧资产整合的云游戏流式传送
US9937423B2 (en) Voice overlay
US9751015B2 (en) Augmented reality videogame broadcast programming
TWI468734B (zh) 用於在共享穩定虛擬空間維持多個視面的方法、攜帶式裝置以及電腦程式
TWI786700B (zh) 利用第二螢幕裝置掃描3d物件以插入虛擬環境中
US20120272162A1 (en) Methods and systems for virtual experiences
US10403022B1 (en) Rendering of a virtual environment
US11748950B2 (en) Display method and virtual reality device
WO2018000609A1 (zh) 一种虚拟现实系统中分享3d影像的方法和电子设备
CN113873280B (zh) 连麦直播对战互动方法、系统、装置及计算机设备
JP6379107B2 (ja) 情報処理装置並びにその制御方法、及びプログラム
KR20210084248A (ko) Vr 컨텐츠 중계 플랫폼 제공 방법 및 그 장치
Mishra et al. Comparative Study of Cloud and Non-Cloud Gaming Platform: Apercu
CN114405012A (zh) 线下游戏的互动直播方法、装置、计算机设备及存储介质
CN115779441A (zh) 增益虚拟物品发送方法、装置、移动终端和存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018528068

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20187015519

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17759261

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17759261

Country of ref document: EP

Kind code of ref document: A1