CN115225923A - Gift special effect rendering method and device, electronic equipment and live broadcast server - Google Patents


Info

Publication number
CN115225923A
Authority
CN
China
Prior art keywords
gift
special effect
scene
information
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210653955.1A
Other languages
Chinese (zh)
Other versions
CN115225923B (en)
Inventor
庄宇轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd
Priority to CN202210653955.1A
Publication of CN115225923A
Application granted
Publication of CN115225923B
Status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a gift special effect rendering method and apparatus, an electronic device and a live broadcast server. The method includes: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, the segmented special effect data including first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and rendering and displaying the second special effect data in the general scene. In this method, the rendering of the gift special effect is combined with the scene of the live broadcast room, and the scene switch and the subsequent special effect rendering are driven by the specified condition.

Description

Gift special effect rendering method and device, electronic equipment and live broadcast server
Technical Field
The invention relates to the technical field of live broadcast, in particular to a gift special effect rendering method and device, electronic equipment and a live broadcast server.
Background
In a live broadcast scenario, after entering a live broadcast room the audience can watch the anchor's live content and, at the same time, send messages and gifts to the anchor to interact with the anchor. In the related art, after a gift is sent to the anchor, a segment of gift special effect is usually played in the live broadcast room. The gift special effect covers the live content being displayed and is disconnected from that content, so it easily interrupts the audience's viewing and impairs the audience's sense of immersion in the live broadcast.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for rendering a gift special effect, an electronic device, and a live broadcast server, so as to improve the display effect of a virtual gift special effect and improve the immersion of viewers in watching live broadcasts.
In a first aspect, an embodiment of the present invention provides a method for rendering a gift special effect, where the method is applied to a terminal device; the method comprises the following steps: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift; rendering and displaying first special effect data in a first virtual scene; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The step of obtaining segmented special effect data of the target gift includes: obtaining gift special effect information of a target gift; and generating special effect sectional information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, so that the live broadcast server obtains sectional special effect data of the target gift based on the special effect sectional information, and returning the sectional special effect data to the terminal equipment.
The gift special effect information at least comprises gift special effect duration and special effect image data; the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect prerendering template includes: determining the time period of the gift special effect duration based on the special effect pre-rendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the lens information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template.
The lens information includes: the lens switching times, the single lens shortest duration and the single lens longest duration; the step of determining the gift effect deduction rhythm data of the target gift based on the shot information includes: and based on preset weighting parameters, carrying out weighted addition on the lens switching times, the single-lens shortest duration and the single-lens longest duration to obtain the gift effect deduction rhythm score of the target gift.
The step of generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template comprises the following steps: determining the number of segments corresponding to the gift effect deduction rhythm data based on the special effect pre-rendering template; and determining a segmentation timestamp based on the number of segments, the shortest duration of a single lens and the longest duration of the single lens in the lens information, and generating special effect segmentation information of the target gift.
After the step of generating the special effect segmentation information of the target gift based on the gift special effect information and the preset special effect pre-rendering template, the method further includes: determining a specified condition based on the special effect pre-rendering template and the special effect style of the target gift; wherein the specified condition includes a trigger condition of a specified segment of the segmented special effect data; the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room performs a preset gesture; the virtual live broadcast room receives specified information or completes a specified task; and the user side that sent the gift sending instruction performs a specified behavior.
Before the step of obtaining the segmented special effect data of the target gift, the method further includes: obtaining gift special effect information of the target gift; determining scene configuration information of the target gift based on the gift special effect information, where the scene configuration information includes a special effect style, a special effect playing duration and the scene style of the first virtual scene; and rendering a preset scene basic model and illumination information based on the scene configuration information to obtain pre-rendering data of the general scene of the target gift, the pre-rendering data being used for displaying the general scene.
The step of determining scene configuration information of the target gift based on the gift special effect information includes: the method comprises the steps of obtaining a gift identification of a target gift from gift special effect information, and determining a special effect style of the target gift based on the gift identification; obtaining gift special effect time length of the target gift from the gift special effect information, and determining special effect playing time length of the target gift based on the gift special effect time length and a preset special effect pre-rendering template; the method comprises the steps of obtaining live broadcast room information of a virtual live broadcast room, extracting scene identification of a first virtual scene from the live broadcast room information, and determining scene style of the first virtual scene based on the scene identification.
After the step of rendering the preset scene basic model and the illumination information based on the scene configuration information to obtain the pre-rendering data of the general scene of the target gift, the method further includes: storing pre-rendering data of a universal scene of a plurality of alternative gifts in a virtual live broadcast room in terminal equipment; wherein, the alternative gift is a gift supported by the virtual live broadcast room; the alternative gifts comprise the target gifts; and receiving the deliverable gifts supported by the specified user side, and deleting the pre-rendering data of the common scene of the gifts other than the deliverable gifts in the terminal equipment.
The step of rendering and displaying the first special effect data in the first virtual scene includes: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to execute preset interactive operation to obtain an interactive result.
The step of controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift in response to the specified condition being triggered, and rendering and displaying the second special effect data in the general scene includes: splicing the first virtual scene and the general scene in response to the triggering of a specified condition; controlling the virtual camera to move so that a general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
The rendering and displaying second special effect data in the general scene includes: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the specified operation of the anchor object in the virtual live broadcast room for the interaction result, and displaying the operation result of the specified operation in the general scene.
In a second aspect, an embodiment of the present invention provides a method for rendering a gift special effect, where the method is applied to a live broadcast server; the method comprises the following steps: receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect sectional information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect sectional information to a live broadcast server; obtaining segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to a specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to a general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
Before the step of receiving a gift sending instruction for the target gift, obtaining gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further includes: acquiring live broadcast room information of a virtual live broadcast room in a broadcasting state; the live broadcast room information at least comprises a scene identifier of the first virtual scene and a gift identifier of a gift supported by the virtual live broadcast room; and providing live room information to the terminal equipment.
Before the step of receiving a gift sending instruction for the target gift, obtaining gift special effect information of the target gift, and sending the gift special effect information to the terminal device, the method further includes: and acquiring the operation of calling out the gift panel by the appointed user side, acquiring the deliverable gifts supported by the appointed user side from the gift panel, and providing the deliverable gifts to the terminal equipment.
The method further comprises the following steps: and after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered or not based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered.
In a third aspect, an embodiment of the present invention provides a device for rendering a gift special effect, where the device is disposed in a terminal device; the device comprises: the first acquisition module is used for constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift; the first display module is used for rendering and displaying first special effect data in a first virtual scene; and the second display module is used for responding to the triggering of the specified condition, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In a fourth aspect, an embodiment of the present invention provides another device for rendering a gift special effect, where the device is disposed in a live broadcast server; the device comprises: the information return module is used for receiving a gift sending instruction aiming at the target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to the terminal equipment, generating special effect sectional information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect sectional information to the live broadcast server; the data acquisition module is used for acquiring segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift; the data display module is used for returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In a fifth aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for rendering a gift special effect described above.
In a sixth aspect, an embodiment of the present invention provides a live broadcast server, which includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the above-mentioned method for rendering a gift special effect.
In a seventh aspect, an embodiment of the present invention provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the method for rendering the gift special effect.
The embodiment of the invention has the following beneficial effects:
according to the gift special effect rendering method and device, the electronic equipment and the live broadcast server, a first virtual scene of a virtual live broadcast room is constructed, and sectional special effect data of a target gift are obtained; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift; rendering and displaying first special effect data in a first virtual scene; and in response to a specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to a general scene of the target gift, and rendering and displaying the second special effect data in the general scene. In the method, the primary gift special effect is rendered in the first virtual scene through the segmented special effect data of the target gift, after the specified condition is triggered, the virtual live broadcast room is controlled to be converted from the first virtual scene to the general scene of the target gift, and the secondary gift effect rendering and presentation are carried out in the general scene.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for rendering a gift special effect according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for rendering a gift special effect according to an embodiment of the present invention;
fig. 3 is a schematic diagram of multi-end interaction of pre-rendering of a method for rendering a gift special effect according to an embodiment of the present invention;
fig. 4 is a multi-end interaction diagram of segmentation and effect assembly of a gift special effect rendering method according to an embodiment of the present invention;
fig. 5 is a schematic multi-end interaction diagram of completing rendering of a gift special effect according to a method for rendering the gift special effect provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a gift special effect rendering apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of another gift special effect rendering apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device or a live server according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In a live broadcast scenario, the anchor broadcasts in a virtual live broadcast room, and audience users watch the anchor's live content on their client sides. To increase interactivity between the anchor and the users, a user can select a specific gift to present to the anchor, and after the user sends the gift, a segment of animation is usually played in the live broadcast room as the gift effect display. In the related art, the gift display blocks the anchor's live content and has little correlation with that content, so it easily interrupts the audience's viewing and weakens the sense of immersion; some viewers even choose to hide gift effect displays altogether. As a result, users are less willing to send gifts, which in turn affects users' willingness to pay and the overall revenue of the platform.
Based on this, the gift special effect rendering method and apparatus, the electronic device and the live broadcast server provided by the embodiment of the invention can be applied to a live broadcast room scene, and especially can be applied to a virtual live broadcast room scene.
To facilitate understanding of the present embodiment, a method for rendering a gift special effect disclosed in an embodiment of the present invention is first described in detail. As shown in Fig. 1, the method is applied to a terminal device. An anchor UE (Unreal Engine) instance may run on the terminal device, and other rendering engines may also be used. The anchor UE instance is a rendering engine running on the anchor client or a cloud server; a number of special effect rendering templates and rendering logics are prestored in the engine, and the rendering logics can be invoked to render the corresponding templates, so as to render and present virtual scenes and effects. The method for rendering the gift special effect comprises the following steps:
Step 102: construct a first virtual scene of a virtual live broadcast room, and acquire segmented special effect data of a target gift; the segmented special effect data include first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift.
the virtual live broadcast room can be understood that the anchor is in front of a green screen, the background behind the anchor is actually a virtual scene rendered by an engine of the UE, the virtual scene can be a simulation environment of a real world, the virtual scene can further include various property elements such as a weather environment, an article, a building and the like, and the elements in the scene can be triggered and interacted theoretically through different logic events. It should be noted that, before the gift special effect is displayed, the first virtual scene is rendered and displayed in the virtual live broadcast room.
It will be appreciated that in a virtual live broadcast room, the viewer user may select a virtual item to be sent to the anchor as a virtual gift by calling out the gift panel. A virtual gift may correspond to a gift identifier (e.g., a gift ID), so that the virtual gift with the same tag information may be found from the gift material library through the gift identifier, and it can be understood that the virtual gift selected by the viewer user to be sent to the main broadcast is the target gift.
Here, the display scene of the target gift includes the first virtual scene and the general scene of the target gift. The first virtual scene is the virtual scene that audience users see after entering the virtual live broadcast room, and the anchor of the live broadcast room is usually positioned in it. The general scene of the target gift is the scene into which the gift special effect continues after the first virtual scene; it can be understood as a scene level sequence with a number of configurable parameters, and different scene configuration information yields different types of general scenes from the same sequence. The general scene can also be understood in this way: for a target gift, whatever the first virtual scene is, the specified condition can be triggered to switch from the first virtual scene to the general scene. The general scenes corresponding to different gifts may be the same or different.
The segmented special effect data of the target gift includes: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a gift. In an initial state, a first virtual scene is rendered and displayed in a virtual live broadcast room, and first special effect data is rendered in the first virtual scene, wherein the first special effect data can be virtual objects in the first virtual scene, such as virtual characters, virtual animals, virtual articles and the like; also included in the first special effects data are movements, gestures, language, interaction data with the live, etc. of the virtual object. The first special effect data may also be a local scene special effect added to the first virtual scene, such as a cloud special effect, a snowing special effect, and the like, and at this time, the first special effect data further includes data of a display area, a display duration, and the like of the local special effect.
In this embodiment, the first special effect data is used to display a partial gift special effect in the current scene of the virtual live broadcast room, so as to achieve the purpose of combining the display of the gift special effect with the scene in the live broadcast room. On the basis, the scene of the virtual live broadcast room is converted from the first virtual scene to the general scene of the target gift by triggering the specified condition.
The second special effect data is used for displaying the gift special effect in a general scene, the general scene can be generated by rendering in advance, when scene conversion is triggered, the general scene is displayed, and then the subsequent gift special effect is displayed based on the second special effect data; the second special effect data may include data such as movement and gesture of the virtual object, and may further include interaction data of the virtual object and the anchor, and the like.
In this step, the initial special effect data of the gift is segmented to obtain the segmented special effect data of the target gift, so as to prepare for the subsequent segmented rendering work of the special effect of the target gift.
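As an illustration of how the segmented special effect data described above might be organized on the terminal device, the following is a minimal Python sketch; all class and field names (EffectSegment, SegmentedEffectData, etc.) are assumptions made for illustration, not names used by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EffectSegment:
    """One segment of the gift special effect (hypothetical structure)."""
    start_ts: float                 # segment start timestamp within the effect, seconds
    end_ts: float                   # segment end timestamp, seconds
    target_object_id: str           # virtual object (character, animal, item) to render
    animation_clips: List[str] = field(default_factory=list)   # movements, gestures, interactions
    local_effects: List[str] = field(default_factory=list)     # e.g. cloud or snow overlays
    trigger_condition_id: Optional[str] = None  # condition that must fire before this segment plays

@dataclass
class SegmentedEffectData:
    """Segmented special effect data of a target gift (hypothetical structure)."""
    gift_id: str
    first_effect: EffectSegment     # rendered in the first virtual scene
    second_effect: EffectSegment    # rendered in the general scene of the gift
```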
Step 104: render and display the first special effect data in the first virtual scene.
rendering and displaying a gift effect in a first virtual scene according to first special effect data in the segmented special effect data of the target gift, specifically, rendering and displaying a target object corresponding to the target gift in the first virtual scene, wherein the target object is an object which can control movement in the virtual scene, and can be a virtual character, a virtual animal, a virtual plant and the like; controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to execute preset interactive operation based on the first special effect data to obtain an interactive result, for example: the virtual character and the anchor embrace, dance, photograph and the like.
In the step, according to the sectional special effect data of the obtained target gift, rendering and displaying the first special effect data in the first virtual scene to show the prepositive effect in the special effect of the gift. The first special effect data can comprise interaction with the anchor, and can provide more immersive gift interaction effect for the user, so that the situation that the live broadcast effect is influenced by the fact that the gift directly covers the live broadcast content is avoided, and meanwhile, the display effect of the special effect of the virtual gift is improved.
Step 106: in response to the specified condition being triggered, control the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and render and display the second special effect data in the general scene.
Here, the above-mentioned specified condition may be triggered by a body action of the anchor, for example: stretching the hands and lowering the head; or may be a viewer user's behavior or business logic trigger, such as sending a bullet screen containing keywords, etc. When the specified condition is triggered, the virtual live broadcast room is controlled to be converted from the first virtual scene to the general scene of the target gift. The mode of splicing the first virtual scene and the general scene by the terminal equipment can be in various forms, in one form, the movement of the virtual camera can be controlled, so that the general scene is displayed in the virtual live broadcast room, and a preset scene joint special effect can be rendered and displayed in the moving process of the virtual camera; in addition, the connection can be carried out through the anchor gesture action and other business logics, for example, after the anchor executes the designated action, the universal scene is directly switched into a live broadcast room, and the scene switching effect of instant switching is created.
And rendering and displaying the second special effect data in the general scene after the general scene is displayed. In one mode, an interaction result corresponding to first special effect data is rendered and displayed in a general scene, and an operation result of a specified operation is displayed in the general scene in response to a specified operation of a main broadcasting object in a virtual live broadcasting room for the interaction result, for example: when the designated condition is that the anchor makes a gesture action of extending hands upwards, and the general scene is a virtual wall, the designated condition is triggered when the anchor extends hands upwards to hang the photo in the first virtual scene, the scene is switched to the pink virtual wall, and a picture that two hands hang the photo on the wall appears.
In this step, in response to a specific condition being triggered, the virtual live broadcast room is controlled to be switched from the first virtual scene to a general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. In the mode, after the appointed condition is triggered, the interaction result corresponding to the first special effect data can be rendered and displayed in the general scene, the association with the anchor terminal correlation behavior is supported, the gift interaction effect which is more immersive for the user can be provided, the display effect of the special effect of the virtual gift is improved, and the immersive experience of watching live broadcast by the audience is improved.
The gift special effect rendering method described above is applied to a terminal device and includes: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, the segmented special effect data including first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and, in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and rendering and displaying the second special effect data in the general scene. In this manner, the leading part of the gift special effect is rendered and shown in the first virtual scene based on the acquired segmented special effect data; after the specified condition is triggered, the virtual live broadcast room is switched from the first virtual scene to the general scene of the target gift, and the second stage of the effect is rendered and presented there. The gift special effect rendering is thus fused with the live content of the live broadcast room, the gift effect no longer blocks the display of the live content, and the audience's sense of immersion in watching the live broadcast is improved.
In this embodiment, before the segmented special effect data of the target gift is obtained, some pre-rendering operations are required to obtain rendering data of a general scene, so as to avoid excessive data rendering pressure in the live broadcast process.
The pre-rendering proceeds as follows: obtain gift special effect information of the target gift; determine scene configuration information of the target gift based on the gift special effect information, where the scene configuration information includes a special effect style, a special effect playing duration and the scene style of the first virtual scene; then render a preset scene basic model and illumination information based on the scene configuration information to obtain pre-rendering data of the general scene of the target gift, the pre-rendering data being used for displaying the general scene.
It should be noted that after the virtual live broadcast room is started, the pre-rendering data of the general scene of the gift can be obtained from each gift supported by the virtual live broadcast room through the above manner. Taking the target gift as an example, gift special effect information of the target gift is first acquired.
In one embodiment, the live broadcast server sends the gift special effect information to the terminal device, and the terminal device receives it. The gift special effect information of the target gift may include the gift identifier, the gift special effect duration, special effect image data and the like. The gift identifier is a unique identifier used to look up the gift, and may be a gift ID, a gift name, etc. As for the gift special effect duration, the whole special effect of a gift is composed of several sub-effects of the same or different lengths, and the gift special effect duration is the duration of a specified special effect interval. The special effect image data are the image data corresponding to the specified duration of the gift special effect.
Then, based on the gift special effect information, determining scene configuration information of the target gift; the scene configuration information includes: the special effect style, the special effect playing time length and the scene style of the first virtual scene. It is noted that, before the audience user triggers a specific gift, the terminal device may acquire all gift identifiers supported by the live broadcast room in advance, and based on a pre-stored general pre-rendering template and rendering logic, quickly determine the scene configuration information of the target gift. Specifically, the determining manner of the scene configuration information may be implemented by the following manners:
1) Classifying the style of the gift special effect. The gift identifier of the target gift is obtained from the gift special effect information, and the special effect style of the target gift is determined based on the gift identifier. In advance, the terminal device classifies the styles of all gift special effects into intervals according to their gift identifiers, for example warm, dynamic and outdoor intervals, so that the special effect style of a gift can be obtained quickly from its gift identifier;
2) Determining the special effect playing duration. The gift special effect duration of the target gift is obtained from the gift special effect information, and the special effect playing duration of the target gift is determined based on the gift special effect duration and a preset special effect pre-rendering template. The gift special effect duration is the playing duration of the original gift special effect corresponding to the target gift; this duration is matched against the different duration intervals defined by the pre-designed special effect pre-rendering template, and the matched interval determines the final special effect playing duration.
3) Classifying the style of the first virtual scene. Live broadcast room information of the virtual live broadcast room is obtained, the scene identifier of the first virtual scene is extracted from the live broadcast room information, and the scene style of the first virtual scene is determined based on the scene identifier. Based on the pre-rendering decision logic for the first virtual scene, scene styles used for gift special effect display are classified in advance according to scene identifiers, for example realistic, fantasy and dynamic, so that the first virtual scene style can be obtained quickly from the scene identifier.
Therefore, after the gift special effect information of the target gift is received, the scene configuration information of the target gift can be determined quickly from the corresponding data: the gift special effect style is obtained from the identifier of the target gift; the playing duration of the original gift special effect corresponding to the gift is obtained from the gift identifier, matched against the duration intervals defined by the pre-designed general scene pre-rendering template, and used to determine the final special effect playing duration; and the live broadcast room information sent by the live broadcast server is received, the scene identifier of the first virtual scene is extracted from it, and the first virtual scene style for the target gift is obtained from that scene identifier. After classification and combination along these three dimensions, the general scene configuration information of the target gift can be determined, for example: a warm special effect style, a playing duration of 30 seconds, and a realistic scene style.
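A minimal sketch of the three-dimensional classification described above (style by gift identifier, playing duration by template interval, scene style by scene identifier); the lookup tables, thresholds and function names here are illustrative assumptions, not values defined by the patent.

```python
# Hypothetical lookup tables; a real system would load these from the
# special effect pre-rendering template and the live room configuration.
STYLE_BY_GIFT_RANGE = [((0, 999), "warm"), ((1000, 1999), "dynamic"), ((2000, 2999), "outdoor")]
DURATION_BUCKETS = [(5, 10), (10, 30), (30, 60)]        # (interval lower bound in s, resulting playing duration in s)
SCENE_STYLE_BY_ID = {"scene_001": "realistic", "scene_002": "fantasy", "scene_003": "dynamic"}

def classify_gift_style(gift_id_num: int) -> str:
    for (low, high), style in STYLE_BY_GIFT_RANGE:
        if low <= gift_id_num <= high:
            return style
    return "default"

def match_play_duration(effect_duration_s: float) -> float:
    # Pick the highest template interval whose lower bound the raw duration reaches.
    for lower_bound, play_duration in reversed(DURATION_BUCKETS):
        if effect_duration_s >= lower_bound:
            return play_duration
    return DURATION_BUCKETS[0][1]

def build_scene_config(gift_id_num: int, effect_duration_s: float, scene_id: str) -> dict:
    return {
        "effect_style": classify_gift_style(gift_id_num),
        "play_duration_s": match_play_duration(effect_duration_s),
        "first_scene_style": SCENE_STYLE_BY_ID.get(scene_id, "realistic"),
    }

# Example: a gift with ID 120, a 12 s raw effect, shown in scene_001
# yields {'effect_style': 'warm', 'play_duration_s': 30, 'first_scene_style': 'realistic'}.
```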
Finally, the preset scene basic model and the illumination information are rendered according to the acquired scene configuration information of the target gift to obtain pre-rendering data of the general scene of the target gift, the pre-rendering data being used for displaying the general scene. A scene basic model is understood here as a model without texture or color that provides the shape, structure and similar information of the scene. The illumination information is used for rendering the brightness, contrast, light and shade, shadow effects and the like of the colors in the scene.
In addition, pre-rendering data of a universal scene of a plurality of alternative gifts in the virtual live broadcast room are stored in the terminal equipment; wherein, the alternative gift is a gift supported by the virtual live broadcast room; the alternative gifts include target gifts; upon receiving a deliverable gift supported by a specified user side, pre-rendering data of a general scene of the gift other than the deliverable gift is deleted in the terminal device.
That is, the terminal device may store pre-rendering data of a general scene of multiple candidate gifts including a target gift in a virtual live broadcast, and in order to reduce performance pressure of the main broadcast end and prepare for subsequent gift special effect display, after receiving a deliverable gift supported by a specified user end, the terminal device may delete the pre-rendering data of the general scene of the gift other than the deliverable gift. In one embodiment, when the audience user opens the gift panel to screen the gifts, the terminal device obtains the related gift information and screens the pre-rendering data of the multiple gift general scenes, and only the pre-rendering data of the general scenes of the gifts in the current panel are reserved.
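The storage and pruning of pre-rendered general-scene data described above could look roughly like the following sketch; the cache structure and names are assumptions for illustration only.

```python
class GeneralScenePrerenderCache:
    """Holds pre-rendered general-scene data for the gifts a live room supports (hypothetical)."""

    def __init__(self):
        self._prerender_by_gift = {}   # gift_id -> pre-rendering data (treated as an opaque blob here)

    def store(self, gift_id: str, prerender_data: bytes) -> None:
        # Called once per candidate gift after the virtual live room is started.
        self._prerender_by_gift[gift_id] = prerender_data

    def prune_to_deliverable(self, deliverable_gift_ids: set) -> None:
        # After the user side opens the gift panel, keep only the gifts that
        # can actually be sent, reducing memory pressure on the anchor-side device.
        for gift_id in list(self._prerender_by_gift):
            if gift_id not in deliverable_gift_ids:
                del self._prerender_by_gift[gift_id]

    def get(self, gift_id: str):
        return self._prerender_by_gift.get(gift_id)
```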
In the above manner, the scene configuration information of the target gift is determined according to the special effect information of the target gift, and the pre-rendering data for displaying the general scene of the target gift is obtained based on the scene configuration information of the target gift, so as to prepare for the rendering of the gift display scene.
The embodiments described below provide specific implementations of obtaining segmented special effects data for a target gift.
Obtaining gift special effect information of a target gift; based on the gift special effect information and a preset special effect pre-rendering template, special effect sectional information of the target gift is generated, so that the live broadcast server obtains sectional special effect data of the target gift based on the special effect sectional information, and the sectional special effect data is returned to the terminal equipment.
The special effect pre-rendering template comprises a processing mode for gift special effect information and generates corresponding special effect sectional information; the special effect segmentation information indicates how to segment the initial special effect data of the target gift, namely, the special effect segmentation information contains data information related to segmentation, the live broadcast server performs segmentation processing on the initial special effect data of the target gift based on the special effect segmentation information, a plurality of segments of segmented special effect data of the target gift can be obtained, and the segmented special effect data is returned to the terminal device for storage.
In one mode, the gift effect information includes at least a gift effect duration and effect image data. Determining the time period of the gift special effect duration based on the special effect pre-rendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the shot information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template.
Specifically, the gift special effect duration is first classified according to the special effect pre-rendering template in the terminal device, and the time period to which it belongs is determined; for example, three time periods may be defined: less than 5s, 5s-10s, and more than 10s.
Next, shot information is extracted from the special effect image data corresponding to the time period of the gift special effect duration; the shot information may include the kinds of shots contained in the special effect image data, the duration of each kind of shot, and the like. Gift effect deduction rhythm data of the target gift are then determined from the shot information. The rhythm data evaluate the playing rhythm of the gift special effect, and the rhythm score affects the number of segments of the gift special effect: generally, the higher the score, the more segments. The score is related to the number of shot switches and the shot durations. In a specific implementation, the shot information includes the number of shot switches, the shortest single-shot duration and the longest single-shot duration, and these are combined by weighted addition with preset weighting parameters to obtain the gift effect deduction rhythm score of the target gift. This embodiment may use the formula M = a·X + b·Y + c·Z, where M is the gift effect deduction rhythm score, a, b and c are the weighting parameters, X is the number of shot switches, Y is the shortest single-shot duration, and Z is the longest single-shot duration.
Finally, the number of segments corresponding to the gift effect deduction rhythm data is determined based on the special effect pre-rendering template; a segmentation timestamp is then determined based on the number of segments, the shortest single-shot duration and the longest single-shot duration in the shot information, and the special effect segmentation information of the target gift is generated. For example, when the calculated rhythm score falls within score interval A, the segmentation rule defined in the special effect pre-rendering template determines that the gift special effect needs to be divided into 2 segments, and a timestamp is placed between segments based on the shortest single-shot duration Y and the longest single-shot duration Z, yielding two segments of gift special effect information.
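The weighted rhythm score M = a·X + b·Y + c·Z and the score-to-segment mapping described above could be sketched as follows; the weights, score intervals and timestamp rule are illustrative assumptions rather than values fixed by the patent.

```python
def rhythm_score(switch_count: int, min_shot_s: float, max_shot_s: float,
                 a: float = 1.0, b: float = 0.5, c: float = 0.3) -> float:
    """Gift effect deduction rhythm score M = a*X + b*Y + c*Z."""
    return a * switch_count + b * min_shot_s + c * max_shot_s

def segment_count_for(score: float) -> int:
    # Hypothetical score intervals taken from the special effect pre-rendering
    # template: a higher rhythm score leads to more segments.
    if score < 10:
        return 1
    if score < 20:
        return 2
    return 3

def segmentation_timestamps(total_duration_s: float, min_shot_s: float,
                            max_shot_s: float, segments: int) -> list:
    """Place segment boundaries, keeping each segment between the shortest
    and longest single-shot durations where possible (illustrative rule)."""
    if segments <= 1:
        return []
    step = total_duration_s / segments
    step = max(min_shot_s, min(step, max_shot_s))
    return [round(step * i, 2) for i in range(1, segments)]

# Example: 8 shot switches, shortest shot 2 s, longest shot 10 s
# gives M = 12.0, i.e. 2 segments for a 30 s effect, split at ~10 s.
```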
In the above manner, according to the gift special effect information and the preset special effect pre-rendering template in the terminal device, the initial special effect data of the target gift is segmented by means of the segmentation rule defined by the special effect pre-rendering template, and finally the segmented special effect data of the target gift is obtained.
In addition, the specified condition for triggering the scene change needs to be determined in advance. Specifically, the specified condition is determined based on the special effect pre-rendering template and the special effect style of the target gift; the specified condition includes a trigger condition of a specified segment of the segmented special effect data. The trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room performs a preset gesture; the virtual live broadcast room receives specified information or completes a specified task; and the user side that sent the gift sending instruction performs a specified behavior.
Specifically, the special effect pre-rendering template may include specified conditions corresponding to a plurality of special effect styles and related parameters for generating the specified conditions; based on the special effects style of the target gift, corresponding specified conditions may be determined from the special effects prerendering target. The specified condition includes a trigger condition of the special effect data of the specified section in the segmented special effect data, for example, a specified condition of the general scene for switching from the aforementioned first virtual scene to the target gift. In addition, there may be multiple segments of sub-effects in the first effect data or the second effect data, and each segment of sub-effect also needs to set a specific condition for triggering, for example, an interactive operation performed by a host.
In a specific implementation, the trigger condition controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift so that the subsequent part of the special effect can be played out, and it may specifically include: 1) the anchor object in the virtual live broadcast room performs a preset gesture, such as the anchor clapping or stretching out a hand (triggering limb key points); 2) the virtual live broadcast room receives specified information or completes a specified task as a business logic trigger, for example audience users in the anchor's room send a danmaku containing a specific keyword or complete a task of giving the anchor 100 likes; 3) the user who sent the gift sending instruction performs a specified behavior, for example tilting the mobile phone to a certain angle or making a certain expression, which triggers the special effect of the subsequent segment. One or more trigger conditions may be used.
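A minimal sketch of how the terminal or server might evaluate the trigger conditions listed above (anchor gesture, room-level business events, sender behavior); the event fields and condition identifiers are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SpecifiedCondition:
    condition_id: str
    kind: str          # "anchor_gesture" | "room_event" | "sender_behavior"
    expected: str      # e.g. "raise_hands", "keyword:happy", "tilt_phone"

def condition_triggered(cond: SpecifiedCondition, event: dict) -> bool:
    """Return True if an incoming event satisfies the specified condition."""
    if cond.kind == "anchor_gesture":
        # e.g. limb key-point recognition reports that the anchor raised both hands
        return event.get("gesture") == cond.expected
    if cond.kind == "room_event":
        # e.g. a danmaku containing a keyword, or a "100 likes" task completed
        return event.get("room_event") == cond.expected
    if cond.kind == "sender_behavior":
        # e.g. the gift sender tilted the phone or made a specified expression
        return event.get("sender_action") == cond.expected
    return False

# Example: switch to the general scene once the anchor raises both hands.
cond = SpecifiedCondition("c1", "anchor_gesture", "raise_hands")
print(condition_triggered(cond, {"gesture": "raise_hands"}))   # True
```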
According to the method for obtaining the segmented special effect data of the target gift, the initial special effect data of the gift is segmented to obtain the segmented special effect data of the target gift, and the segmented special effect data is returned to the terminal device to prepare for subsequent segmented rendering work of the special effect of the target gift.
The embodiments described below provide specific implementations for rendering and displaying first special effects data in a first virtual scene.
Rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object and a main broadcasting object in the virtual live broadcasting room to execute preset interactive operation to obtain an interactive result.
Specifically, according to the first special effect data in the segmented special effect data, the terminal device generates and renders a gift special effect of the first virtual scene, and the audience user can see a corresponding gift effect in the first virtual scene. And controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to execute preset interaction operation to obtain a corresponding interaction result, such as hugging, dancing and the like between the target object and the anchor object.
In this way, the first special effect data are rendered and displayed in the first virtual scene according to the acquired segmented special effect data of the target gift, presenting the leading part of the gift special effect. The first special effect data may include interaction with the anchor, which provides the user with a more immersive gift interaction effect, prevents the gift from directly covering the live content and disturbing the broadcast, and improves the display effect of the virtual gift special effect.
The following embodiments provide specific implementations for rendering and displaying the second special effect data in the general scene.
Splicing the first virtual scene and the general scene in response to the triggering of a specified condition; controlling the virtual camera to move so that a general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
Specifically, after receiving a message that the specified condition is triggered, the terminal device acquires the pre-rendered general scene of the target gift and splices it with the first virtual scene currently in live broadcasting, where the first virtual scene is shot by the virtual camera in its initial state. The movement of the virtual camera and the display of the special effects can be controlled by the time point information in the second special effect data.
Further, after the general scene is displayed, the second special effect data is rendered and displayed in the general scene. The manner of rendering and displaying the second special effect data in the general scene may specifically include: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the specified operation of the anchor object in the virtual live broadcast room for the interaction result, and displaying the operation result of the specified operation in the general scene.
The interaction result may be a virtual item, for example, a photograph; the anchor object performs specified operations on the interaction result, such as moving it, editing it, or hanging it on a wall, and the operation result of the specified operation is displayed in the general scene at the same time.
In the general scene, the interaction result corresponding to the first special effect data is rendered and displayed, and the operation result of the specified operation on the interaction result is displayed in the general scene based on the second special effect data, thereby completing the rendering and presentation of a complete special effect in the virtual scene. For example: in the first virtual scene, the target object approaches the anchor object and takes a picture, and at this moment the interaction result is a group photo; in the general scene, the specified operation is that the anchor makes a gesture of stretching the hands upwards, and the general scene is a virtual wall, so when the anchor stretches the hands upwards to hang the photo in the general scene, a picture of two hands hanging the photo on the wall appears.
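The transition flow described above, namely splicing the general scene onto the first scene, moving the virtual camera, playing the preset linking effect and then rendering the second special effect data, can be sketched roughly as follows; the renderer and camera interfaces and the keys of second_effect_data are assumptions made for illustration, not part of the patent.

def switch_to_generic_scene(renderer, camera, second_effect_data: dict) -> None:
    """Illustrative transition flow once the specified condition fires."""
    renderer.attach_scene(second_effect_data["generic_scene_id"], side="left")   # splice next to the first scene
    renderer.play_effect(second_effect_data.get("linking_effect", "sweep"))      # preset scene linking effect
    camera.pan_to(second_effect_data["camera_target"],                           # timing taken from the effect data
                  duration=second_effect_data.get("pan_duration", 1.0))
    renderer.render_effect(second_effect_data["effect"])                         # second special effect data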
In the above manner, in response to the specified condition being triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second special effect data is rendered and displayed in the general scene. In this manner, after the specified condition is triggered, the interaction result corresponding to the first special effect data can be rendered and displayed in the general scene, and association with the anchor side is supported, which can provide the user with a more immersive gift interaction effect, improve the display effect of the virtual gift's special effect, and improve the audience's sense of immersion when watching the live broadcast.
The following description continues with the live broadcast server as the execution subject. Fig. 2 is a flowchart of a method for rendering a gift special effect; the method is applied to a live broadcast server and comprises the following steps:
step S202, receiving a gift sending instruction aiming at a target gift, obtaining gift special effect information of the target gift, sending the gift special effect information to a terminal device, generating special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and returning the special effect segmentation information to a live broadcast server;
specifically, after an audience user enters a live broadcast room, the audience user can call a gift sending panel to screen a gift, when the user clicks a certain gift in the live broadcast room and triggers a specific gift sending behavior, a live broadcast platform client sends a gift ID to a live broadcast platform server to obtain special effect information of the gift to be rendered and forwards the special effect information to a terminal device, after the terminal device obtains the special effect information of the gift, the terminal device pre-renders a template according to the special effect information of the gift and a special effect preset by the terminal device to generate special effect segmentation information of a target gift, the special effect segmentation information comprises data such as segmentation quantity and the like, and the special effect segmentation information is returned to a live broadcast server.
Step S204, obtaining segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift;
after the live broadcast server obtains the special effect segmentation information, the live broadcast server conducts segmentation processing on the initial special effect data of the target gift to obtain first special effect data rendered in a first virtual scene and second special effect data rendered in a general scene of the target gift.
Step S206, returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The live broadcast server returns the segmented gift special effect data (namely the segmented special effect data) to the terminal equipment, the terminal equipment generates and renders the gift special effect of the first virtual scene, the audience user side can normally see the first part of the special effect, when the specified condition is triggered, the live broadcast server controls the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, the gift special effect of the next segment is rendered and presented, and therefore rendering and presentation of a complete special effect in the virtual scene are completed.
The gift special effect rendering method comprises the steps of receiving a gift sending instruction aiming at a target gift, obtaining gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect segmentation information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect segmentation information to a live broadcast server; obtaining segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene. In the mode, the live broadcast server side and the terminal equipment perform interaction of special effect information for many times, the gift effect is rendered and presented, the gift special effect is rendered and live broadcast contents in a live broadcast room are fused with each other through the information interaction process, the display that the gift effect covers the live broadcast contents is avoided, and the immersion sense that audiences watch live broadcasts is improved.
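A minimal, purely illustrative sketch of steps S202 to S206 is given below; the server and terminal objects and their method names are assumptions, since the patent only describes the information exchanged between the two sides.

def handle_gift_sent(server, terminal, gift_id: str) -> None:
    """Sketch of the live broadcast server side of the flow."""
    # S202: look up the gift's special effect information and let the terminal
    # device derive segmentation info from it and its pre-rendering template
    effect_info = server.lookup_effect_info(gift_id)
    segment_info = terminal.build_segment_info(effect_info)

    # S204: split the initial special effect data according to the segmentation info
    first_data, second_data = server.split_effect_data(gift_id, segment_info)

    # S206: return both segments; the terminal renders the first one in the first
    # virtual scene and the second one in the generic scene once the condition fires
    terminal.render_segments(first_data, second_data)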
Before receiving a gift sending instruction for a target gift, the live broadcast server can obtain live broadcast room information of a virtual live broadcast room in a broadcasting state; the live broadcast room information at least comprises a scene identification of a first virtual scene and a gift identification of a gift supported by a virtual live broadcast room; and providing live room information to the terminal equipment.
Specifically, when the anchor end opens a live broadcast room on the live broadcast platform, the live broadcast server sends information such as anchor-related information, the gift IDs supported by the room to which it belongs and the first scene identification to the live broadcast platform server; the live broadcast platform server acquires the information of the live broadcast rooms currently in virtual broadcasting, obtains the scene identification information including the first virtual scene and the gift identification information of the gifts supported by the virtual live broadcast room, and sends the scene identification information and the gift identification information to the terminal device.
Before receiving the gift sending instruction for the target gift, the live broadcast server may also obtain the operation of the specified user side calling out the gift panel, obtain the deliverable gifts supported by the specified user side from the gift panel, and provide the deliverable gifts to the terminal device.
Specifically, the user enters a live broadcast room in virtual broadcasting and calls out the gift sending panel; the live broadcast platform client obtains the identification information of all gifts contained in the current panel and sends it to the live broadcast platform server, and the live broadcast platform server forwards the gift information to the terminal device after receiving it.
And after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered or not based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered.
Specifically, after the display of the special effect data in the first virtual scene is completed, real-time information data from the anchor side is received, and whether the specified condition is triggered is determined according to the information data; when the specified condition is triggered, the condition identifier of the corresponding specified condition is sent to the terminal device as a prompt. For example, the anchor's camera picture is acquired and detected in real time, and if it is detected that the anchor makes the specified gesture, the condition identifier is sent to the terminal device so that the subsequent special effect can be rendered.
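For example, a toy stand-in for the real-time detection just described might look as follows; the landmark layout, gesture name and threshold are assumptions made for illustration only.

def anchor_made_gesture(landmarks: dict, expected_gesture: str) -> bool:
    """Decide from pose key points whether the anchor made the specified gesture.
    landmarks maps joint names to normalized (x, y) screen coordinates, y = 0 at the top."""
    if expected_gesture == "hands_up":
        wrist_y = landmarks["wrist"][1]
        shoulder_y = landmarks["shoulder"][1]
        return wrist_y < shoulder_y - 0.1      # wrist clearly above the shoulder
    return False

def on_anchor_frame(terminal, landmarks: dict, condition: dict) -> None:
    # when the detected gesture matches the specified condition, forward its
    # identifier to the terminal device so it can render the next segment
    if anchor_made_gesture(landmarks, condition["gesture"]):
        terminal.notify_condition_triggered(condition["id"])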
The following provides a specific implementation manner of the gift special effect rendering method of the embodiment in a virtual live scene.
A virtual live broadcast scene includes an anchor end and a user end: the anchor completes virtual broadcasting through the virtual broadcasting function of the live broadcast platform at the anchor end, and audience users enter the virtual live broadcast room in broadcasting through the user end.
In this embodiment, the gift special effect rendering method is explained with the following case: a user sends a little bear gift in a virtual live broadcast room, then sees the little bear in the live broadcast room (a first virtual scene) poke its head out from a scene window, walk to the anchor's side, hug the anchor and take a group photo with the anchor, after which the anchor hangs the newly taken group photo on a general photo wall (a general scene).
In this case, the rendering and presentation of the gift special effect in the virtual live broadcast room are completed mainly through data intercommunication between the live broadcast platform client, the live broadcast server and the terminal device, and are specifically divided into three multi-end interaction processes: pre-rendering, segmentation and effect assembly, and rendering of the gift special effect.
Before audience users send out specific gifts in a live broadcast room, a live broadcast platform needs to prepare for a gift sending process of the users and pre-render a universal gift special effect receiving scene. To facilitate understanding, fig. 3 provides a multi-end interaction schematic diagram of pre-rendering in the present case of the method for rendering a gift special effect, where the process is implemented by a live broadcast platform client, a live broadcast server, and a terminal device interactively, and includes the following steps:
step S302, the live broadcast platform client sends the live broadcast room information to a live broadcast server;
the live broadcast platform client can collect the information of the live broadcast room currently in virtual broadcasting, which mainly comprises the scene identifier, the gift identifiers supported by the room to which it belongs, and anchor-related information; the live broadcast platform client then sends the live broadcast room information to the live broadcast server.
Step S304, the live broadcast server obtains the gift special effect information by analyzing the live broadcast room information and sends it to the terminal device, and the terminal device determines the scene configuration information of the target gift and the pre-rendering data of the general scene of the gift.
After receiving the data, the live broadcast server finds the corresponding gift special effect information by analyzing the gift identification and sends it to the terminal device; after receiving the data, the terminal device, in combination with the pre-rendering judgment logic of the general scene carrying the special effect, analyzes and classifies the type of the gift special effect to obtain configuration parameters, and renders the basic model and the illumination information of the general carrying scene according to the configuration parameters, without showing them to the user; they are kept for use in the subsequent process.
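An illustrative sketch of this pre-rendering step follows; the engine interface and the configuration keys are assumptions chosen for readability, not names from the patent.

def prerender_generic_scene(engine, effect_info: dict) -> dict:
    """Classify the gift effect, derive configuration parameters, and build the
    basic model and lighting of the general carrying scene without showing it yet."""
    config = {
        "style": effect_info["style"],              # special effect style of the gift
        "duration": effect_info["duration"],        # special effect playing duration
        "scene_style": effect_info["scene_style"],  # scene style of the first virtual scene
    }
    scene = engine.build_scene(base_model=config["scene_style"])   # basic model of the carrying scene
    engine.setup_lighting(scene, preset=config["style"])           # illumination information
    return {"config": config, "prerendered": engine.bake(scene), "visible": False}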
After the audience user clicks the gift to send, segmentation and effect assembly need to be performed on the gift special effect, for easy understanding, fig. 4 provides a multi-end interaction schematic diagram of segmentation and effect assembly in the gift special effect rendering method of the embodiment, and the method includes the following steps:
step S402, the live broadcast platform client sends the target gift identification to the live broadcast server, and the live broadcast server obtains gift special effect information to be rendered based on the target gift identification;
step S404, the live broadcast server obtains the gift special effect information to be rendered and forwards the information to the terminal equipment;
step S406, the terminal device generates special effect segmentation information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, and sends the special effect segmentation information to the live broadcast server;
in the above embodiment, the number of segments in the special effect segment information is two.
And step S408, the live broadcast server carries out segmentation processing on the target gift based on the special effect segmentation information to obtain segmented special effect data, and the segmented special effect data is returned to the terminal equipment.
After the live broadcast server obtains the segmented special effect data, the rendering of the gift special effect is completed under the cooperation of the terminal equipment. For convenience of understanding, fig. 5 provides a multi-end interaction diagram for completing the rendering of the gift special effect in the method for rendering the gift special effect according to the embodiment, and includes the following steps:
step S502, rendering and displaying first special effect data in a first virtual scene based on the first special effect data of the segmented special effect data in the terminal equipment;
in one example, the target object in the first special effect data is a little bear, and the first virtual scene is rendered in such a way that the little bear is seen in the virtual live broadcast room poking its head out from the window of the scene, walking to the anchor's side, hugging the anchor and taking a group photo with the anchor.
Step S504, after the first special-effect data is displayed, the anchor side sends information data to a live broadcast server;
step S506, the live broadcast server receives the information data from the anchor terminal, and determines whether the specified condition is triggered or not based on the information data; if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment;
in this case, the specified condition is that the anchor makes a hanging gesture; once the gesture is made, the second-segment special effect is triggered and the photo is hung on the photo wall.
Step S508, the terminal device receives the condition identifier, controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and renders second special effect data in the general scene of the target gift.
In this case, the conversion is performed by moving the lens of the virtual camera from the first virtual scene of the original anchor live broadcast to the general scene; since the lens sweeps from right to left, the general scene needs to be seamlessly connected to the left side of the original scene.
Corresponding to the above method embodiment, referring to fig. 6, a schematic diagram of a device for rendering a gift special effect is shown, where the device is configured to operate with a terminal device; the device includes:
a first obtaining module 602, configured to construct a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift;
a first display module 604, configured to render and display the first special effect data in the first virtual scene;
and a second display module 606, configured to, in response to a specified condition being triggered, control the virtual live broadcast room to switch from the first virtual scene to a general scene of the target gift, and render and display the second special effect data in the general scene.
The above gift special effect rendering apparatus is applied to a terminal device and is used for: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift, wherein the segmented special effect data comprises first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift; rendering and displaying the first special effect data in the first virtual scene; and in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene. In this manner, the leading gift special effect is rendered and displayed in the first virtual scene based on the acquired segmented special effect data of the target gift; after the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second round of gift effect rendering and presentation is carried out in the general scene. In this way, the rendering of the gift special effect and the live content in the live broadcast room are fused with each other, the gift effect is prevented from blocking the display of the live content, and the audience's sense of immersion when watching the live broadcast is improved.
The first obtaining module is further configured to obtain gift special effect information of the target gift; and generating special effect sectional information of the target gift based on the gift special effect information and a preset special effect pre-rendering template so as to enable the live broadcast server to obtain sectional special effect data of the target gift based on the special effect sectional information and return the sectional special effect data to the terminal equipment.
The first obtaining module is further configured to determine a time period to which the gift special effect duration belongs based on a special effect pre-rendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the lens information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The lens information includes: the lens switching times, the single-lens shortest duration and the single-lens longest duration; the first obtaining module is further configured to perform weighted addition on the lens switching times, the single-lens shortest duration and the single-lens longest duration based on preset weighting parameters to obtain the gift effect deduction rhythm score of the target gift.
The first obtaining module is further configured to determine a number of segments corresponding to the gift effect deduction rhythm data based on the special effect pre-rendering template; and determining a segmentation timestamp based on the number of segments, the shortest time length of a single lens and the longest time length of the single lens in the lens information, and generating special effect segmentation information of the target gift.
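The rhythm score and the segmentation timestamps described by these two modules can be sketched as follows; the weight values and the template fields (segments_by_score, total_duration) are assumptions, since the patent only states that a weighted addition and a template lookup are used.

def deduction_rhythm_score(cut_count: int, min_shot: float, max_shot: float,
                           weights: tuple = (1.0, 0.5, 0.5)) -> float:
    """Weighted addition of the lens switching times and the shortest/longest single-lens durations."""
    w_cut, w_min, w_max = weights
    return w_cut * cut_count + w_min * min_shot + w_max * max_shot

def build_segment_info(score: float, min_shot: float, max_shot: float, template: dict) -> dict:
    """Map the rhythm score to a segment count via the template, then lay out
    segmentation timestamps bounded by the single-lens durations."""
    # segments_by_score: ordered list of (score_threshold, segment_count) pairs
    segments = next((n for threshold, n in template["segments_by_score"] if score <= threshold),
                    template["segments_by_score"][-1][1])
    # keep each segment between the shortest and longest single-lens duration
    seg_len = min(max_shot, max(min_shot, template["total_duration"] / segments))
    timestamps = [round(i * seg_len, 2) for i in range(1, segments)]
    return {"segments": segments, "timestamps": timestamps}

For instance, with a segment count of two and a ten-second total duration, this sketch would yield a single segmentation timestamp near the middle of the effect.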
The apparatus further includes a first determining module, configured to determine a specified condition based on the special effect pre-rendering template and the special effect style of the target gift, where the specified condition includes: in the segmented special effect data, the trigger condition of the special effect data of the specified segment; the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room executes a preset gesture, the virtual live broadcast room receives the specified information or completes the specified task, and the user side sending the gift sending instruction executes the specified behavior.
The apparatus further comprises a second obtaining module, configured to obtain the gift special effect information of the target gift; determine scene configuration information of the target gift based on the gift special effect information, wherein the scene configuration information includes the special effect style, the special effect playing duration and the scene style of the first virtual scene; and render a preset scene basic model and illumination information based on the scene configuration information to obtain pre-rendering data of the general scene of the target gift, wherein the pre-rendering data is used to display the general scene.
The second obtaining module is further configured to obtain a gift identifier of the target gift from the gift special effect information, and determine a special effect style of the target gift based on the gift identifier; obtaining gift special effect time length of the target gift from the gift special effect information, and determining special effect playing time length of the target gift based on the gift special effect time length and a preset special effect pre-rendering template; the method comprises the steps of obtaining live broadcast room information of a virtual live broadcast room, extracting scene identification of a first virtual scene from the live broadcast room information, and determining scene style of the first virtual scene based on the scene identification.
The apparatus further comprises a first deleting module, configured to store, in the terminal device, pre-rendering data of the general scenes of a plurality of alternative gifts in the virtual live broadcast room, wherein the alternative gifts are gifts supported by the virtual live broadcast room and include the target gift; and to receive the deliverable gifts supported by the specified user side and delete, in the terminal device, the pre-rendering data of the general scenes of gifts other than the deliverable gifts.
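For illustration, the pruning performed by this module can be reduced to a single dictionary filter; the data layout is an assumption, not part of the patent.

def prune_prerendered_scenes(prerendered: dict, deliverable_gifts: set) -> dict:
    """Keep only the pre-rendering data of general scenes for gifts that the
    specified user side can actually send; drop the rest to free resources."""
    return {gift_id: data for gift_id, data in prerendered.items() if gift_id in deliverable_gifts}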
The first display module is further configured to render and display a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to execute preset interactive operation to obtain an interactive result.
The second display module is further configured to splice the first virtual scene and the general scene in response to a specified condition being triggered; controlling the virtual camera to move so that a general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
The second display module is further configured to render and display an interaction result corresponding to the first special effect data in the general scene; and responding to the specified operation of the anchor object in the virtual live broadcast room for the interaction result, and displaying the operation result of the specified operation in the general scene.
Corresponding to the above method embodiment, refer to a schematic diagram of a gift special effect rendering device shown in fig. 7, which is disposed in a live broadcast server; the device comprises:
an information returning module 702, configured to receive a gift sending instruction for a target gift, obtain gift special effect information of the target gift, send the gift special effect information to a terminal device, generate special effect segment information of the target gift based on the gift special effect information through the terminal device, and return the special effect segment information to a live broadcast server;
a data obtaining module 704, configured to obtain segmented special effect data of the target gift based on the special effect segmentation information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift;
a data display module 706, configured to return the segmented special effect data to the terminal device, so as to render and display the first special effect data in the first virtual scene through the terminal device; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
The gift special effect rendering apparatus receives a gift sending instruction for a target gift, obtains gift special effect information of the target gift, sends the gift special effect information to the terminal device, generates special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and returns the special effect segmentation information to the live broadcast server; obtains segmented special effect data of the target gift based on the special effect segmentation information, wherein the segmented special effect data comprises first special effect data rendered in a first virtual scene and second special effect data rendered in a general scene of the target gift; returns the segmented special effect data to the terminal device so as to render and display the first special effect data in the first virtual scene through the terminal device; and in response to the specified condition being triggered, controls the virtual live broadcast room to switch from the first virtual scene to the general scene of the target gift and renders and displays the second special effect data in the general scene. In this manner, the live broadcast server and the terminal device interact on special effect information multiple times to render and present the gift effect; through this information interaction process, the rendering of the gift special effect and the live content in the live broadcast room are fused with each other, the gift effect is prevented from blocking the display of the live content, and the audience's sense of immersion when watching the live broadcast is improved.
The apparatus further comprises an information providing module, configured to acquire live broadcast room information of the virtual live broadcast room in the broadcasting state, wherein the live broadcast room information at least comprises the scene identifier of the first virtual scene and the gift identifiers of the gifts supported by the virtual live broadcast room, and to provide the live broadcast room information to the terminal device.
The apparatus further comprises a gift providing module, configured to obtain the operation of the specified user side calling out the gift panel, obtain the deliverable gifts supported by the specified user side from the gift panel, and provide the deliverable gifts to the terminal device.
The device further comprises a condition triggering module, wherein the condition triggering module is used for receiving the information data from the anchor terminal after the first special effect data is displayed, determining whether the specified condition is triggered or not based on the information data, and sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered if the specified condition is triggered.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the gift special effect rendering method.
The embodiment also provides a live broadcast server, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the gift special effect rendering method.
Referring to fig. 8, the electronic device or the live broadcast server includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the method for rendering the gift special effect.
Further, the electronic device shown in fig. 8 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The Memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like may be used. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in Fig. 8, but that does not indicate only one bus or one type of bus.
Processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 100. The Processor 100 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiment in combination with its hardware.
The processor in the electronic device may implement the following operations of the gift special effect rendering method by executing machine executable instructions: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift; rendering and displaying first special effect data in a first virtual scene; and in response to a specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to a general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, based on the segmented special effect data of the target gift, the leading gift special effect is rendered in the first virtual scene; after the specified condition is triggered, the virtual live broadcast room is controlled to switch from the first virtual scene to the general scene of the target gift, and the second round of gift effect rendering and presentation is carried out in the general scene.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: obtaining gift special effect information of a target gift; and generating special effect sectional information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, so that the live broadcast server obtains sectional special effect data of the target gift based on the special effect sectional information, and returning the sectional special effect data to the terminal equipment.
The processor in the electronic device may implement the following operations of the gift special effect rendering method by executing machine executable instructions: the gift special effect information at least comprises gift special effect duration and special effect image data; determining the time period of the gift special effect duration based on the special effect pre-rendering template; extracting lens information from the special effect image data based on the time period to which the lens information belongs; determining gift effect deduction rhythm data of the target gift based on the lens information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: the lens information includes: the number of times of lens switching, the shortest duration of a single lens and the longest duration of the single lens; and based on preset weighting parameters, carrying out weighted addition on the lens switching times, the single-lens shortest duration and the single-lens longest duration to obtain the gift effect deduction rhythm score of the target gift.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: determining the number of segments corresponding to the gift effect deduction rhythm data based on the special effect pre-rendering template; and determining a segmentation timestamp based on the number of segments, the shortest duration of a single lens and the longest duration of the single lens in the lens information, and generating special effect segmentation information of the target gift.
The processor in the electronic device may implement the following operations of the gift special effect rendering method by executing machine executable instructions: determining a specified condition based on the special effect pre-rendering template and the special effect style of the target gift; wherein the specified condition includes: in the segmented special effect data, the trigger condition of the special effect data of the specified segment; the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room executes a preset gesture, the virtual live broadcast room receives the specified information or completes the specified task, and the user side sending the gift sending instruction executes the specified behavior.
The processor in the electronic device may implement the following operations of the gift special effect rendering method by executing machine executable instructions: obtaining gift special effect information of a target gift; determining scene configuration information of the target gift based on the gift special effect information; wherein the scene configuration information includes: the method comprises the following steps of (1) obtaining a special effect style, a special effect playing time length and a scene style of a first virtual scene; rendering a preset scene basic model and illumination information based on the scene configuration information to obtain pre-rendering data of a general scene of the target gift; wherein the pre-rendering data is used to display a generic scene.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: the method comprises the steps of obtaining a gift identification of a target gift from gift special effect information, and determining a special effect style of the target gift based on the gift identification; obtaining gift special effect time length of the target gift from the gift special effect information, and determining special effect playing time length of the target gift based on the gift special effect time length and a preset special effect pre-rendering template; the method comprises the steps of obtaining live broadcast room information of a virtual live broadcast room, extracting scene identification of a first virtual scene from the live broadcast room information, and determining scene style of the first virtual scene based on the scene identification.
The processor in the electronic device may implement the following operations of the gift special effect rendering method by executing machine executable instructions: storing, in the terminal device, pre-rendering data of the general scenes of a plurality of alternative gifts in the virtual live broadcast room; wherein the alternative gifts are gifts supported by the virtual live broadcast room; the alternative gifts include the target gift; and receiving the deliverable gifts supported by the specified user side, and deleting, in the terminal device, the pre-rendering data of the general scenes of gifts other than the deliverable gifts.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object and a main broadcasting object in the virtual live broadcasting room to execute preset interactive operation to obtain an interactive result.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: splicing the first virtual scene and the general scene in response to the triggering of a specified condition; controlling the virtual camera to move so that a general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
The processor in the electronic device may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the specified operation of the anchor object in the virtual live broadcast room for the interaction result, and displaying the operation result of the specified operation in the general scene.
The processor in the live broadcast server may implement the following operations of the gift special effect rendering method by executing machine executable instructions: receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect sectional information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect sectional information to a live broadcast server; obtaining segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene, and second effect data rendered in a generic scene of a target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In the mode, the live broadcast server side and the terminal equipment perform interaction of special effect information for many times, the gift effect is rendered and presented, the gift special effect is rendered and live broadcast contents in a live broadcast room are fused with each other through the information interaction process, the display that the gift effect covers the live broadcast contents is avoided, and the immersion sense that audiences watch live broadcasts is improved.
The processor in the live broadcast server may implement the following operations of the method for rendering a gift special effect by executing machine executable instructions: acquiring live broadcast room information of a virtual live broadcast room in a broadcasting state; the live broadcast room information at least comprises a scene identifier of a first virtual scene and a gift identifier of a gift supported by a virtual live broadcast room; and providing live room information to the terminal equipment.
The processor in the live broadcast server may implement the following operations of the gift special effect rendering method by executing machine executable instructions: and acquiring the operation of calling out the gift panel by the appointed user side, acquiring the deliverable gifts supported by the appointed user side from the gift panel, and providing the deliverable gifts to the terminal equipment.
The processor in the live broadcast server may implement the following operations of the gift special effect rendering method by executing machine executable instructions: and after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered or not based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal equipment to indicate that the specified condition is triggered.
The present embodiments also provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a method of rendering a gift special effect as described above.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift; rendering and displaying first special effect data in a first virtual scene; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In the method, the primary gift special effect is rendered in the first virtual scene through the segmented special effect data of the target gift, after the specified condition is triggered, the virtual live broadcast room is controlled to be converted from the first virtual scene to the general scene of the target gift, and the secondary gift effect rendering and presentation are carried out in the general scene.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: obtaining gift special effect information of a target gift; and based on the gift special effect information and a preset special effect pre-rendering template, generating special effect sectional information of the target gift so that the live broadcast server can obtain sectional special effect data of the target gift based on the special effect sectional information and return the sectional special effect data to the terminal equipment.
The machine-readable storage medium stores machine-executable instructions, and by executing the machine-executable instructions, the following operations in the gift special effect rendering method can be realized: the gift special effect information at least comprises gift special effect duration and special effect image data; determining the time period of the gift special effect duration based on the special effect pre-rendering template; extracting lens information from the special effect image data based on the belonging time period; determining gift effect deduction rhythm data of the target gift based on the lens information; and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect prerendering template.
The machine-readable storage medium stores machine-executable instructions, and by executing the machine-executable instructions, the following operations in the gift special effect rendering method can be realized: the lens information includes: the lens switching times, the single lens shortest duration and the single lens longest duration; and based on preset weighting parameters, carrying out weighted addition on the lens switching times, the single-lens shortest duration and the single-lens longest duration to obtain the gift effect deduction rhythm score of the target gift.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: determining the number of segments corresponding to the gift effect deduction rhythm data based on the special effect pre-rendering template; and determining a segmentation timestamp based on the number of segments, the shortest time length of a single lens and the longest time length of the single lens in the lens information, and generating special effect segmentation information of the target gift.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the gift special effect rendering method: determining a specified condition based on the special effect pre-rendering template and the special effect style of the target gift; wherein the specified condition includes: in the segmented special effect data, the trigger condition of the special effect data of the specified segment; the trigger condition includes one or more of the following: the anchor object in the virtual live broadcast room executes a preset gesture, the virtual live broadcast room receives the specified information or completes the specified task, and the user side sending the gift sending instruction executes the specified behavior.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: obtaining gift special effect information of a target gift; determining scene configuration information of the target gift based on the gift special effect information; wherein the scene configuration information includes: the method comprises the following steps of (1) obtaining a special effect style, a special effect playing time length and a scene style of a first virtual scene; rendering a preset scene basic model and illumination information based on the scene configuration information to obtain pre-rendering data of a general scene of the target gift; wherein the pre-rendering data is used to display a generic scene.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: obtaining a gift identification of the target gift from the gift special effect information, and determining a special effect style of the target gift based on the gift identification; obtaining gift special effect time length of the target gift from the gift special effect information, and determining special effect playing time length of the target gift based on the gift special effect time length and a preset special effect pre-rendering template; the method comprises the steps of obtaining live broadcast room information of a virtual live broadcast room, extracting scene identification of a first virtual scene from the live broadcast room information, and determining scene style of the first virtual scene based on the scene identification.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the gift special effect rendering method: storing, in the terminal device, pre-rendering data of the general scenes of a plurality of alternative gifts in the virtual live broadcast room; wherein the alternative gifts are gifts supported by the virtual live broadcast room; the alternative gifts include the target gift; and receiving the deliverable gifts supported by the specified user side, and deleting, in the terminal device, the pre-rendering data of the general scenes of gifts other than the deliverable gifts.
The machine-readable storage medium stores machine-executable instructions, and by executing the machine-executable instructions, the following operations in the gift special effect rendering method can be realized: rendering and displaying a target object corresponding to the target gift in the first virtual scene; and controlling the target object to move in the first virtual scene, and controlling the target object and a main broadcasting object in the virtual live broadcasting room to execute preset interactive operation to obtain an interactive result.
The machine-readable storage medium stores machine-executable instructions, and by executing the machine-executable instructions, the following operations in the gift special effect rendering method can be realized: splicing the first virtual scene and the general scene in response to the triggering of a specified condition; controlling the virtual camera to move so that a general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect in the moving process of the virtual camera; and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
The machine-readable storage medium stores machine-executable instructions, and by executing the machine-executable instructions, the following operations in the gift special effect rendering method can be realized: rendering and displaying an interaction result corresponding to the first special effect data in the general scene; and responding to the specified operation of the anchor object in the virtual live broadcast room for the interaction result, and displaying the operation result of the specified operation in the general scene.
The machine-readable storage medium stores machine-executable instructions, which, when executed, implement the following operations in the method for rendering a gift special effect: receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to terminal equipment, generating special effect sectional information of the target gift based on the gift special effect information through the terminal equipment, and returning the special effect sectional information to a live broadcast server; obtaining segmented special effect data of the target gift based on the special effect segmented information; wherein the segmented special effect data comprises: first effect data rendered in a first virtual scene and second effect data rendered in a generic scene of a target gift; returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; and in response to the specified condition being triggered, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
In this manner, the live broadcast server and the terminal device exchange special effect information several times, so that through this information interaction the rendering and presentation of the gift effect are fused with the live content in the live broadcast room; this avoids the gift effect occluding the display of the live content and enhances the viewer's sense of immersion when watching the live broadcast.
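A rough sketch of that multi-round exchange, with the server and terminal reduced to two plain functions passing dictionaries (the field names and the segment count are invented for illustration, not taken from the patent):

```python
# Hedged sketch of the server/terminal information interaction; not an actual protocol.

def terminal_generate_segmentation(gift_effect_info):
    # Terminal device: derive special effect segmentation information from the gift
    # special effect information (the derivation itself is elided in this sketch).
    return {"gift_id": gift_effect_info["gift_id"], "segments": 2}


def server_handle_gift_instruction(gift_id):
    # 1. The server receives the gift sending instruction and looks up effect info.
    gift_effect_info = {"gift_id": gift_id, "duration_s": 12}
    # 2. It sends the info to the terminal, which returns segmentation information.
    segmentation_info = terminal_generate_segmentation(gift_effect_info)
    # 3. It builds segmented special effect data from the segmentation information.
    segmented_data = {
        "first_special_effect_data": f"{gift_id}_in_first_scene",
        "second_special_effect_data": f"{gift_id}_in_general_scene",
        "segments": segmentation_info["segments"],
    }
    # 4. The segmented data goes back to the terminal for rendering.
    return segmented_data


print(server_handle_gift_instruction("gift_rocket"))
```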
The machine-readable storage medium stores machine-executable instructions which, when executed, implement the following operations in the gift special effect rendering method: acquiring live broadcast room information of the virtual live broadcast room in a broadcasting state, wherein the live broadcast room information at least comprises a scene identifier of the first virtual scene and gift identifiers of the gifts supported by the virtual live broadcast room; and providing the live broadcast room information to the terminal device.
The machine-readable storage medium stores machine-executable instructions which, when executed, implement the following operations in the gift special effect rendering method: acquiring an operation of the specified user side calling out the gift panel, acquiring the deliverable gifts supported by the specified user side from the gift panel, and providing the deliverable gifts to the terminal device.
The machine-readable storage medium stores machine-executable instructions which, when executed, implement the following operations in the gift special effect rendering method: after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal device to indicate that the specified condition has been triggered.
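For illustration, the server-side trigger check could be as simple as the following; the condition identifiers and the shape of the anchor-side information data are assumptions:

```python
# Minimal sketch: decide from anchor-side information data whether a specified
# condition has been triggered and, if so, notify the terminal device.

SPECIFIED_CONDITIONS = {
    "anchor_preset_gesture": lambda data: data.get("gesture") == "preset_gesture",
    "specified_task_completed": lambda data: data.get("task_done", False),
}


def check_and_notify(info_data, send_to_terminal):
    for condition_id, predicate in SPECIFIED_CONDITIONS.items():
        if predicate(info_data):
            # Send the condition identifier to indicate the condition is triggered.
            send_to_terminal({"condition_id": condition_id})
            return condition_id
    return None


check_and_notify({"gesture": "preset_gesture"}, send_to_terminal=print)
```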
The gift special effect rendering method, apparatus, electronic device and live broadcast server provided by the embodiments of the present invention include a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, reference may be made to the method embodiments, which are not described again here.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly; for example, a connection may be a fixed connection, a removable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes to them, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not make the corresponding technical solutions depart from the spirit and scope of the embodiments of the present invention, and shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (21)

1. A method for rendering a gift special effect, characterized in that the method is applied to a terminal device; the method comprises the following steps:
constructing a first virtual scene of a virtual live broadcast room; acquiring segmented special effect data of a target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene, and second special effect data rendered in a general scene of the target gift;
rendering and displaying the first special effect data in the first virtual scene;
in response to a specified condition being triggered, controlling the virtual live broadcast room to transition from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
2. The method of claim 1, wherein the step of obtaining the segmented special effect data of the target gift comprises:
obtaining gift special effect information of the target gift;
and generating special effect segmentation information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, so that a live broadcast server obtains segmented special effect data of the target gift based on the special effect segmentation information and returns the segmented special effect data to the terminal device.
3. The method of claim 2, wherein the gift special effect information includes at least a gift special effect duration and special effect image data;
the step of generating the special effect segmentation information of the target gift based on the gift special effect information and a preset special effect pre-rendering template includes:
determining a time period to which the gift special effect duration belongs based on the special effect pre-rendering template; extracting shot information from the special effect image data based on the time period;
determining gift effect deduction rhythm data of the target gift based on the shot information;
and generating special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template.
4. The method of claim 3, wherein the shot information comprises: shot switching times, a single-shot shortest duration and a single-shot longest duration;
the step of determining the gift effect deduction rhythm data of the target gift based on the shot information comprises:
and based on preset weighting parameters, carrying out weighted addition on the shot switching times, the single-shot shortest duration and the single-shot longest duration to obtain the gift effect deduction rhythm score of the target gift.
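As a worked example of the weighted addition in claim 4 (the weight values below are arbitrary placeholders; the claim only states that preset weighting parameters are used):

```python
# Hypothetical rhythm score: weighted addition of the three shot statistics.

def rhythm_score(shot_switching_times, single_shot_shortest_s, single_shot_longest_s,
                 w_switch=0.5, w_shortest=0.3, w_longest=0.2):
    return (w_switch * shot_switching_times
            + w_shortest * single_shot_shortest_s
            + w_longest * single_shot_longest_s)


print(rhythm_score(8, 0.5, 3.0))   # 0.5*8 + 0.3*0.5 + 0.2*3.0 = 4.75
```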
5. The method of claim 3, wherein the step of generating the special effect segmentation information of the target gift based on the gift effect deduction rhythm data and the special effect pre-rendering template comprises:
determining the number of the segments corresponding to the gift effect deduction rhythm data based on the special effect pre-rendering template;
and determining the segmentation timestamp based on the segmentation number, the shortest single-shot duration and the longest single-shot duration in the shot information, and generating special effect segmentation information of the target gift.
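One possible reading of claim 5 is sketched below; the even-spacing rule is an assumption, since the claim does not fix how the timestamps are placed between the single-shot shortest and longest durations:

```python
# Hedged sketch: derive segmentation timestamps from the segment number and the
# single-shot shortest/longest durations.

def segmentation_timestamps(segment_number, single_shot_shortest_s, single_shot_longest_s):
    # Pick one segment length inside [shortest, longest] and emit cumulative cut points.
    segment_length = (single_shot_shortest_s + single_shot_longest_s) / 2
    return [round(i * segment_length, 3) for i in range(1, segment_number)]


print(segmentation_timestamps(3, 0.5, 3.0))   # [1.75, 3.5] for a three-segment effect
```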
6. The method of claim 2, wherein after the step of generating the special effect segmentation information of the target gift based on the gift special effect information and a preset special effect pre-rendering template, the method further comprises:
determining the specified condition based on the special effect pre-rendering template and a special effect style of the target gift;
wherein the specified condition comprises: a trigger condition of the special effect data of a specified segment in the segmented special effect data; the trigger condition comprises one or more of: the anchor object in the virtual live broadcast room performing a preset gesture, the virtual live broadcast room receiving specified information, the virtual live broadcast room completing a specified task, and the user side that sends the gift sending instruction performing a specified behavior.
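Read as plain data, the specified condition of claim 6 might be represented roughly as follows (the field names are invented for illustration):

```python
# Illustrative data structure for a specified condition gating one segment of the
# segmented special effect data; any of the listed triggers satisfies it.

specified_condition = {
    "segment_index": 1,
    "triggers": [
        {"type": "anchor_performs_preset_gesture"},
        {"type": "room_receives_specified_information"},
        {"type": "room_completes_specified_task"},
        {"type": "gifting_user_performs_specified_behavior"},
    ],
}


def is_triggered(condition, event):
    return any(event.get("type") == trigger["type"] for trigger in condition["triggers"])


print(is_triggered(specified_condition, {"type": "room_completes_specified_task"}))  # True
```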
7. The method of claim 1, wherein before the step of obtaining the segmented special effect data of the target gift, the method further comprises:
obtaining gift special effect information of the target gift;
determining scene configuration information of the target gift based on the gift special effect information; wherein the scene configuration information comprises: a special effect style, a special effect playing duration and a scene style of the first virtual scene;
rendering a preset scene base model and lighting information based on the scene configuration information to obtain pre-rendering data of a general scene of the target gift; wherein the pre-rendering data is used to display the general scene.
8. The method of claim 7, wherein determining scene configuration information for the target gift based on the gift special effect information comprises:
obtaining a gift identification of the target gift from the gift special effect information, and determining a special effect style of the target gift based on the gift identification;
obtaining a gift special effect duration of the target gift from the gift special effect information, and determining the special effect playing duration of the target gift based on the gift special effect duration and a preset special effect pre-rendering template;
acquiring live broadcast room information of the virtual live broadcast room, extracting a scene identifier of the first virtual scene from the live broadcast room information, and determining a scene style of the first virtual scene based on the scene identifier.
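Claim 8's mapping from gift special effect information and live broadcast room information to the scene configuration of claim 7 can be sketched as a pair of lookups plus a rounding step; the lookup tables and the template slot values are assumptions for illustration:

```python
# Hedged sketch of deriving scene configuration information (claim 8).

EFFECT_STYLE_BY_GIFT = {"gift_rocket": "launch", "gift_castle": "build_up"}
SCENE_STYLE_BY_SCENE = {"scene_stage_01": "concert_stage"}
TEMPLATE_SLOTS_S = [5, 10, 20]   # playing durations allowed by the pre-rendering template


def scene_configuration(gift_effect_info, live_room_info):
    gift_id = gift_effect_info["gift_id"]
    duration = gift_effect_info["duration_s"]
    # Round the gift special effect duration up to the nearest template slot.
    playing_duration = next((s for s in TEMPLATE_SLOTS_S if s >= duration), TEMPLATE_SLOTS_S[-1])
    return {
        "special_effect_style": EFFECT_STYLE_BY_GIFT[gift_id],
        "special_effect_playing_duration_s": playing_duration,
        "scene_style": SCENE_STYLE_BY_SCENE[live_room_info["scene_id"]],
    }


print(scene_configuration({"gift_id": "gift_rocket", "duration_s": 12},
                          {"scene_id": "scene_stage_01"}))
```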
9. The method of claim 7, wherein after the step of rendering the preset scene base model and the lighting information based on the scene configuration information to obtain the pre-rendering data of the general scene of the target gift, the method further comprises:
saving, in the terminal device, pre-rendering data of general scenes of a plurality of alternative gifts in the virtual live broadcast room; wherein the alternative gifts are gifts supported by the virtual live broadcast room; the alternative gifts comprise the target gift;
receiving a deliverable gift supported by a specified user side, and deleting, in the terminal device, the pre-rendering data of the general scenes of gifts other than the deliverable gift.
10. The method of claim 1, wherein the step of rendering and displaying the first special effect data in the first virtual scene comprises:
rendering and displaying a target object corresponding to the target gift in the first virtual scene;
and controlling the target object to move in the first virtual scene, and controlling the target object and the anchor object in the virtual live broadcast room to perform a preset interactive operation to obtain an interaction result.
11. The method of claim 1, wherein the step of, in response to a specified condition being triggered, controlling the virtual live broadcast room to transition from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene comprises:
splicing the first virtual scene and the general scene in response to a specified condition being triggered;
controlling a virtual camera to move so that the general scene is displayed in the virtual live broadcast room; rendering and displaying a preset scene linking special effect during the movement of the virtual camera;
and rendering and displaying the second special effect data in the general scene after the general scene is displayed.
12. The method according to claim 1 or 11, wherein the step of rendering and displaying the second special effect data in the general scene comprises:
rendering and displaying an interaction result corresponding to the first special effect data in the general scene;
and in response to a specified operation performed by the anchor object in the virtual live broadcast room on the interaction result, displaying the operation result of the specified operation in the general scene.
13. A method for rendering a gift special effect, characterized in that the method is applied to a live broadcast server; the method comprises the following steps:
receiving a gift sending instruction aiming at a target gift, acquiring gift special effect information of the target gift, sending the gift special effect information to a terminal device, generating special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and returning the special effect segmentation information to the live broadcast server;
obtaining segmented special effect data of the target gift based on the special effect segmentation information; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift;
returning the segmented special effect data to the terminal device so as to render and display the first special effect data in the first virtual scene through the terminal device; in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to a general scene of the target gift, rendering and displaying the second special effect data in the general scene.
14. The method of claim 13, wherein prior to the steps of receiving a gift sending instruction for a target gift, obtaining gift special effect information of the target gift, and sending the gift special effect information to a terminal device, the method further comprises:
acquiring live broadcast room information of the virtual live broadcast room in a broadcasting state; wherein the live broadcast room information at least comprises a scene identification of the first virtual scene and a gift identification of a gift supported by the virtual live broadcast room;
and providing the live broadcast room information to the terminal equipment.
15. The method of claim 13, wherein prior to the steps of receiving a gift sending instruction for a target gift, obtaining gift special effect information of the target gift, and sending the gift special effect information to a terminal device, the method further comprises:
the method comprises the steps of obtaining operation of a given user terminal to call a gift panel, obtaining a deliverable gift supported by the given user terminal from the gift panel, and providing the deliverable gift to the terminal device.
16. The method of claim 13, further comprising:
after the first special effect data is displayed, receiving information data from the anchor terminal, determining whether the specified condition is triggered based on the information data, and if the specified condition is triggered, sending a condition identifier of the specified condition to the terminal device to indicate that the specified condition has been triggered.
17. An apparatus for rendering a gift special effect, characterized in that the apparatus is provided in a terminal device, the apparatus comprising:
a first acquisition module, configured to construct a first virtual scene of a virtual live broadcast room and acquire segmented special effect data of a target gift; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift;
a first display module to render and display the first special effect data in the first virtual scene;
and the second display module is used for responding to the triggering of a specified condition, controlling the virtual live broadcast room to be converted from the first virtual scene to the general scene of the target gift, and rendering and displaying the second special effect data in the general scene.
18. An apparatus for rendering a gift special effect, the apparatus being provided in a live broadcast server, the apparatus comprising:
an information return module, configured to receive a gift sending instruction for a target gift, acquire gift special effect information of the target gift, send the gift special effect information to a terminal device, generate special effect segmentation information of the target gift based on the gift special effect information through the terminal device, and return the special effect segmentation information to the live broadcast server;
a data obtaining module, configured to obtain segmented special effect data of the target gift based on the special effect segmentation information; wherein the segmented special effect data comprises: first special effect data rendered in the first virtual scene and second special effect data rendered in a general scene of the target gift;
the data display module is used for returning the segmented special effect data to the terminal equipment so as to render and display the first special effect data in the first virtual scene through the terminal equipment; in response to a specified condition being triggered, controlling the virtual live broadcast room to switch from the first virtual scene to a general scene of the target gift, rendering and displaying the second special effect data in the general scene.
19. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of rendering a gift special effect of any one of claims 1-12.
20. A live server comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement a method of rendering a gift special effect of any one of claims 13-16.
21. A machine readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of rendering a gift special effect recited in any one of claims 1-12 or the method of rendering a gift special effect recited in any one of claims 13-16.
CN202210653955.1A 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server Active CN115225923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210653955.1A CN115225923B (en) 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210653955.1A CN115225923B (en) 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server

Publications (2)

Publication Number Publication Date
CN115225923A (en) 2022-10-21
CN115225923B CN115225923B (en) 2024-03-22

Family

ID=83608157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210653955.1A Active CN115225923B (en) 2022-06-09 2022-06-09 Method and device for rendering gift special effects, electronic equipment and live broadcast server

Country Status (1)

Country Link
CN (1) CN115225923B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109218796A (en) * 2017-06-30 2019-01-15 武汉斗鱼网络科技有限公司 A kind of method and apparatus showing virtual present special efficacy
CN111225231A (en) * 2020-02-25 2020-06-02 广州华多网络科技有限公司 Virtual gift display method, device, equipment and storage medium
CN111314730A (en) * 2020-02-25 2020-06-19 广州华多网络科技有限公司 Virtual resource searching method, device, equipment and storage medium for live video
CN111277854A (en) * 2020-03-04 2020-06-12 网易(杭州)网络有限公司 Display method and device of virtual live broadcast room, electronic equipment and storage medium
CN112533002A (en) * 2020-11-17 2021-03-19 南京邮电大学 Dynamic image fusion method and system for VR panoramic live broadcast
CN113395533A (en) * 2021-05-24 2021-09-14 广州博冠信息科技有限公司 Virtual gift special effect display method and device, computer equipment and storage medium
CN113329234A (en) * 2021-05-28 2021-08-31 腾讯科技(深圳)有限公司 Live broadcast interaction method and related equipment
CN113840156A (en) * 2021-09-22 2021-12-24 广州方硅信息技术有限公司 Live broadcast interaction method and device based on virtual gift and computer equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116456131A (en) * 2023-03-13 2023-07-18 北京达佳互联信息技术有限公司 Special effect rendering method and device, electronic equipment and storage medium
CN116456131B (en) * 2023-03-13 2023-12-19 北京达佳互联信息技术有限公司 Special effect rendering method and device, electronic equipment and storage medium
CN117119259A (en) * 2023-09-07 2023-11-24 北京优贝在线网络科技有限公司 Scene analysis-based special effect self-synthesis system
CN117119259B (en) * 2023-09-07 2024-03-08 北京优贝在线网络科技有限公司 Scene analysis-based special effect self-synthesis system

Also Published As

Publication number Publication date
CN115225923B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN115225923A (en) Gift special effect rendering method and device, electronic equipment and live broadcast server
WO2018072652A1 (en) Video processing method, video processing device, and storage medium
KR102118000B1 (en) Target target display method and device
CN109688451B (en) Method and system for providing camera effect
CN113905251A (en) Virtual object control method and device, electronic equipment and readable storage medium
CN111246232A (en) Live broadcast interaction method and device, electronic equipment and storage medium
TW202304212A (en) Live broadcast method, system, computer equipment and computer readable storage medium
US20170236329A1 (en) System and method to integrate content in real time into a dynamic real-time 3-dimensional scene
CN113840049A (en) Image processing method, video flow scene switching method, device, equipment and medium
CN110176077A (en) The method, apparatus and computer storage medium that augmented reality is taken pictures
CN112927349B (en) Three-dimensional virtual special effect generation method and device, computer equipment and storage medium
CN106412711B (en) Barrage control method and device
US20240062254A1 (en) Media collection navigation with opt-out interstitial
CN108134945B (en) AR service processing method, AR service processing device and terminal
CN113473207B (en) Live broadcast method and device, storage medium and electronic equipment
CN114390193B (en) Image processing method, device, electronic equipment and storage medium
CN113112614A (en) Interaction method and device based on augmented reality
TW202303526A (en) Special effect display method, computer equipment and computer-readable storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
JP5776471B2 (en) Image display system
KR102200239B1 (en) Real-time computer graphics video broadcasting service system
CN112150602A (en) Model image rendering method and device, storage medium and electronic equipment
JP5850188B2 (en) Image display system
CN106127858B (en) Information processing method and electronic equipment
CN112887623B (en) Image generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant