CN110493630B - Processing method and device for special effect of virtual gift and live broadcast system - Google Patents

Processing method and device for special effect of virtual gift and live broadcast system

Info

Publication number
CN110493630B
CN110493630B (application CN201910859930.5A; published as CN110493630A)
Authority
CN
China
Prior art keywords
gift
special effect
live video
effect
position information
Prior art date
Legal status
Active
Application number
CN201910859930.5A
Other languages
Chinese (zh)
Other versions
CN110493630A (en)
Inventor
杨克敏
陈杰
欧燕雄
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201910859930.5A
Publication of CN110493630A
Priority to PCT/CN2019/125929 (published as WO2021047094A1)
Application granted
Publication of CN110493630B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/2187 — Live feed (under 21/20 Servers specifically adapted for the distribution of content; 21/21 Server components or server architectures; 21/218 Source of audio or video content)
    • H04N 21/254 — Management at additional data server, e.g. shopping server, rights management server (under 21/25 Management operations performed by the server)
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations (under 21/40 Client devices; 21/43 Processing of content or additional data; 21/431 Generation of visual interfaces)
    • H04N 21/4316 — Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting (under 21/47 End-user applications; 21/478 Supplemental services)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The processing method comprises: receiving a giving instruction for a virtual gift sent by a viewer client; obtaining the corresponding target special-effect gift according to the giving instruction and determining a characteristic region of the target special-effect gift; determining composite position information of the target special-effect gift on the live video according to the characteristic region; and sending the live video and the composite position information to the viewer client, so that the viewer client adds the target special-effect gift to the live video according to the composite position information for composition and display. The anchor client encodes and encapsulates the composite position information outside the live video and transmits it independently of the live video; the viewer client decodes the composite position information, which makes secondary editing of the virtual special-effect gift display convenient and helps improve the special-effect display effect.

Description

Processing method and device for special effect of virtual gift and live broadcast system
Technical Field
The embodiment of the application relates to the technical field of live broadcast, in particular to a method and a device for processing a special effect of a virtual gift, a live broadcast system, computer equipment and a storage medium.
Background
With the development of network technology, real-time video communication such as webcast live streaming and video chat rooms has become an increasingly popular form of entertainment. During real-time video communication, presenting virtual gifts and displaying their special effects can increase interactivity among users.
For example, in a live-broadcast scene, the anchor user streams in a live room, and viewer users watch the anchor's live broadcast at the viewer client. To increase the interactivity between the anchor user and viewer users, a viewer user can select a specific target special-effect gift to present to the anchor; the target special-effect gift is added to a specific position of the anchor's picture according to a corresponding entertainment template, and the corresponding special effect is displayed.
In the existing method for displaying a gift special effect, the anchor client composites the gift special effect into a video frame, places the video frame containing the special effect into the video stream, and transmits it to other anchor clients or viewer clients for display. This approach fixes the gift special effect within the video area, which makes subsequent secondary adjustment of the special effect difficult and affects the overall display effect.
Disclosure of Invention
The purpose of the present application is to solve at least one of the above technical defects, in particular the difficulty of secondary adjustment of the virtual gift special effect and the resulting poor display effect.
In a first aspect, an embodiment of the present application provides a method for processing a special effect of a virtual gift, including the following steps:
receiving a presentation instruction of a virtual gift, acquiring a corresponding target special-effect gift according to the presentation instruction and determining a characteristic area of the target special-effect gift;
determining the synthetic position information of the target special effect gift on the live video according to the characteristic area;
and sending the live video and the composite position information to a viewer client, so that the viewer client adds the target special-effect gift to the live video according to the composite position information for composition and display.
In an embodiment, the step of obtaining a corresponding target special effect gift according to the giving instruction and determining a characteristic region of the target special effect gift includes:
acquiring a target special-effect gift identifier;
and searching for a target special-effect gift according to the target special-effect gift identifier, and determining a characteristic region corresponding to the target special-effect gift.
In one embodiment, the step of determining the composite position information of the target special effect gift on the live video according to the feature area comprises:
acquiring a current video frame image of the live video;
extracting person contour key points from the current video frame image;
and determining, according to the person contour key points, the target position of the characteristic region on the current video frame image, so as to composite the target special-effect gift at the target position.
In one embodiment, the step of sending the live video and the composite location information to the viewer client comprises:
and encoding and packaging the live video and the synthesized position information into live video stream data, and forwarding the live video stream data to the audience client through a server.
In one embodiment, the target special-effect gift is a special-effect gift in three-dimensional display form.
In one embodiment, the synthetic position information includes: at least one of face information, body contour information, gesture information, and body skeleton information.
In a second aspect, an embodiment of the present application provides a method for processing a special effect of a virtual gift, including the following steps:
sending a virtual gift giving instruction to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
receiving live video stream data sent by the anchor client; the live video stream data comprises the live video and the composite position information of the target special-effect gift;
forwarding the live video stream data to a viewer client; and the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
In one embodiment, the step of adding the target special effect gift to the live video for composition and presentation according to the composition position information includes:
acquiring a current video frame image of the live video;
adding the target special-effect gift to the current video frame image according to the synthesis position information to obtain a special-effect frame image;
and rendering the special effect frame image to a special effect display area for displaying.
In a third aspect, an embodiment of the present application provides a processing apparatus for a virtual gift special effect, including:
the characteristic acquisition module is used for receiving a presentation instruction of the virtual gift, acquiring a corresponding target special-effect gift according to the presentation instruction and determining a characteristic area of the target special-effect gift;
the information determining module is used for determining the synthetic position information of the target special effect gift on the live video according to the characteristic area;
and the information sending module is used for sending the live video and the synthetic position information to a spectator client, so that the spectator client adds the target special-effect gift to the live video according to the synthetic position information to synthesize and display the target special-effect gift.
In a fourth aspect, an embodiment of the present application provides a device for processing a special effect of a virtual gift, including:
the instruction sending module is used for sending a presentation instruction of the virtual gift to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
the data receiving module is used for receiving live video stream data sent by the anchor client; the live video stream data comprises a live video and the synthetic position information of a target special effect gift;
the data forwarding module is used for forwarding the live video stream data to a spectator client; and the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
In a fifth aspect, an embodiment of the present application provides a live broadcast system, including an anchor client, an audience client, and a server, where the anchor client is communicatively connected with the audience client via the server over a network;
the server is used for receiving a presentation instruction of the virtual gift sent by the audience client side and sending the presentation instruction to the anchor client side;
the anchor client is used for receiving the presentation instruction and acquiring a target special-effect gift identifier; searching for a target special-effect gift according to the target special-effect gift identification, and determining a characteristic area corresponding to the target special-effect gift; determining the synthetic position information of the target special effect gift on the live video according to the characteristic area; encoding the synthesized position information and the live video into live video stream data and sending the live video stream data to a server;
the server is further used for forwarding the live video stream data to the audience client;
the audience client is used for receiving live broadcast video stream data and a target special effect gift, and acquiring the synthetic position information of a live broadcast video and the target special effect gift from the live broadcast video stream data; and adding the target special effect gift to the live video according to the synthesis position information to synthesize and display the live video.
In a sixth aspect, the present application provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method for processing the special effect of the virtual gift according to any one of the above embodiments when executing the program.
In a seventh aspect, embodiments of the present application provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the steps of the method for processing a virtual gift effect as described in any one of the above embodiments.
The processing method and apparatus for a virtual gift special effect, the live broadcast system, the device, and the storage medium provided by the embodiments receive a giving instruction for a virtual gift sent by the viewer client, obtain the corresponding target special-effect gift according to the giving instruction, and determine a characteristic region of the target special-effect gift; determine composite position information of the target special-effect gift on the live video according to the characteristic region; and send the live video and the composite position information to the viewer client, so that the viewer client adds the target special-effect gift to the live video according to the composite position information for composition and display. Because the anchor client encodes and encapsulates the composite position information outside the live video and transmits it independently alongside the live video, the viewer client can decode the composite position information, which makes secondary editing of the virtual special-effect gift display convenient, enables layered special-effect processing and special-effect display across the live video playing area at the viewer client, and improves the special-effect display effect.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a system framework diagram of a method for processing a special effect of a virtual gift according to an embodiment;
fig. 2 is a schematic structural diagram of a live broadcast system provided in an embodiment;
FIG. 3 is a flowchart of a method for processing a virtual gift effect according to one embodiment;
fig. 4 is a flowchart of a composite location information obtaining method according to an embodiment;
FIG. 5 is a flowchart of a method for composing a special effect of a virtual gift according to an embodiment;
FIG. 6 is a diagram of the composite effect of a virtual gift in a live broadcast technique;
FIG. 7 is a diagram of the effects of a virtual gift composition provided by one embodiment;
FIG. 8 is a flowchart of a method for rendering a virtual gift effect according to one embodiment;
FIG. 9 is a diagram of the rendering effect of a virtual gift in a live broadcast technique;
FIG. 10 is another flow chart of a method for processing a virtual gift effect according to one embodiment;
FIG. 11 is a timing diagram of a virtual gift-giving process provided by an embodiment;
FIG. 12 is a block diagram illustrating an exemplary embodiment of a device for processing a special effect of a virtual gift;
fig. 13 is another schematic structural diagram of a device for processing a special effect of a virtual gift according to an embodiment.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be appreciated by those skilled in the art that terms such as "client," "application," and the like are used herein to refer to the same concepts known to those skilled in the art, as computer software organically constructed from a series of computer instructions and associated data resources adapted for electronic operation. Unless otherwise specified, such nomenclature is not itself limited by the programming language class, level, or operating system or platform upon which it depends. Of course, such concepts are not limited to any type of terminal.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In order to better explain the technical solution of the present application, an application environment to which the processing method of the virtual gift special effect of the present solution can be applied is shown below. As shown in fig. 1, fig. 1 is a system framework diagram of a processing method for a virtual gift special effect according to an embodiment, and the system framework may include a server and a client. The live broadcast platform on the server side can comprise a plurality of virtual live broadcast rooms, a server and the like, and each virtual live broadcast room correspondingly plays different live broadcast contents. The client comprises a spectator client and an anchor client, generally speaking, the anchor carries out live broadcast through the anchor client, and spectators select to enter a certain virtual live broadcast room through the spectator client to watch the anchor to carry out live broadcast. The viewer client and the anchor client may enter the live platform through a live Application (APP) installed on the terminal device.
In this embodiment, the terminal device may be a terminal such as a smart phone, a tablet computer, an e-reader, a desktop computer, or a notebook computer, which is not limited to this. The server is a background server for providing background services for the terminal device, and can be implemented by an independent server or a server cluster consisting of a plurality of servers.
The method for processing a virtual gift special effect provided in this embodiment is suitable for presenting a virtual gift and displaying its special effect during a live broadcast. A viewer may present the virtual gift to a target anchor through the viewer client, so that the special effect is displayed at the anchor client where the target anchor is located and at multiple viewer clients; alternatively, an anchor may present the virtual gift to another anchor through the anchor client, so that the special effect is displayed at the anchor clients that give and receive the virtual gift and at multiple viewer clients.
The following describes an exemplary scenario in which the spectator client presents a virtual special-effect gift to the target anchor.
Fig. 2 is a schematic structural diagram of a live broadcasting system provided in an embodiment, and as shown in fig. 2, the live broadcasting system 200 includes: anchor client 210, viewer client 230, and server 220. Anchor client 210 is communicatively coupled to viewer client 230 via server 220 over a network.
In this embodiment, the anchor client may be an anchor client installed on a computer, or may be an anchor client installed on a mobile terminal, such as a mobile phone or a tablet computer; similarly, the viewer client may be a viewer client installed on a computer, or may be a viewer client installed on a mobile terminal, such as a mobile phone or a tablet computer.
The server 220 is configured to receive a gifting instruction of the virtual gift sent by the viewer client 230, and send the gifting instruction to the anchor client 210;
the anchor client 210 is configured to receive the giving instruction and obtain a target special-effect gift identifier; searching for a target special-effect gift according to the target special-effect gift identification, and determining a characteristic area corresponding to the target special-effect gift; determining the synthetic position information of the target special effect gift on the live video according to the characteristic area; encoding the synthesized position information and the live video into live video stream data and sending the live video stream data to a server;
the server 220 is further configured to forward the live video stream data to the viewer client 230;
the audience client 230 is configured to receive live video stream data and a target special-effect gift, and obtain composite position information of a live video and the target special-effect gift from the live video stream data; adding the target special-effect gift to the live video according to the synthesis position information for synthesis to obtain a special-effect frame image; setting a special effect display area on a live broadcast window, and synchronously rendering the special effect frame image in the special effect display area in the process of playing the live broadcast video.
Fig. 3 is a flowchart illustrating a method for processing a virtual gift special effect, which is executed on a client, such as the anchor client. In some embodiments, it may also be performed at the server. The present embodiment takes the anchor client as an example for explanation.
Specifically, as shown in fig. 3, the method for processing the special effect of the virtual gift may include the following steps:
s110, receiving a presentation instruction of the virtual gift, obtaining a corresponding target special-effect gift according to the presentation instruction, and determining a characteristic area of the target special-effect gift.
While watching a live video, a viewer presents a virtual gift to the target anchor by triggering the relevant function key at the viewer client to select the target special-effect gift; a giving instruction for the virtual gift is sent to the server, and the server receives the instruction and forwards it to the anchor client. The giving request of the virtual gift carries information such as the target special-effect gift identifier and the target anchor identifier.
The anchor client receives the giving instruction of the virtual gift sent by the viewer client, parses the information carried by the instruction, and obtains the target special-effect gift identifier and the target anchor identifier. The corresponding target special-effect gift is then found according to the target special-effect gift identifier. Optionally, the target special-effect gift may be a special-effect gift in two-dimensional display form, or in three-dimensional display form, i.e., a three-dimensional special-effect gift. In this embodiment, the target special-effect gift is preferably a three-dimensional special-effect gift, such as an AI (Artificial Intelligence) special-effect gift; the three-dimensional special effect it creates enhances the sense of realism and improves the processing effect of the virtual gift special effect.
In an embodiment, the target special-effect gift is displayed according to person features, and different target special-effect gifts correspond to their own characteristic regions, where the characteristic region is the region of the live video at which the target special-effect gift is placed. For example, the target special-effect gift may be angel wings, a mask, a hat, and the like: the angel wings are placed on the anchor's back, so the characteristic region corresponding to the angel wings is the back; the mask is worn near the anchor's eyes, so the characteristic region corresponding to the mask is the face; the hat is worn on the anchor's head, so the characteristic region corresponding to the hat is the head; and so on.
And S120, determining the synthetic position information of the target special effect gift on the live video according to the characteristic area.
The composite position information may include at least one of: face information, body contour information, gesture information, and body skeleton information. In an embodiment, the composite position information may include the target position on the live video at which the target special-effect gift is composited, obtained by the anchor client recognizing the live video.
When the anchor client receives the giving instruction of the virtual gift sent by the viewer client, it obtains the live video of the live room where the target anchor is located, extracts the current video frame image from the live video, and processes the current video frame image according to the target special-effect gift to extract the information needed for compositing, such as the composite position information of the characteristic region of the target special-effect gift in the current video frame image. According to the composite position information, the target special-effect gift can be composited at the target position of the current video frame image, where the characteristic region of the target special-effect gift corresponds one-to-one with the target position in the current video frame image. In an embodiment, the composite position information may be represented by one or more person contour key points, each of which has a unique coordinate value in the current video frame image; the target position at which the target special-effect gift is added to the current video frame image can be obtained from the coordinate values of these person contour key points.
In order to more clearly explain the present solution, the present embodiment will be exemplarily described with reference to "mask" as the target special effect gift.
In an embodiment, the viewer gives the target special-effect gift "mask" to the target anchor, and the characteristic region corresponding to the "mask" is the face. The live video of the live room where the target anchor is located is obtained, the current video frame image is extracted and recognized to obtain a series of person contour key points, characteristic regions corresponding to certain target contour points, such as target contour points A (50, 55), B (45, 48), and C (45, 60), are determined from the person contour key points, and the region obtained by connecting target contour points A, B, and C according to a specific algorithm is taken as the target position of the target special-effect gift in the current video frame image. Optionally, the composite position information of the "mask" in the current video frame image includes the coordinate values of target contour points A, B, and C.
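As a rough illustration only, the composite position information for the "mask" example above could be represented by a small data structure such as the following Python sketch; the field names and the gift identifier are assumptions for illustration, not part of the patent:

    # Illustrative sketch: composite position information for the "mask" gift,
    # expressed as named target contour points with pixel coordinates in the
    # current video frame image.
    composite_position_info = {
        "gift_id": "0002",          # hypothetical identifier for the "mask" gift
        "feature_region": "face",   # characteristic region the gift attaches to
        "frame_index": 1024,        # which video frame the coordinates refer to
        "target_contour_points": {  # key points A, B, C from the example above
            "A": (50, 55),
            "B": (45, 48),
            "C": (45, 60),
        },
    }

    def target_region(points):
        """Return the axis-aligned bounding box spanned by the target contour points."""
        xs = [x for x, _ in points.values()]
        ys = [y for _, y in points.values()]
        return min(xs), min(ys), max(xs), max(ys)

    print(target_region(composite_position_info["target_contour_points"]))  # (45, 48, 50, 60)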
S130, sending the live video and the synthesis position information to the audience client, so that the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
After obtaining the composite position information, the anchor client encodes the composite position information together with the live video into a data packet and sends it to the viewer client. After receiving the data packet, the viewer client decodes it to extract the live video and the composite position information. In this embodiment, the composite position information includes the coordinate values of the target contour points of the target special-effect gift in the current video frame image.
After obtaining the composite position information, the viewer client converts it according to the size of its own special-effect display area, determines the target position of the target special-effect gift on the current video frame image from the converted information, and adds the target special-effect gift at that position for composition.
For example, the anchor client recognizes that the resolution of the current video frame image is 400 × 300, and the coordinate value of target contour point A in the composite position information is (50, 50); if the resolution at which the viewer client displays the same current video frame image is 800 × 600, the composite position information is converted correspondingly, giving the current target contour point A' the coordinate value (100, 100). The target special-effect gift is then added at the target position determined by the converted information for composition. It should be noted that "the same current video frame image" means the content of the frame is the same; other attributes, such as resolution and image size, may differ.
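A minimal sketch of the resolution conversion described above, assuming simple proportional scaling between the anchor-side frame size and the viewer-side display size; the function and variable names are illustrative:

    # Scale a contour point from the anchor client's frame resolution to the
    # viewer client's display resolution.
    def convert_point(point, src_size, dst_size):
        sx = dst_size[0] / src_size[0]
        sy = dst_size[1] / src_size[1]
        return round(point[0] * sx), round(point[1] * sy)

    # Anchor-side frame is 400x300; the viewer client shows the same frame at 800x600.
    print(convert_point((50, 50), (400, 300), (800, 600)))  # -> (100, 100)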
In this embodiment, a current video frame image of a live video may be extracted, a target special effect gift is added to the current video frame image at a viewer client to synthesize a special effect frame image, and effect display is performed in a preconfigured special effect display area, which may be independent of the video area, so as to improve the special effect display effect.
The method for processing a virtual gift special effect provided by this embodiment is applied to the anchor client: it receives the giving instruction of the virtual gift sent by the viewer client, obtains the corresponding target special-effect gift according to the giving instruction, and determines a characteristic region of the target special-effect gift; determines composite position information of the target special-effect gift on the live video according to the characteristic region; and sends the live video and the composite position information to the viewer client, so that the viewer client adds the target special-effect gift to the live video according to the composite position information for composition and display. In this solution, the composite position information is sent to the viewer client along with the live video: the anchor client encodes and encapsulates the composite position information outside the live video, so that it is transmitted independently alongside the live video. This makes it convenient for the viewer client to perform secondary editing of the virtual special-effect gift display according to the composite position information and its own configuration, which helps realize layered special-effect processing and special-effect display across the live video playing area at the viewer client and improves the display effect.
In order to make the technical solution clearer and easier to understand, specific implementation processes and modes of the steps in the technical solution are described in detail below.
In an embodiment, the step S110 of obtaining the corresponding target special effect gift according to the giving instruction and determining the characteristic region of the target special effect gift may include the following steps:
s1101, obtaining a target special effect gift identification.
In an embodiment, each virtual gift has a unique identifier and its own presentation form. Virtual gifts can, for example, be classified into general virtual gifts and special virtual gifts according to type. If a general virtual gift is presented, the special effect is displayed according to the gift display method in the prior art; if a special virtual gift is presented, the special effect is displayed according to the processing method for a virtual gift special effect provided by this solution.
In an embodiment, a target special effect gift identifier is obtained, and whether the target special effect gift identifier is a special virtual gift is determined according to the target special effect gift identifier, if yes, step S1102 is executed.
And S1102, searching for a target special-effect gift according to the target special-effect gift identifier, and determining a characteristic region corresponding to the target special-effect gift.
In the embodiment, the characteristic regions corresponding to different target special effect gifts are different, that is, the positions of the different target special effect gifts added to the live video are different. For example, "angel wings" are added to the back of the anchor, "hat" is added to the head of the anchor, and so forth.
The server obtains the target special-effect gift identifier and looks up, from a pre-configured database, the target special-effect gift corresponding to the identifier and its characteristic region. Optionally, the characteristic regions may be identified by set characters, for example the letter A for the head, the letter B for the back, and so on.
In an embodiment, the target special effect gift identification, the target special effect gift, and the feature region may establish an association, for example {0001, angel wing, a }, where 0001 denotes the target special effect gift identification, "angel wing" denotes the name of the target special effect gift, and a denotes the feature region of the target special effect gift, such as the back. Further, the association relationship may be stored by way of a data structure.
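As a rough illustration of how such an association might be stored, the sketch below uses a simple lookup table keyed by the gift identifier; the identifiers, names, and region strings are hypothetical, not taken from the patent:

    # Association {gift identifier -> gift name, characteristic region}.
    SPECIAL_EFFECT_GIFTS = {
        "0001": {"name": "angel wing", "feature_region": "back"},
        "0002": {"name": "mask",       "feature_region": "face"},
        "0003": {"name": "hat",        "feature_region": "head"},
    }

    def find_gift(gift_id):
        """Look up the target special-effect gift and its characteristic region."""
        return SPECIAL_EFFECT_GIFTS.get(gift_id)

    print(find_gift("0001"))  # {'name': 'angel wing', 'feature_region': 'back'}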
Fig. 4 is a flowchart of a method for obtaining composite position information according to an embodiment, as shown in fig. 4, in an embodiment, the determining composite position information of the target special effect gift on the live video according to the feature area in step S120 may include the following steps:
and S1201, acquiring a current video frame image of the live video.
The current video frame image may be one frame or multiple frames.
When receiving the giving instruction of the virtual gift sent by the viewer client, the anchor client obtains one or more current video frame images of the live video of the live room where the target anchor is located. When there are multiple current video frame images, they may be consecutive frames or alternate frames of the video.
S1202, extracting the person contour key points from the current video frame image.
In this embodiment, the anchor client preprocesses the current video frame image, for example with image format conversion, filtering and denoising, and binarization, extracts the person contour from the preprocessed image, and obtains person contour key points from the contour through algorithmic operations. Generally, the current video frame image needs to be converted into a bitmap image. A bitmap is composed of pixels, the smallest units of information of the bitmap, stored in an image grid; each pixel has a specific position and color value, and the position of a pixel can be represented by coordinate values (x, y) according to the size of the image.
It should be noted that the extraction method of the person outline key points of the current video frame image may be implemented by using existing tools and algorithms, such as OpenCV, HOG, and OTSU algorithms, and certainly, the person outline key points of the current video frame image may also be extracted by using other methods.
Different sets of person contour key points correspond to different human body information. For example, the face portion of the current video frame image is recognized and its contour key points are extracted; in an embodiment, the face information may include 106 contour key points, each corresponding to a certain part of the face and having a unique coordinate value representing its position in the current video frame image. Similarly, the body contour includes 59 contour key points, each corresponding to the edge contour of a part of the human body, and the human skeleton includes 22 contour key points, each corresponding to a skeleton joint point, with the coordinate value of each key point representing its position in the current video frame image.
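The following is a hedged sketch of one possible contour-key-point extraction pipeline using OpenCV, one of the tools mentioned above (grayscale conversion, denoising, Otsu binarization, contour extraction, polygon approximation). It assumes the OpenCV 4 return signature of findContours and that the largest external contour is the person; it is not the patent's exact algorithm:

    import cv2
    import numpy as np

    def extract_contour_keypoints(frame_bgr, epsilon_ratio=0.01):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)                   # filtering / denoising
        _, binary = cv2.threshold(blurred, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)        # OpenCV 4 signature
        if not contours:
            return []
        person = max(contours, key=cv2.contourArea)                    # assume largest contour is the person
        approx = cv2.approxPolyDP(person,
                                  epsilon_ratio * cv2.arcLength(person, True), True)
        return [tuple(pt[0]) for pt in approx]                         # [(x, y), ...] key points

    keypoints = extract_contour_keypoints(np.zeros((300, 400, 3), dtype=np.uint8))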
S1203, determining a corresponding target position of the characteristic region on the current video frame image according to the person outline key points, and synthesizing the target special effect gift at the target position.
The characteristic region corresponding to the target special-effect gift corresponds to a target position in the current video frame image. For example, the characteristic region of the target special-effect gift "angel wing" is the "back"; the contour key points belonging to the "back" feature are identified from the extracted person contour key points and taken as target contour points, and the target position at which the target special-effect gift is composited on the current video frame image is determined from the coordinate values of those target contour points, where the target position may be the set of coordinate values of the target contour points or the region formed by connecting them.
In an embodiment, sending the live video and the composite position information to the viewer client in step S130 may include the following step:
and encoding and packaging the live video and the synthesized position information into live video stream data, and forwarding the live video stream data to the audience client through a server.
In this embodiment, the anchor client encodes and encapsulates the live video and the composite position information identified according to the target special-effect gift into a data packet, thereby forming live video stream data. The anchor client sends the live video stream data to the server so that the server forwards the live video stream data to the viewer client.
And the audience client decodes the live video stream data to obtain the synthetic position information and processes the target special-effect gift according to the synthetic position information.
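As a hedged sketch of the idea that the composite position information travels outside the video payload itself: a real system might carry it in an SEI-like metadata field or a custom protocol message; here it is simply serialized to JSON next to an opaque encoded-frame payload, and all names are illustrative assumptions:

    import json

    def build_stream_packet(encoded_frame: bytes, composite_position_info: dict) -> dict:
        # Anchor-client side: the encoded frame is left untouched and the
        # position information rides alongside it as separate metadata.
        return {
            "video_payload": encoded_frame,
            "gift_metadata": json.dumps(composite_position_info),
        }

    def parse_stream_packet(packet: dict):
        """Viewer-client side: recover the frame and the composite position information."""
        return packet["video_payload"], json.loads(packet["gift_metadata"])

    packet = build_stream_packet(b"\x00\x01", {"gift_id": "0001", "points": {"A": [50, 55]}})
    frame, info = parse_stream_packet(packet)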
Example one: fig. 5 is a flowchart of a method for compositing a virtual gift special effect according to an embodiment. As shown in fig. 5, the viewer client performs layered composition processing of the target special-effect gift according to the composite position information; the main process may be as follows:
s110a, receiving live video stream data and a target special effect gift, and obtaining composite position information of a live video and the target special effect gift from the live video stream data.
The composite position information may include the target position on the live video at which the target special-effect gift is composited, obtained by the anchor client recognizing the live video.
In an embodiment, a user sends a giving instruction for a virtual gift to the server through the viewer client. The anchor client receives the giving instruction forwarded by the server and obtains the live video and the characteristic region corresponding to the target special-effect gift. Optionally, the characteristic region may be recognized by the anchor client according to the giving instruction, or recognized by the server after receiving the giving instruction and then forwarded to the anchor client. This embodiment takes the case where the anchor client recognizes the characteristic region corresponding to the target special-effect gift according to the giving instruction as an example.
After receiving the live video stream data, the viewer client decodes it to obtain the composite position information and the live video, and obtains the current video frame image from the live video. It should be noted that the current video frame image for which the anchor client identified the composite position information and the current video frame image obtained by the viewer client from the live video are the same frame image; the resolution, size, color, and the like shown at the anchor client and the viewer client may differ.
The target special-effect gift can be a two-dimensional display form special-effect gift, and can also be a three-dimensional display form special-effect gift, namely a three-dimensional special-effect gift. In this embodiment, the target special effect gift is preferably a three-dimensional special effect gift, and a three-dimensional special effect is created by the three-dimensional special effect gift, so that the reality feeling is enhanced, and the display effect of the special effect of the virtual gift is improved.
S120a, the live video is divided into a foreground image layer and a background image layer, and at least one virtual gift special effect layer is generated according to the target special effect gift.
Specifically, a current video frame image is obtained from a live video; dividing a current video frame image into a foreground area and a background area; the image layer of the foreground area is a foreground image layer; and the layer where the background area is located is a background image layer.
In an embodiment, the viewer client obtains a current video frame image from a live video, where the current video frame image may be a frame video frame image or a multi-frame video frame image.
Further, background segmentation is performed on the current video frame image. An existing algorithm can be used to compare each pixel value of the current video frame image and divide it into a foreground region and a background region, for example taking the region corresponding to the set of pixels whose values are greater than a certain threshold as the foreground region and the region corresponding to the set of pixels whose values are less than the threshold as the background region. In an embodiment, the foreground region and the background region are located in different layers: the layer where the foreground region is located is the foreground image layer, and the layer where the background region is located is the background image layer.
In an embodiment, the foreground image layer may include an anchor person region in the live video and the background image layer may include a background region in the live video other than the anchor person region. In addition, in an embodiment, the target special effect gift may be split to generate one or more virtual gift special effect layers corresponding to the target special effect gift, for example, a "mask" gift has only one virtual gift special effect layer, and a "snowflake" gift may include multiple virtual gift special effect layers, such as a first snowflake on the virtual gift special effect layer a, a second snowflake on the virtual gift special effect layer B, a third snowflake and a fourth snowflake on the virtual gift special effect layer C, and so on.
The audience client acquires a foreground image layer and a background image layer of a current video frame image and one or more virtual gift special effect layers corresponding to the target special effect gift. Optionally, it may be processed accordingly and buffered.
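A minimal sketch of the threshold-based foreground/background split described above; a production system would more likely use a portrait-segmentation model, and the threshold value and names here are assumptions:

    import cv2
    import numpy as np

    def split_layers(frame_bgr, threshold=128):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        mask = (gray > threshold).astype(np.uint8)        # 1 = foreground, 0 = background
        foreground = frame_bgr * mask[:, :, None]          # foreground image layer
        background = frame_bgr * (1 - mask)[:, :, None]    # background image layer
        return foreground, background, mask

    fg, bg, mask = split_layers(np.full((300, 400, 3), 200, dtype=np.uint8))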
And S130a, synthesizing and displaying each virtual gift special effect layer, the foreground image layer and the background image layer according to the synthesis position information in sequence.
Illustratively, the composite position information includes person contour key point positions A (50, 50), B (55, 60), and C (70, 100); the layers include foreground image layer a, background image layer b, and virtual gift special-effect layers c, d, and e; the composition order of the layers is b, c, a, d, e, where c corresponds to position A, d corresponds to position B, and e corresponds to position C.
First, background image layer b is placed at the bottom; then virtual gift special-effect layer c and foreground image layer a are composited according to position A; then virtual gift special-effect layer d is composited according to position B; and finally virtual gift special-effect layer e is composited according to position C. After every part of the target special-effect gift has been added at the corresponding target position of the current video frame image, the current video frame image with the composited target special-effect gift is displayed.
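A rough sketch of that composition order, where paste() alpha-blends a small RGBA layer onto the canvas at a given point; the layer contents, sizes, and positions are illustrative only, and the foreground layer is opaque only where the anchor person is so that it can occlude the gift layer beneath it:

    import numpy as np

    def paste(canvas, layer_rgba, position):
        x, y = position
        h, w = layer_rgba.shape[:2]
        alpha = layer_rgba[:, :, 3:4] / 255.0
        region = canvas[y:y + h, x:x + w]
        canvas[y:y + h, x:x + w] = (alpha * layer_rgba[:, :, :3]
                                    + (1 - alpha) * region).astype(np.uint8)
        return canvas

    # b: background frame (bottom); a: foreground layer; c, d, e: gift special-effect layers.
    canvas = np.zeros((300, 400, 3), dtype=np.uint8)                       # layer b
    a_alpha = np.zeros((300, 400), np.uint8)
    a_alpha[100:250, 150:260] = 255                                        # opaque only over the person
    a = np.dstack([np.full((300, 400, 3), 120, np.uint8), a_alpha])
    c = d = e = np.dstack([np.full((20, 20, 3), 250, np.uint8),
                           np.full((20, 20), 255, np.uint8)])
    for layer, pos in [(c, (50, 50)), (a, (0, 0)), (d, (55, 60)), (e, (70, 100))]:
        canvas = paste(canvas, layer, pos)                                  # order: b, c, a, d, e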
Fig. 6 is a diagram of the composite effect of a virtual gift in existing live broadcast technology. As shown in fig. 6, the virtual gift is added directly to the live video, so that, especially when a large special-effect gift is displayed, the virtual gift and the live video overlap, blocking the anchor and affecting viewing. With the technology of the present application, blocking of the anchor can be avoided and a better special-effect display effect can be obtained.
Fig. 7 is an effect diagram of virtual gift composition provided by an embodiment. As shown in fig. 7, according to the anchor's back contour information, the foreground image layer where the anchor person is located is placed over the layer where the "angel wings" are located, blocking the set region of the "angel wings" and achieving the effect of adding the "angel wings" to the anchor's back; according to the anchor's face contour information, the special-effect layer where the "mask" is located is placed over the foreground image layer where the anchor person is located, covering the set region of the anchor's face and achieving the effect of adding the "mask" over the anchor's eyes. In this way, the target special-effect gift can be composited at the target position of the current video frame image of the live video according to person contour features, obtaining a better special-effect display effect.
In the method for compositing a virtual gift special effect provided by this embodiment, live video stream data and a target special-effect gift are received, and the composite position information of the live video and the target special-effect gift is obtained from the live video stream data; the live video is divided into a foreground image layer and a background image layer, and at least one virtual gift special-effect layer is generated according to the target special-effect gift; and each virtual gift special-effect layer, the foreground image layer, and the background image layer are composited and displayed according to the composite position information. In this embodiment, the viewer client composites, in order, at least one virtual gift special-effect layer corresponding to the target special-effect gift with the foreground and background image layers of the current video frame image according to the composite position information, so that the target special-effect gift is composited at the set target position of the current video frame image according to the composite position information obtained from the person contour and the like. This prevents the live broadcast effect from being degraded by the target special-effect gift being displayed directly in the video area and blocking the anchor, and at the same time improves the display effect of the virtual gift special effect.
Example two: fig. 8 is a flowchart of a rendering method for a virtual gift special effect according to an embodiment. As shown in fig. 8, the viewer client renders the target special-effect gift across the live video playing area according to the composite position information; the main process may be as follows:
specifically, as shown in fig. 8, the method for rendering the virtual gift special effect may include the following steps:
s110b, receiving live video stream data and a target special effect gift, and obtaining composite position information of a live video and the target special effect gift from the live video stream data.
The target special effect gift in this embodiment may be a special effect gift in a three-dimensional display form, that is, a three-dimensional special effect gift (an AI (Artificial Intelligence) virtual special effect gift). The three-dimensional special effect gift creates a three-dimensional special effect, enhances the sense of realism, and improves the rendering effect of the virtual gift special effect.
This step is the same as step S110a, and will not be described in detail here.
And S120b, adding the target special effect gift to the live video according to the synthesis position information for synthesis to obtain a special effect frame image.
The viewer client obtains the composite position information, determines the target position of the target special effect gift in the current video frame image of the live video according to the composite position information, adds the target special effect gift at the target position, and synthesizes it with the current video frame image to obtain the special effect frame image. The current video frame image may be a single video frame image or a plurality of video frame images.
In an embodiment, the current video frame image may be divided into a foreground image layer and a background image layer, and the target special effect gift may comprise one or more virtual gift special effect layers. In this case, the target special effect gift is split to generate the one or more virtual gift special effect layers corresponding to it, the target position of each virtual gift special effect layer on the foreground image layer or the background image layer is determined according to the composite position information, and each virtual gift special effect layer, the foreground image layer and the background image layer are synthesized to obtain the special effect frame image.
In one embodiment, each virtual gift special effect layer, the foreground image layer and the background image layer are synthesized and displayed in priority order according to the composite position information.
S130b, setting a special effect display area on the live broadcast window, and synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video.
The live broadcast window is the window corresponding to the live broadcast application in its open state; when maximized, the live broadcast application may occupy the entire screen of the terminal device. In this embodiment, a special effect display area is arranged on the live broadcast window; the special effect display area is arranged above the live video playing area and is larger than the live video playing area, so that the special effect corresponding to the target special effect gift can be rendered at an enlarged scale, improving the special effect display effect. The live video playing area is the area in which the live video is played.
After obtaining the composite position information, the viewer client converts it according to the size of its own special effect display area, determines the target position of the target special effect gift on the current video frame image according to the converted composite position information, and adds the target special effect gift at that target position for synthesis.
For example, the anchor client recognizes that the resolution of the current video frame image is 400 × 300 and the coordinate value of target contour point a in the obtained composite position information is (50, 50); the resolution at which the viewer client displays the same current video frame image is 800 × 600, so the composite position information is converted correspondingly and the coordinate value of the converted target contour point a' is (100, 100). The target special effect gift is then added, for synthesis, at the target position determined by the converted composite position information. It should be noted that "the same current video frame image" means that the content of the video frame image is the same, while other attributes, such as resolution and image size, may differ.
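A minimal sketch of this resolution conversion follows; it assumes the conversion is a simple proportional scaling of coordinates, and the function name is made up for illustration.

```python
def convert_position(point, src_size, dst_size):
    """Scale a contour point from the anchor-side resolution to the viewer-side resolution."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return (point[0] * sx, point[1] * sy)

# Target contour point a was identified at (50, 50) on a 400 x 300 frame;
# the viewer client displays the same frame at 800 x 600.
a_prime = convert_position((50, 50), (400, 300), (800, 600))
print(a_prime)  # (100.0, 100.0), matching the example above
```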
In this embodiment, the special effect display of the special effect frame image and the playing of the live video occupy different threads, so that while one thread plays the live video, the other thread can synchronously render the special effect frame image into the special effect display area, achieving synchronized video playing and special effect display and improving the special effect display effect.
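One way to read this paragraph is as two concurrent workers: one plays the decoded live video, the other renders special effect frames into the special effect display area. The sketch below only illustrates that split with Python threads and queues; the real player and renderer interfaces are not specified in the text.

```python
import threading
import queue

video_frames = queue.Queue()   # frames destined for the live video playing area
effect_frames = queue.Queue()  # special effect frames destined for the effect display area

def play_video():
    while True:
        frame = video_frames.get()
        if frame is None:                    # sentinel: stop the worker
            break
        print("play video frame", frame)     # placeholder for drawing into the playing area

def render_effects():
    while True:
        frame = effect_frames.get()
        if frame is None:
            break
        print("render effect frame", frame)  # placeholder for drawing into the effect area

player = threading.Thread(target=play_video)
renderer = threading.Thread(target=render_effects)
player.start()
renderer.start()

for i in range(3):                           # feed both queues so playback and effects stay in step
    video_frames.put(i)
    effect_frames.put(i)
video_frames.put(None)
effect_frames.put(None)
player.join()
renderer.join()
```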
It should be noted that the region of the special effect layer corresponding to the special effect gift that is occluded by the anchor character is made transparent, so that the special effect display across the live video playing area does not affect normal video playing within the live video playing area.
As shown in fig. 9, fig. 9 is a rendering effect diagram of a virtual gift in an existing live broadcast technology. In that technology, especially when an AR (Augmented Reality) virtual special effect gift is displayed, the AR virtual gift can only be displayed inside the live video playing area, so the display effect is poor. After the technology of the present application is adopted, the virtual gift can be displayed across the live video playing area, and a better special effect display effect is obtained.
Continuing to refer to fig. 7, a special effect display area is set over the live video playing area, and the area of the special effect display area is larger than that of the live video playing area, so that the virtual gift special effect, such as the "angel wings" in the virtual special effect gift shown in fig. 7, can be displayed across the live video playing area, obtaining a better special effect display effect.
In the rendering method for a virtual gift special effect provided by this embodiment, live video stream data and a target special effect gift are received, and the composite position information of the live video and the target special effect gift is obtained from the live video stream data; the composite position information includes the target position at which the target special effect gift is to be synthesized on the live video, obtained by the anchor client through recognition of the live video; the target special effect gift is added to the live video according to the composite position information for synthesis to obtain a special effect frame image; a special effect display area is set on the live broadcast window, and the special effect frame image is synchronously rendered in the special effect display area while the live video is playing, so that the virtual gift special effect is not limited to the live video playing area of the client and can be rendered and displayed across that area.
Fig. 10 is another flowchart of a method for processing a virtual gift special effect according to an embodiment, where the method is applied to a server and can be executed by the server.
Specifically, as shown in fig. 10, the processing method of the virtual gift special effect includes the following steps:
and S410, sending a presentation instruction of the virtual gift to the anchor client.
The anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region.
While watching the live video, a viewer presents a virtual gift to the target anchor by triggering the relevant function key at the viewer client to select a target special effect gift, whereupon the viewer client sends a presentation instruction for the virtual gift to the server; the server receives the presentation instruction and forwards it to the anchor client. The presentation instruction of the virtual gift carries information such as the target special effect gift identifier and the target anchor identifier.
The server forwards the presentation instruction of the virtual gift sent by the viewer client to the anchor client. The anchor client determines the corresponding target special effect gift and its feature region according to the obtained target special effect gift identifier, acquires the current video frame image of the live video, recognizes the current video frame image, obtains the target contour points of the feature region of the target special effect gift on the current video frame image, and thus obtains the composite position information.
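The text does not fix a particular recognition algorithm. As a hedged sketch, the anchor client can be pictured as running whatever face and body recognizers it has on the current frame and keeping only the contour points that the gift's feature region asks for; the helper functions and the returned dummy coordinates below are placeholders, not the actual recognizers.

```python
def detect_face_landmarks(frame):
    # Placeholder for the anchor client's face recognition step.
    return [(120, 80), (160, 80), (140, 110)]             # illustrative landmark coordinates

def segment_person(frame):
    # Placeholder for the background segmentation / body contour step.
    return [(100, 50), (200, 50), (220, 300), (90, 300)]   # illustrative contour polygon

def build_composite_position_info(frame, feature_regions):
    """Collect only the contour points required by the target special effect gift."""
    info = {}
    if "face" in feature_regions:
        info["face"] = detect_face_landmarks(frame)
    if "back" in feature_regions:
        info["body_contour"] = segment_person(frame)
    return info

# e.g. the "angel wings" gift in scene one below needs face and back information
position_info = build_composite_position_info(frame=None, feature_regions={"face", "back"})
```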
And S420, receiving live video stream data sent by the anchor client.
The live video stream data comprises the synthesis position information of a live video and a target special effect gift.
The anchor client encodes the composite position information together with the live video to form live video stream data, so that the composite position information is transmitted along with the live video while remaining independent of it. In an embodiment, the server receives the live video stream data transmitted by the anchor client according to a set communication protocol.
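The text leaves open exactly how the composite position information rides alongside the encoded video. One hedged sketch is to attach it to each frame as a small side-data payload that the viewer client can peel off before decoding the video samples; the packet layout, field names and JSON encoding below are assumptions for illustration only.

```python
import json
import struct

def pack_stream_packet(encoded_frame: bytes, position_info: dict) -> bytes:
    """Bundle one encoded video frame with its composite position info (assumed layout)."""
    meta = json.dumps(position_info).encode("utf-8")
    header = struct.pack(">II", len(encoded_frame), len(meta))  # frame length, metadata length
    return header + encoded_frame + meta

def unpack_stream_packet(packet: bytes):
    """Viewer-side inverse: recover the frame bytes and the composite position info."""
    frame_len, meta_len = struct.unpack(">II", packet[:8])
    frame = packet[8:8 + frame_len]
    meta = json.loads(packet[8 + frame_len:8 + frame_len + meta_len].decode("utf-8"))
    return frame, meta

packet = pack_stream_packet(b"\x00\x01\x02", {"face": [[50, 50]], "back": [[40, 120]]})
frame, info = unpack_stream_packet(packet)
```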
S430, forwarding the live video stream data to a viewer client; and the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
The server receives the live video stream sent by the anchor client and forwards the live video stream to each audience client in the same live broadcast room, so that the audience clients decode the live video stream data to obtain synthetic position information, add the target special-effect gift to a target position on the live video according to the synthetic position information for synthesis, and display the synthesized target special-effect gift.
The method for processing the virtual gift special effect provided by this embodiment is applied to a server. A presentation instruction of a virtual gift is sent to the anchor client; the anchor client obtains the corresponding target special effect gift according to the presentation instruction, determines the feature region of the target special effect gift, and determines the composite position information of the target special effect gift on the live video according to the feature region. Live video stream data sent by the anchor client is received; the live video stream data includes the live video and the composite position information of the target special effect gift. The live video stream data is forwarded to the viewer client, and the viewer client adds the target special effect gift to the live video according to the composite position information for synthesis and display. In this scheme, the server sends the composite position information to the viewer client along with the live video, so that the composite position information is independent of the live video while being transmitted with it. The viewer client can therefore re-edit the display of the virtual special effect gift according to the composite position information and its own display conditions, which facilitates layered special effect processing and special effect display across the live video playing area at the viewer client and improves the special effect display effect.
In order to explain the technical solution of the present application more clearly, the following description will be further made with reference to examples in several scenarios.
Scene one: referring to fig. 11, fig. 11 is a timing diagram of a virtual gift giving process provided by an embodiment. In this example, a viewer presents the three-dimensional special effect gift "angel wings" to the anchor, and the corresponding identifier is ID1648; the main flow may be as follows:
and S11, sending a gift sending request to the server by the audience client.
Spectator user W sends a gift-sending request to the server through the spectator client, where the virtual gift is ID 1648.
And S12, the server performs service processing.
And after receiving the gift sending request, the server performs corresponding service processing (such as fee deduction and the like).
And S13, broadcasting gift delivery information by the server.
The gift-sending information indicating that audience user W has presented gift ID1648 to the anchor is broadcast to all users in the channel, including the anchor client and the audience clients.
And S14, after receiving the gift sending information, the anchor client inquires the virtual gift and identifies the synthetic position information.
After receiving the broadcast of the gift-sending information, the anchor client queries the gift configuration according to the virtual gift ID1648 and learns that the virtual gift is a three-dimensional special effect gift (such as an AI (Artificial Intelligence) gift) and that the composite position information to be recognized includes the face and the back; the anchor client then starts face recognition and background segmentation recognition.
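The gift configuration query in S14 can be pictured as a lookup table keyed by gift ID that tells the anchor client which recognition steps to run; the table contents and field names below are illustrative assumptions based on this scene and scene two.

```python
# Hypothetical gift configuration table keyed by gift ID.
GIFT_CONFIG = {
    1648: {"name": "angel wings", "form": "3d", "recognize": ["face", "background_segmentation"]},
    1649: {"name": "pet bird", "form": "3d", "recognize": ["face", "body_contour"]},
}

def recognizers_for_gift(gift_id):
    """Return the recognition steps the anchor client should run for this gift."""
    config = GIFT_CONFIG.get(gift_id)
    return config["recognize"] if config else []

print(recognizers_for_gift(1648))  # ['face', 'background_segmentation']
```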
And S15, the anchor client packs the composite position information into the live video stream for transmission.
And the anchor client packs the synthetic position information (which can be AI information) obtained by face recognition and background segmentation recognition into a live video stream, and transmits the live video stream to the server along with the live video.
And S16, the server forwards the live video stream.
The server transmits the live video stream containing the synthesized position information to the audience client.
S17, the spectator client acquires the combined position information, and combines and displays the virtual gifts.
The viewer client decodes the live video stream to obtain the composite position information, combines it with the virtual gift, and plays the "angel wings" special effect: the angel wings appear to grow from behind the anchor.
Scene two: in this example, a viewer presents the three-dimensional special effect gift "pet bird" to the anchor, and the corresponding identifier is ID1649; the main flow may be as follows:
and S21, sending a gift sending request to the server by the audience client.
Spectator user Q sends a gift sending request to the server through the spectator client, where the virtual gift is ID 1649.
S22, the server performs service processing.
and after receiving the gift sending request, the server performs corresponding service processing (such as fee deduction and the like).
And S23, broadcasting gift delivery information by the server.
The gift-sending information indicating that audience user Q has presented gift ID1649 to the anchor is broadcast to all users in the channel, including the anchor client and the audience clients.
And S24, after receiving the gift sending information, the anchor client inquires the virtual gift and identifies the synthetic position information.
After receiving the broadcast of the gift-sending information, the anchor client queries the gift configuration according to the virtual gift ID1649 and learns that the virtual gift is a three-dimensional special effect gift (such as an AI (Artificial Intelligence) gift) and that the composite position information to be recognized includes the face and the human body contour; the anchor client then starts face recognition and human body contour recognition.
And S25, the anchor client packs the composite position information into the live video stream for transmission.
And the anchor client packs the synthetic position information (which can be AI information) obtained by face recognition and human body contour recognition into a live video stream, and transmits the live video stream to the server along with the live video.
And S26, the server forwards the live video stream.
The server transmits the live video stream containing the synthesized position information to the audience client.
S27, the spectator client acquires the combined position information, and combines and displays the virtual gifts.
The viewer client decodes the live video stream to obtain the composite position information, combines it with the virtual gift, and plays the "pet bird" special effect: the bird flies in from outside the video area and lands on the anchor's shoulder.
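The flight of the bird from outside the video area onto the anchor's shoulder is possible precisely because the special effect display area is larger than the live video playing area. A minimal sketch of such a cross-area motion path follows; the start point, the shoulder coordinate, and the linear interpolation are assumptions made purely for illustration.

```python
def bird_path(start, target, steps):
    """Linearly interpolate the bird's position from outside the video area to the target point."""
    sx, sy = start
    tx, ty = target
    return [(sx + (tx - sx) * i / steps, sy + (ty - sy) * i / steps) for i in range(steps + 1)]

# Start in the special effect display area but outside an assumed 400 x 300 video playing area,
# and land on an assumed shoulder point inside it.
path = bird_path(start=(-60, 40), target=(180, 120), steps=10)
```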
The above examples are merely used to assist in explaining the present application, and the illustrated contents and specific flows related thereto do not limit the usage scenarios of the present application.
The following describes in detail a related embodiment of the virtual gift effect processing apparatus.
Fig. 12 is a schematic structural diagram of a processing apparatus for a virtual gift special effect according to an embodiment, and as shown in fig. 12, the processing apparatus 100 for a virtual gift special effect may include: a feature acquisition module 110, an information determination module 120, and an information transmission module 130.
The characteristic obtaining module 110 is configured to receive a presenting instruction of a virtual gift, obtain a corresponding target special-effect gift according to the presenting instruction, and determine a characteristic region of the target special-effect gift;
an information determining module 120, configured to determine, according to the feature area, synthetic position information of the target special-effect gift on the live video;
an information sending module 130, configured to send the live video and the composite position information to a viewer client, so that the viewer client adds the target special-effect gift to the live video according to the composite position information to perform composite and display.
The processing apparatus for a virtual gift special effect provided in this embodiment is applied to an anchor client. The feature acquisition module 110 receives the presentation instruction of the virtual gift sent by the viewer client, obtains the corresponding target special effect gift according to the presentation instruction, and determines the feature region of the target special effect gift; the information determining module 120 determines the composite position information of the target special effect gift on the live video according to the feature region; and the information sending module 130 sends the live video and the composite position information to the viewer client, so that the viewer client adds the target special effect gift to the live video for synthesis and display according to the composite position information. In this scheme, the composite position information is sent to the viewer client along with the live video: the anchor client encodes and encapsulates the composite position information outside the live video, so that the composite position information is independent of the live video while being transmitted with it. This makes it convenient for the viewer client to re-edit the display of the virtual special effect gift according to the composite position information and its own display conditions, which facilitates layered special effect processing and special effect display across the live video playing area at the viewer client and improves the special effect display effect.
In one embodiment, the feature acquisition module 110 includes: an identification obtaining unit and a characteristic determining unit;
the system comprises an identification acquisition unit, a display unit and a display unit, wherein the identification acquisition unit is used for acquiring a target special effect gift identification; and the characteristic determining unit is used for searching and obtaining the target special-effect gift according to the target special-effect gift identification and determining a characteristic area corresponding to the target special-effect gift.
In one embodiment, the information determination module 120 includes: the system comprises a frame image acquisition unit, a key point extraction unit and a target position determination unit;
the frame image acquisition unit is used for acquiring a current video frame image of the live video; the key point extraction unit is used for extracting figure outline key points in the current video frame image; and the target position determining unit is used for determining a target position corresponding to the characteristic region on the current video frame image according to the figure outline key point so as to synthesize the target special effect gift at the target position.
In an embodiment, the information sending module 130 is configured to encode and encapsulate the live video and the composite position information into live video stream data, and forward the live video stream data to the viewer client through a server.
In one embodiment, the target effect gift is an effect gift in the form of a three-dimensional display.
In one embodiment, the synthetic position information includes: at least one of face information, body contour information, gesture information, and body skeleton information.
Fig. 13 is another schematic structural diagram of a processing apparatus for a virtual gift special effect according to an embodiment. The processing apparatus for a virtual gift special effect in this embodiment is applied to a server. As shown in fig. 13, the processing apparatus 400 for a virtual gift special effect includes: an instruction sending module 410, a data receiving module 420, and a data forwarding module 430.
The instruction sending module 410 is configured to send a gifting instruction of the virtual gift to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
a data receiving module 420, configured to receive live video stream data sent by a anchor client; the live video stream data comprises a live video and the synthetic position information of a target special effect gift;
a data forwarding module 430, configured to forward the live video stream data to a viewer client; and the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
The processing device for the special effect of the virtual gift can be used for executing the processing method for the special effect of the virtual gift provided by any embodiment, and has corresponding functions and beneficial effects.
The embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the program, the method for processing the special effect of the virtual gift as in any of the above embodiments is implemented.
Optionally, the computer device may be a mobile terminal, a tablet computer, a server, or the like. When the computer device provided by the above embodiment executes the processing method of the virtual gift special effect provided by any of the above embodiments, the computer device has corresponding functions and beneficial effects.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for processing a virtual gift special effect, including:
receiving a presentation instruction of a virtual gift, acquiring a corresponding target special-effect gift according to the presentation instruction and determining a characteristic area of the target special-effect gift;
determining the synthetic position information of the target special effect gift on the live video according to the characteristic area;
and sending the live video and the synthesized position information to a spectator client, so that the spectator client adds the target special-effect gift to the live video according to the synthesized position information to synthesize and display the live video.
Alternatively, the computer executable instructions, when executed by a computer processor, are for performing a method of processing a virtual gift special effect, comprising:
sending a virtual gift giving instruction to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
receiving live video streaming data sent by a main broadcast client; the live video stream data comprises a live video and the synthetic position information of a target special effect gift;
forwarding the live video stream data to a viewer client; and the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift.
Of course, the storage medium provided in the embodiments of the present application includes computer-executable instructions, and the computer-executable instructions are not limited to the operations of the processing method for virtual gift special effects described above, and may also perform related operations in the processing method for virtual gift special effects provided in any embodiment of the present application, and have corresponding functions and advantages.
From the above description of the embodiments, it is obvious for those skilled in the art that the present application can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute the method for processing the special effects of the virtual gifts according to any embodiment of the present application.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present application. It should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (13)

1. A method for processing a special effect of a virtual gift is characterized by comprising the following steps:
receiving a presentation instruction of a virtual gift, acquiring a corresponding target special-effect gift according to the presentation instruction and determining a characteristic area of the target special-effect gift;
determining the synthetic position information of the target special effect gift on the live video according to the characteristic area;
sending the live video and the synthesized position information to a spectator client, so that the spectator client adds the target special-effect gift to the live video for synthesis according to the synthesized position information and displays the target special-effect gift in a preset special-effect display area; the method comprises the following steps: encoding and packaging the live video and the synthetic position information into live video stream data, forwarding the live video stream data to the audience client through a server, dividing the live video into a foreground image layer and a background image layer by the audience client, and generating at least one virtual gift special effect layer according to the target special effect gift; synthesizing each virtual gift special effect layer with the foreground image layer and the background image layer according to the synthesis position information in sequence to obtain a special effect frame image; synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video through the playing area of the live video;
the special effect display area is independent of the live video playing area, and the area of the special effect display area is larger than that of the live video playing area.
2. The method for processing the virtual gift special effect of claim 1, wherein the step of obtaining a corresponding target special effect gift according to the give instruction and determining a characteristic region of the target special effect gift comprises:
acquiring a target special-effect gift identifier;
and searching for a target special-effect gift according to the target special-effect gift identifier, and determining a characteristic region corresponding to the target special-effect gift.
3. The method of processing a virtual gift special effect of claim 1, wherein the step of determining composite position information of the target special effect gift on the live video according to the feature area comprises:
acquiring a current video frame image of the live video;
extracting figure outline key points in the current video frame image;
and determining a corresponding target position of the characteristic region on the current video frame image according to the figure outline key points so as to synthesize the target special effect gift at the target position.
4. The method of claim 1, wherein the step of sending the live video and the composite location information to the viewer client comprises:
and encoding and packaging the live video and the synthesized position information into live video stream data, and forwarding the live video stream data to the audience client through a server.
5. The method of processing a virtual gift effect of any of claims 1 to 4, wherein the target effect gift is an effect gift in a three-dimensional display form.
6. The method of processing a virtual gift special effect of any one of claims 1 to 4, wherein the synthetic position information includes: at least one of face information, body contour information, gesture information, and body skeleton information.
7. A method for processing a special effect of a virtual gift is characterized by comprising the following steps:
sending a virtual gift giving instruction to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
receiving live video streaming data sent by a main broadcast client; the live video stream data is obtained by encoding and packaging the synthetic position information of a live video and a target special-effect gift;
forwarding the live video stream data to a viewer client; the audience client adds the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift in a preset special effect display area; the method comprises the following steps: dividing the live video into a foreground image layer and a background image layer, and generating at least one virtual gift special effect layer according to the target special effect gift; synthesizing each virtual gift special effect layer with the foreground image layer and the background image layer according to the synthesis position information in sequence to obtain a special effect frame image; synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video through the playing area of the live video;
the special effect display area is independent of the live video playing area and is arranged above the live video playing area, and the area of the special effect display area is larger than that of the live video playing area.
8. The method for processing the virtual gift special effect of claim 7, wherein the step of adding the target special effect gift to the live video for composition and presentation according to the composition position information comprises:
acquiring a current video frame image of the live video;
adding the target special-effect gift to the current video frame image according to the synthesis position information to obtain a special-effect frame image;
and rendering the special effect frame image to a special effect display area for displaying.
9. A device for processing a special effect of a virtual gift, comprising:
the characteristic acquisition module is used for receiving a presentation instruction of the virtual gift, acquiring a corresponding target special-effect gift according to the presentation instruction and determining a characteristic area of the target special-effect gift;
the information determining module is used for determining the synthetic position information of the target special effect gift on the live video according to the characteristic area;
the information sending module is used for sending the live video and the synthesized position information to a spectator client, so that the spectator client adds the target special-effect gift to the live video according to the synthesized position information for synthesis and displays the target special-effect gift in a preset special-effect display area;
the information sending module is specifically configured to encode and encapsulate the live video and the synthesized position information into live video stream data, forward the live video stream data to the audience client through a server, divide the live video into a foreground image layer and a background image layer by the audience client, and generate at least one virtual gift special effect layer according to the target special effect gift; synthesizing each virtual gift special effect layer with the foreground image layer and the background image layer according to the synthesis position information in sequence to obtain a special effect frame image; synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video through the playing area of the live video; the special effect display area is independent of the live video playing area and is arranged above the live video playing area, and the area of the special effect display area is larger than that of the live video playing area.
10. A device for processing a special effect of a virtual gift, comprising:
the instruction sending module is used for sending a presentation instruction of the virtual gift to the anchor client; the anchor client acquires a corresponding target special-effect gift according to the presentation instruction, determines a characteristic region of the target special-effect gift, and determines the synthetic position information of the target special-effect gift on the live video according to the characteristic region;
the data receiving module is used for receiving live video stream data sent by the anchor client; the live video stream data is obtained by encoding and packaging the synthetic position information of a live video and a target special-effect gift;
the data forwarding module is used for forwarding the live video stream data to a spectator client; the audience client adds the target special effect gift to the live video according to the synthesis position information for synthesis, and displays the target special effect gift in a preset special effect display area; the method comprises the following steps: the audience client divides the live video into a foreground image layer and a background image layer, and generates at least one virtual gift special effect layer according to the target special effect gift; synthesizing each virtual gift special effect layer with the foreground image layer and the background image layer according to the synthesis position information in sequence to obtain a special effect frame image; synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video through the playing area of the live video; the special effect display area is independent of the live video playing area and is arranged above the live video playing area, and the area of the special effect display area is larger than that of the live video playing area.
11. A live broadcast system is characterized by comprising a main broadcast client, an audience client and a server, wherein the main broadcast client is in communication connection with the audience client through the server through a network;
the server is used for receiving a presentation instruction of the virtual gift sent by the audience client side and sending the presentation instruction to the anchor client side;
the anchor client is used for receiving the presentation instruction and acquiring a target special-effect gift identifier; searching for a target special-effect gift according to the target special-effect gift identification, and determining a characteristic area corresponding to the target special-effect gift; determining the synthetic position information of the target special effect gift on the live video according to the characteristic area; encoding the synthesized position information and the live video into live video stream data and sending the live video stream data to a server;
the server is further used for forwarding the live video stream data to the audience client;
the audience client is used for receiving live broadcast video stream data and a target special effect gift, and acquiring the synthetic position information of a live broadcast video and the target special effect gift from the live broadcast video stream data; adding the target special effect gift to the live video according to the synthesis position information to synthesize and display the target special effect gift in a preset special effect display area;
the audience client is specifically used for dividing the live video into a foreground image layer and a background image layer and generating at least one virtual gift special effect layer according to the target special effect gift; synthesizing each virtual gift special effect layer with the foreground image layer and the background image layer according to the synthesis position information in sequence to obtain a special effect frame image; synchronously rendering the special effect frame image in the special effect display area in the process of playing the live video through the playing area of the live video; the special effect display area is independent of the live video playing area and is arranged above the live video playing area, and the area of the special effect display area is larger than that of the live video playing area.
12. Computer device for live broadcasting, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, carries out the steps of the method for processing a virtual gift special effect of any of claims 1-8.
13. A storage medium containing computer-executable instructions for performing the steps of the method of processing a virtual gift special effect recited in any one of claims 1-8 when executed by a computer processor.
CN201910859930.5A 2019-09-11 2019-09-11 Processing method and device for special effect of virtual gift and live broadcast system Active CN110493630B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910859930.5A CN110493630B (en) 2019-09-11 2019-09-11 Processing method and device for special effect of virtual gift and live broadcast system
PCT/CN2019/125929 WO2021047094A1 (en) 2019-09-11 2019-12-17 Virtual gift special effect processing method and apparatus, and live streaming system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910859930.5A CN110493630B (en) 2019-09-11 2019-09-11 Processing method and device for special effect of virtual gift and live broadcast system

Publications (2)

Publication Number Publication Date
CN110493630A CN110493630A (en) 2019-11-22
CN110493630B true CN110493630B (en) 2020-12-01

Family

ID=68557666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910859930.5A Active CN110493630B (en) 2019-09-11 2019-09-11 Processing method and device for special effect of virtual gift and live broadcast system

Country Status (2)

Country Link
CN (1) CN110493630B (en)
WO (1) WO2021047094A1 (en)

Also Published As

Publication number Publication date
WO2021047094A1 (en) 2021-03-18
CN110493630A (en) 2019-11-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210115

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 29 floor, block B-1, Wanda Plaza, Huambo business district, Panyu District, Guangzhou, Guangdong.

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.
