CN112383793B - Picture synthesis method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112383793B
Authority
CN
China
Prior art keywords
layer
picture
content
client
merging
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011264683.3A
Other languages
Chinese (zh)
Other versions
CN112383793A (en)
Inventor
刘文辉
易页
揭艳霞
万松
林勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN202011264683.3A
Publication of CN112383793A
Application granted
Publication of CN112383793B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • H04N21/4586Content update operation triggered locally, e.g. by comparing the version of software modules in a DVB carousel to the version stored locally
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8352Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the invention disclose a picture synthesis method and device, electronic equipment, and a storage medium. The method comprises the following steps: receiving layer related information sent by each client, wherein the layer related information comprises a live broadcast room ID, a layer ID, layer content, a frame sequence number SN and a video stream ID; merging the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content; and obtaining a target video frame from the source video file according to the video stream ID and the frame sequence number SN, and combining the layer merged content with the target video frame to obtain a synthesized picture. With the embodiments of the invention, the clients can achieve multi-person interaction on a synchronized live picture, which not only enriches live interaction modes but also improves the user experience of interacting with and watching the live broadcast.

Description

Picture synthesis method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method and apparatus for synthesizing a picture, an electronic device, and a storage medium.
Background
In the field of live video, interaction is mainly limited to methods that are independent of the live video stream, such as chat-room text interaction and gift interaction. An interactive object (a text message or gift notification) is sent from one client to the server, forwarded by an independent service to the other clients in the live room, and displayed by those clients on receipt. This service does not touch the live video stream itself. Because users interact only through text, gift messages and the like, the interaction modes are not rich enough and lack freedom, and no matter how users interact, the live video picture is unaffected. Meanwhile, because the interaction is carried by an independent service running in parallel with the live video stream, the interaction and the picture easily fall out of sync. For example, in a live broadcast of a football match with strong real-time requirements, several viewers who want to discuss tactical routes cannot interact intuitively in the live room by drawing on a tactics board. If they discuss via text interaction instead, then because some viewers' live streams lag, a viewer with a faster stream sees the goal together with the text interaction, while a viewer with a lagging stream first sees text saying there was a goal and only sees the goal in the picture some time later. This harms the user's viewing and interaction experience.
Disclosure of Invention
Based on the above problems in the prior art, embodiments of the present invention provide a picture synthesis method, a picture synthesis device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present invention provides a method for synthesizing a picture, including:
receiving layer related information sent by each client, wherein the layer related information comprises a live broadcast room ID, a layer ID, layer content, a frame sequence number SN and a video stream ID;
merging the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content;
and acquiring a target video frame from a source video file according to the video stream ID and the frame sequence number SN, and combining the layer merged content with the target video frame to obtain a synthesized picture.
Further, the merging the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content includes:
judging whether the layer ID exists in the live broadcast room;
if the layer ID exists, updating the stored layer content;
and superposing and merging the stored layer contents to obtain the layer merged content.
Further, before judging whether the layer ID exists in the live broadcast room, the method further includes: judging whether a live broadcast room corresponding to the live broadcast room ID exists; and if it does not exist, creating the live broadcast room first.
Further, the superposing and merging the stored layer contents to obtain the layer merged content includes:
acquiring user information corresponding to each client, wherein the user information comprises a user type, a user level and a current playing time;
sorting the layer contents from the clients according to the user type, the user level and the current playing time to obtain a sorted order of the layer contents;
and superposing and merging the stored layer contents according to the sorted order to obtain the layer merged content.
Further, the acquiring a target video frame from the source video file according to the video stream ID and the frame sequence number SN, and combining the layer merged content with the target video frame to obtain a synthesized picture, includes:
acquiring the source video file according to the video stream ID;
extracting the video frame corresponding to the frame sequence number SN from the source video file as the target video frame;
and superposing and merging the layer merged content with the target video frame to obtain the synthesized picture, and pushing the synthesized picture to a pre-created stream address so that each client synchronously acquires the synthesized picture from the stream address.
Further, before pushing the synthesized picture to the pre-created stream address, the method further comprises:
creating the stream address for the live broadcast room according to the live broadcast room ID, and sending each client a notification to synchronously acquire the synthesized picture from the stream address after a preset time.
Further, the layer content comprises the graffiti content of each client in its corresponding drawing picture, wherein each client obtains its graffiti content by performing graffiti on a drawing picture created to cover the live picture area.
In a second aspect, an embodiment of the present invention further provides a device for synthesizing a picture, including:
a receiving module, configured to receive layer related information sent by each client, wherein the layer related information comprises a live broadcast room ID, a layer ID, layer content, a frame sequence number SN and a video stream ID;
a layer merging module, configured to merge the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content;
and a picture synthesis module, configured to acquire a target video frame from a source video file according to the video stream ID and the frame sequence number SN, and combine the layer merged content with the target video frame to obtain a synthesized picture.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for synthesizing a picture according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of compositing pictures according to the first aspect.
According to the above technical scheme, with the picture synthesis method and device, electronic equipment and storage medium provided by the embodiments of the invention, when a user watches a live broadcast in a live broadcast room, the user can graffiti freely on the live picture to produce layer content; the server merges the layer contents of all clients and delivers them to all clients together with the live picture. Each client can thus achieve multi-person interaction on a synchronized live picture, which not only enriches live interaction modes but also improves the user experience of interacting with and watching the live broadcast.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other drawings can be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for synthesizing a frame according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a storage structure of layer contents of a picture synthesizing method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a device for synthesizing pictures according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention further with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The following describes a picture synthesizing method, apparatus, electronic device, and storage medium according to an embodiment of the present invention with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for synthesizing a picture according to an embodiment of the present invention. As shown in fig. 1, the method for synthesizing a picture provided by the embodiment of the invention specifically includes the following steps:
s101: and receiving layer related information sent by each client, wherein the layer related information comprises a live broadcasting room ID, a layer ID, layer content, a frame sequence number SN and a video stream ID.
In this example, each client refers to the client of each of the multiple users participating in the live broadcast in the same live broadcast room. A client may be an application (APP), such as live-streaming software, installed on a smartphone, tablet computer, or similar device.
As a specific example, when multiple people connect to a live broadcast room through their respective clients and watch the live broadcast simultaneously, each client's picture includes the live picture and a real-time multi-person drawing picture superimposed on the live picture. For example:
the user clicks a drawing button in the client; the client then creates a drawing layer (i.e., the drawing picture) with a transparent background covering the live picture area, and periodically uploads the information in the drawing layer (i.e., the layer related information) to the server.
In this example, the user may graffiti freely on the drawing layer; that is, the layer content comprises the graffiti content of each client in its corresponding drawing picture, and each client obtains its graffiti content by performing graffiti on a drawing picture created to cover the live picture area.
The client periodically (e.g., every 40 milliseconds, i.e., 25 frames per second) uploads layer-related information to the server.
The uploaded layer related information includes, but is not limited to: the layer ID (e.g. composed of the live broadcast room ID and the user ID), the compressed layer content, the current frame sequence number (i.e. the number of frames of the video stream already played), the video stream ID, the live broadcast room ID, the user token, etc. Compressing the layer content before transmission improves transmission efficiency.
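The uploaded payload described above can be sketched as a simple data structure. This is illustrative only: the field names, the `":"` separator in the layer ID, and the helper `make_layer_id` are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LayerInfo:
    """One periodic upload from a client (illustrative field names)."""
    room_id: str      # live broadcast room ID
    layer_id: str     # e.g. composed of room ID and user ID
    content: bytes    # compressed drawing-layer content
    frame_sn: int     # number of frames the video stream has played
    stream_id: str    # video stream ID
    user_token: str   # authentication token

def make_layer_id(room_id: str, user_id: str) -> str:
    # The patent suggests the layer ID is composed of the live room ID
    # and the user ID; the ":" separator here is a hypothetical choice.
    return f"{room_id}:{user_id}"
```

A client would build one `LayerInfo` per upload tick (e.g. every 40 ms) and send its serialized form to the server.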
S102: and merging the layer contents from each client according to the live broadcasting room ID and the layer ID to obtain the layer merged contents.
In a specific example, this step specifically includes: judging whether the layer ID exists in the live broadcast room; if the layer ID exists, judging whether the received layer content is already stored; if it is not stored, updating the stored layer content; and superposing and merging the stored layer contents to obtain the layer merged content.
In this example, when it is judged that the received layer content is not yet stored, the content is new layer content; therefore the stored layer content is updated, i.e., the new content is stored, so that the stored layer contents can later be read out and superposed and merged to obtain the layer merged content.
Further, before judging whether the layer ID exists in the live broadcast room, the method further includes: judging whether a live broadcast room corresponding to the live broadcast room ID exists; and if it does not exist, creating the live broadcast room first.
As a specific example, after receiving the layer related information sent by a client, the server first extracts the live broadcast room ID, the layer ID and the layer content; it then judges whether the live broadcast room ID exists in memory, i.e., whether a live broadcast room corresponding to the live broadcast room ID exists, and if not, creates the live broadcast room. It next judges whether the layer ID exists in the live broadcast room: if so, it takes the layer content corresponding to the layer ID out of memory and compares it with the layer content sent by the client, and if they differ it updates the stored layer content and continues to the next step; if the layer ID does not exist, it stores the layer in the live broadcast room's memory and continues to the next step. Finally, it takes out all canvas contents in the live broadcast room, superposes and merges the layers, and stores the merged content (the layer merged content) back into the live broadcast room.
This step mainly superposes and mixes the canvas layers submitted by multiple people into one mixed canvas layer. The algorithm is triggered to update the mixed layer content each time a user submits canvas layer content. The storage structure is shown in FIG. 2: for live broadcast room 1, the layer content sent by each client is called canvas layer 1 content, canvas layer 2 content, and so on, and the layer merged content is called the mixed canvas content. That is, the data in the server's memory is grouped by live broadcast room ID, and all canvas layer information and mixed layer information related to a live broadcast room is stored under that room.
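The storage-and-update flow above can be sketched as follows. This is a minimal sketch under stated assumptions: layer content is modeled as a dict mapping pixel coordinates to values (the real content is a compressed bitmap), and `mix` overlays layers in submission order, leaving the rank-based ordering described later out of scope.

```python
from collections import OrderedDict

# In-memory storage grouped by live broadcast room ID, as in FIG. 2:
# each room holds its canvas layers plus the mixed result.
rooms = {}

def mix(layer_contents):
    """Naive overlay: each layer maps (x, y) -> pixel value; later layers
    overwrite earlier ones where they overlap."""
    mixed = {}
    for content in layer_contents:
        mixed.update(content)
    return mixed

def store_layer(room_id, layer_id, content):
    """Store/update one client's canvas layer and refresh the mixed content."""
    # Create the live room on first use (the "create the live room" step).
    room = rooms.setdefault(room_id, {"layers": OrderedDict(), "mixed": {}})
    layers = room["layers"]
    # Re-mix only if the submitted content actually changed.
    if layers.get(layer_id) != content:
        layers[layer_id] = content
        room["mixed"] = mix(layers.values())
    return room["mixed"]
```

Resubmitting identical content leaves the stored mix untouched, matching the "compare, and only update if different" behavior described in the text.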
S103: and obtaining a target video frame from the source video file according to the video stream ID and the frame sequence number SN, and combining the layer combination content with the target video frame to obtain a combined picture.
Specifically, this step includes: acquiring the source video file according to the video stream ID; extracting the video frame corresponding to the frame sequence number SN from the source video file as the target video frame; and superposing and merging the layer merged content with the target video frame to obtain the synthesized picture, and pushing the synthesized picture to a pre-created stream address so that each client synchronously acquires the synthesized picture from the stream address.
Before pushing the synthesized picture to the pre-created stream address, the method further includes: creating the stream address for the live broadcast room according to the live broadcast room ID, and sending each client a notification to synchronously acquire the synthesized picture from the stream address after a preset time.
As a specific example, this step is implemented as follows:
1. Acquire the frame sequence number SN, the live broadcast room ID and the video stream ID sent by the client.
2. Create a new stream address for the live broadcast room, and notify the clients to pull the stream from this address 40 milliseconds later. The 40 milliseconds is merely exemplary and can be configured as needed.
3. Extract the frame with sequence number SN from the source video file.
4. Acquire the mixed canvas content corresponding to the live broadcast room from memory.
5. Superpose and merge the canvas content with the frame content, encode and compress the mixed frame, and push it to the new stream address.
6. Increment the frame sequence number: SN = SN + 1.
7. Execute steps 3-6 in a loop until the last frame of the live broadcast has been processed.
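The loop over steps 3-6 can be sketched as below. The callables `get_mixed` and `push`, and the list-like `source`, are hypothetical stand-ins for the source video file, the per-room mixed-canvas lookup, and the stream push; frame encoding and compression are omitted.

```python
def composite_loop(room_id, stream_id, start_sn, source, get_mixed, push):
    """Sketch of steps 3-6: extract frame SN, overlay the mixed canvas,
    push the result, then increment SN until the last frame is processed."""
    sn = start_sn
    while sn < len(source):           # step 7: loop until the last frame
        frame = source[sn]            # step 3: frame with sequence number SN
        canvas = get_mixed(room_id)   # step 4: mixed canvas from memory
        mixed_frame = (frame, canvas) # step 5: overlay (encoding omitted)
        push(mixed_frame)             #         push to the new stream address
        sn += 1                       # step 6: SN = SN + 1
    return sn
```

Because every frame pushed to the new stream address already carries the merged canvas, all clients pulling that address see the same interaction state at the same frame.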
In this way, each client plays synchronously, and each client's interaction in the live broadcast room is displayed synchronously.
According to the picture synthesis method provided by the embodiments of the invention, when a user watches a live broadcast in a live broadcast room, the user can graffiti freely on the live picture to produce layer content; the server merges the layer contents of all clients and delivers them to each client together with the live picture. Each client can thus achieve multi-person interaction on a synchronized live picture, which not only enriches live interaction modes but also improves the user experience of interacting with and watching the live broadcast.
In one embodiment of the invention, the superposing and merging the stored layer contents to obtain the layer merged content includes: acquiring the user information corresponding to each client, the user information including a user type, a user level and a current playing time; sorting the layer contents from the clients according to the user type, the user level and the current playing time to obtain a sorted order of the layer contents; and superposing and merging the stored layer contents according to the sorted order to obtain the layer merged content. The superposing and merging according to the sorted order can be realized in the following two ways:
acquiring the overlapping content between a higher-ranked layer content and a lower-ranked layer content, and covering the overlapping part of the lower-ranked layer content with that of the higher-ranked layer content; or,
treating the layer areas other than the graffiti as transparent, and superposing the lower-ranked layers first and then the higher-ranked layers. Because the layer content outside the graffiti is transparent, where a higher-ranked layer content and a lower-ranked layer content overlap, only the higher-ranked layer content is displayed in the overlapping part. For example: in the same area, the graffiti of the higher-ranked layer is "加油" ("go!") and that of the lower-ranked layer is "6666"; after superposition, the area displays only "加油".
That is, the drawing layers have a hierarchical order. The higher a layer ranks, the higher it sits when the drawing layers are superposed. For example, with three drawing layers ranked 1, 2 and 3: layer 3 is at the bottom, layer 2 is superposed on 3, and layer 1 is superposed on 2. The non-transparent portion of an upper layer masks the layers below it.
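The transparent-overlay approach can be sketched as below. As an assumption for illustration, a layer is modeled as a dict of `(x, y) -> content`, with absent keys standing for transparent pixels; real layers would be RGBA bitmaps.

```python
def overlay(layers_in_rank_order):
    """Overlay drawing layers so that the rank-1 layer ends up on top.

    Each layer maps (x, y) -> graffiti content; missing keys are
    transparent. Lower-ranked layers are drawn first, then progressively
    higher ranks, so a higher rank's non-transparent pixels mask the
    layers below it.
    """
    mixed = {}
    for layer in reversed(layers_in_rank_order):
        mixed.update(layer)
    return mixed
```

This matches the "加油"/"6666" example: where both layers draw on the same pixel, only the higher-ranked layer's graffiti survives, while non-overlapping pixels from every layer remain visible.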
The order of the drawing layers is determined by their associated user data and jointly controlled by three parameters: whether the user is the live room owner, the user level, and the time the user last entered the live room.
The live room owner is defined as the user who opened the live room, i.e., the room's owner.
User level is defined as the user's level in the product, determined by the user's activity in the product. The level rises cumulatively with experience points, which are earned through active behavior. Active behavior includes, but is not limited to, watching live broadcasts, opening live broadcasts, chat text interaction, and the like.
The three parameters act as follows: if a user is the live room owner, that user always ranks first, with sequence number 1; otherwise the next judgment is made. Non-owner users are sorted by user level from high to low and assigned sequence numbers 2, 3, 4 and so on. If user levels are equal, the next judgment is made: users with the same level are sorted from earliest to latest by the time they last entered the live room (accurate to the second) and assigned sequence numbers accordingly. If the times are also equal, the order is random. All drawing layers are generated with the same width and height as the live stream picture. Taking the upper-left corner of the live picture as the anchor point with coordinates [0, 0], the upper-left vertex of each drawing layer is aligned to [0, 0], so that the drawing layer overlaps the live picture area exactly.
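The three-parameter ordering logic above reduces to a single sort key. The user-record field names below are illustrative assumptions; the final random tie-break is not modeled.

```python
def rank_key(user):
    """Sort key for the patent's ordering rules (field names are illustrative):
    room owner first; then higher user level first; then earlier last-entry
    time first. Remaining ties would be broken randomly (not modeled here)."""
    return (0 if user["is_owner"] else 1, -user["level"], user["enter_time"])

users = [
    {"id": "a", "is_owner": False, "level": 5, "enter_time": 100},
    {"id": "b", "is_owner": True,  "level": 1, "enter_time": 300},
    {"id": "c", "is_owner": False, "level": 5, "enter_time": 50},
]
# b ranks first as owner; c and a share level 5, so c's earlier entry wins.
ordered = sorted(users, key=rank_key)
```

Feeding `ordered` into the overlay step then puts the owner's graffiti on top, followed by higher-level and longer-present viewers.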
Fig. 3 is a schematic structural diagram of a device for synthesizing pictures according to an embodiment of the present invention, and as shown in fig. 3, the device for synthesizing pictures according to the embodiment of the present invention includes: a receiving module 310, a layer merging module 320 and a picture composing module 330. Wherein:
a receiving module 310, configured to receive layer related information sent by each client, where the layer related information includes a live broadcast room ID, a layer ID, layer content, a frame sequence number SN, and a video stream ID;
the layer merging module 320 is configured to merge the layer from each client according to the live broadcast room ID and the layer ID to obtain layer merged content;
and the picture synthesis module 330 is configured to obtain a target video frame from a source video file according to the video stream ID and the frame sequence number SN, and to superpose and merge the layer merged content with the target video frame to obtain a synthesized picture.
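The superposing-and-merging step performed by the picture synthesis module can be illustrated with a pure-Python Porter-Duff "over" composite. This is an illustrative sketch only; the patent does not prescribe a blending algorithm, and pixels are assumed here to be 8-bit RGBA tuples in equal-sized frame and layer grids:

```python
def alpha_over(top, bottom):
    """Porter-Duff 'over': composite one 8-bit RGBA pixel on another."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (255 - ta) // 255
    if a == 0:
        return (0, 0, 0, 0)
    blend = lambda t, b: (t * ta + b * ba * (255 - ta) // 255) // a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

def synthesize_picture(target_frame, merged_layer):
    """Overlay the merged layer (same size, anchored at [0,0]) on the
    target video frame, pixel by pixel."""
    return [
        [alpha_over(l, f) for l, f in zip(lrow, frow)]
        for lrow, frow in zip(merged_layer, target_frame)
    ]
```

Fully transparent layer pixels leave the video frame unchanged, while opaque graffiti pixels replace it, matching the drawing-layer overlay behavior described above.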
With the picture synthesizing device provided by the embodiment of the present invention, a user watching a live broadcast in a live broadcast room can graffiti freely on the live broadcast picture to produce layer content, and the server merges the layer content of all clients and delivers it to every client together with the live broadcast picture, so that all clients can realize multi-user interaction on a synchronized live broadcast picture. This not only enriches the interaction modes of live broadcasting but also improves the user experience of interacting while watching the live broadcast.
It should be noted that the specific implementation of the picture synthesizing apparatus in the embodiment of the present invention is similar to that of the picture synthesizing method in the embodiment of the present invention; please refer to the description of the method section for details, which are not repeated here to reduce redundancy.
Based on the same inventive concept, a further embodiment of the present invention provides an electronic device. Referring to fig. 4, the electronic device specifically includes: a processor 401, a memory 402, a communication interface 403, and a communication bus 404;
wherein the processor 401, the memory 402, and the communication interface 403 communicate with each other through the communication bus 404; the communication interface 403 is used for information transmission between the devices;
the processor 401 is configured to invoke a computer program in the memory 402, and when executing the computer program, the processor implements all the steps of the picture synthesis method; for example, the processor implements the following steps when executing the computer program: receiving layer related information sent by each client, where the layer related information includes a live broadcast room ID, a layer ID, layer content, a frame sequence number SN, and a video stream ID; merging the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content; and obtaining a target video frame from a source video file according to the video stream ID and the frame sequence number SN, and merging the layer merged content with the target video frame to obtain a synthesized picture.
Based on the same inventive concept, a further embodiment of the present invention provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements all the steps of the above picture synthesis method; for example, the processor implements the following steps when executing the computer program: receiving layer related information sent by each client, where the layer related information includes a live broadcast room ID, a layer ID, layer content, a frame sequence number SN, and a video stream ID; merging the layer content from each client according to the live broadcast room ID and the layer ID to obtain layer merged content; and obtaining a target video frame from a source video file according to the video stream ID and the frame sequence number SN, and merging the layer merged content with the target video frame to obtain a synthesized picture.
Further, the logic instructions in the memory described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, and may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiments of the present invention. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the picture synthesis method described in the respective embodiments or parts thereof.
Furthermore, in the present disclosure, terms such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Moreover, in the present invention, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Furthermore, in the description herein, reference to the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof, provided they do not contradict each other.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A method of synthesizing a picture, comprising:
receiving layer related information sent by each client, wherein the layer related information comprises a live broadcasting room ID, a layer ID, layer contents, a frame sequence number SN and a video stream ID;
merging the layer contents from each client according to the live broadcasting room ID and the layer ID to obtain layer merged contents;
obtaining a source video file according to the video stream ID; extracting a video frame corresponding to the frame sequence number SN from the source video file, and taking the video frame as a target video frame; and superposing and merging the layer merging content and the target video frame to obtain a synthesized picture, and pushing the synthesized picture to a pre-established stream address so that each client side synchronously acquires the synthesized picture from the stream address.
2. The method for synthesizing pictures according to claim 1, wherein the merging the layer contents from each client according to the live room ID and the layer ID to obtain the layer merged content comprises:
judging whether the layer ID exists in the live broadcasting room;
if the layer ID exists, judging whether the layer content corresponding to the layer ID exists or not;
if the layer content is not yet stored, updating the stored layer content with it;
and superposing and merging the stored layers to obtain the layer merging content.
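The per-room layer bookkeeping in claim 2 can be sketched as a store keyed by live room ID and layer ID. This is an illustrative sketch only; the patent does not specify the storage structure, and here "merging" is modeled simply as an ordered list of stored contents:

```python
class LayerStore:
    """Per-live-room store of the latest layer content for each layer ID."""

    def __init__(self):
        self.rooms = {}  # room_id -> {layer_id: layer_content}

    def update(self, room_id, layer_id, content):
        """Store the content for a layer, or refresh it when it changed."""
        layers = self.rooms.setdefault(room_id, {})
        if layers.get(layer_id) != content:  # only update when not yet stored
            layers[layer_id] = content

    def merged_content(self, room_id, order=None):
        """Superpose the stored layers in the given layer-ID order and
        return the layer merged content."""
        layers = self.rooms.get(room_id, {})
        ids = order if order is not None else sorted(layers)
        return [layers[i] for i in ids if i in layers]
```

The `order` argument is where the user-type/grade/entry-time ranking of claim 4 would plug in.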
3. The picture synthesizing method according to claim 2, further comprising, before determining whether the layer ID exists in the live room:
judging whether a live broadcasting room corresponding to the live broadcasting room ID exists or not;
if the live room does not exist, the live room is created in advance.
4. A method of synthesizing a picture according to claim 2 or 3, wherein the step of superposing and combining the stored layers to obtain the layer combined content includes:
acquiring user information corresponding to each client, wherein the user information comprises a user type, a user grade and current playing time;
sequencing the content of the layers from each client according to the user type, the user grade and the current playing time to obtain a sequencing result of the content of the layers;
and superposing and merging the stored layers according to the sequencing result of the layer contents to obtain the layer merging contents.
5. The method of picture synthesis according to claim 1, further comprising, prior to pushing the synthesized picture into a pre-created stream address:
and creating the stream address for the live broadcasting room according to the ID of the live broadcasting room, and sending a notification of synchronously acquiring the synthesized picture from the stream address after a preset time to each client.
6. The method of claim 1, wherein the layer content comprises graffiti content drawn by each client in a corresponding drawing layer, wherein each client obtains the graffiti content by creating a drawing layer overlaid on a live broadcast picture area.
7. A picture synthesizing apparatus, comprising:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving layer related information sent by each client, and the layer related information comprises a live broadcasting room ID, a layer ID, layer content, a frame sequence number SN and a video stream ID;
the layer merging module is used for merging the layer contents from each client according to the live broadcasting room ID and the layer ID to obtain layer merging contents;
the picture synthesis module is used for obtaining a source video file according to the video stream ID; extracting a video frame corresponding to the frame sequence number SN from the source video file, and taking the video frame as a target video frame; and superposing and merging the layer merging content and the target video frame to obtain a synthesized picture, and pushing the synthesized picture to a pre-established stream address so that each client side synchronously acquires the synthesized picture from the stream address.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements a method of composing a picture according to any one of claims 1 to 6 when executing the computer program.
9. A non-transitory computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements a method of compositing pictures according to any of claims 1 to 6.
CN202011264683.3A 2020-11-12 2020-11-12 Picture synthesis method and device, electronic equipment and storage medium Active CN112383793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011264683.3A CN112383793B (en) 2020-11-12 2020-11-12 Picture synthesis method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112383793A CN112383793A (en) 2021-02-19
CN112383793B true CN112383793B (en) 2023-07-07

Family

ID=74583532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011264683.3A Active CN112383793B (en) 2020-11-12 2020-11-12 Picture synthesis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112383793B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187930A (en) * 2015-09-18 2015-12-23 广州酷狗计算机科技有限公司 Video live broadcasting-based interaction method and device
CN107331222A (en) * 2016-04-29 2017-11-07 北京学而思教育科技有限公司 A kind of image processing method and device
WO2018094814A1 (en) * 2016-11-28 2018-05-31 深圳Tcl数字技术有限公司 Video synthesizing method and device
CN108966031A (en) * 2017-05-18 2018-12-07 腾讯科技(深圳)有限公司 Method and device, the electronic equipment of broadcasting content control are realized in video session
CN111147880A (en) * 2019-12-30 2020-05-12 广州华多网络科技有限公司 Interaction method, device and system for live video, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106162230A (en) * 2016-07-28 2016-11-23 北京小米移动软件有限公司 The processing method of live information, device, Zhu Boduan, server and system


Also Published As

Publication number Publication date
CN112383793A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
US11792444B2 (en) Dynamic viewpoints of live event
CN109327741B (en) Game live broadcast method, device and system
CN102905170B (en) Screen popping method and system for video
CN110798697B (en) Video display method, device and system and electronic equipment
US8522160B2 (en) Information processing device, contents processing method and program
CN102859486B (en) Zoom display navigates
CN111970532B (en) Video playing method, device and equipment
US9883244B2 (en) Multi-source video navigation
CN112929684B (en) Video superimposed information updating method and device, electronic equipment and storage medium
CN109195003B (en) Interaction method, system, terminal and device for playing game based on live broadcast
JP6473262B1 (en) Distribution server, distribution program, and terminal
CN114025189A (en) Virtual object generation method, device, equipment and storage medium
WO2021199559A1 (en) Video distribution device, video distribution method, and video distribution program
CN106792237B (en) Message display method and system
CN114025185A (en) Video playback method and device, electronic equipment and storage medium
CN114430494B (en) Interface display method, device, equipment and storage medium
CN112383793B (en) Picture synthesis method and device, electronic equipment and storage medium
US20190313156A1 (en) Asynchronous Video Conversation Systems and Methods
CN110444186A (en) A kind of multi-user's order method and storage medium
CN114268827B (en) Method, device, equipment and computer readable storage medium for interaction of viewing and competition
CN105307044A (en) Method and apparatus for displaying interaction information on video program
WO2018165033A1 (en) Video production system with dynamic character generator output
CN115022666B (en) Virtual digital person interaction method and system
CN112752136B (en) Crowd funding and opening method
CN108846707A (en) Interactive advertisement playback method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant