CN114915839B - Rendering processing method for inserting video support element, electronic terminal and storage medium - Google Patents


Info

Publication number
CN114915839B
CN114915839B CN202210365078.8A CN202210365078A
Authority
CN
China
Prior art keywords
video frame
texture image
source video
mixed
current source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210365078.8A
Other languages
Chinese (zh)
Other versions
CN114915839A (en)
Inventor
利进龙
郭亚斌
袁小明
甘鹏龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210365078.8A priority Critical patent/CN114915839B/en
Publication of CN114915839A publication Critical patent/CN114915839A/en
Application granted granted Critical
Publication of CN114915839B publication Critical patent/CN114915839B/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The application discloses a rendering processing method for video support element insertion, an electronic terminal and a storage medium, wherein the method comprises the following steps: acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data of the corresponding position of the mixed element in the current source video frame; acquiring a second texture image of the inserted element corresponding to the mixed element; performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image; and rendering the mixed texture image into a third texture image of the current source video frame according to the first position data, until the rendering of all source video frames of the source video is completed. By means of the method, elements can be effectively inserted into video frames while the video is rendered and played during live broadcast, so that a dynamic element effect is achieved within the video.

Description

Rendering processing method for inserting video support element, electronic terminal and storage medium
Technical Field
The present application relates to the field of live broadcasting technologies, and in particular, to a rendering method for inserting a video support element, an electronic terminal, and a storage medium.
Background
With the development of internet and communication technologies, society has entered an era of intelligent interconnection, and interacting, being entertained and working over the internet have become increasingly popular. Live broadcast technology is among the most widespread: people can watch or host live broadcasts through intelligent devices anytime and anywhere, which has greatly enriched people's lives and widened their horizons.
During live broadcast, users are attracted to watch by presenting a variety of dynamic effects in the live interface. Live broadcast often involves video playing; for example, a gift effect can be played as a video, and the video form can present more details. However, after the video has been produced, it is generally difficult to insert corresponding elements during rendering.
Disclosure of Invention
The application mainly solves the technical problem of providing a rendering processing method for inserting video support elements, an electronic terminal and a storage medium, whereby elements can be inserted into a source video frame.
To solve the above technical problem, one technical solution adopted by the present application is to provide a rendering processing method for video support element insertion, the method comprising: acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data of the corresponding position of the mixed element in the current source video frame; acquiring a second texture image of the inserted element corresponding to the mixed element; performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image; and rendering the mixed texture image into a third texture image of the current source video frame according to the first position data, until the rendering of all source video frames of the source video is completed.
To solve the above technical problem, another technical solution adopted by the present application is to provide an electronic terminal comprising a processor, a memory and a communication circuit; the memory and the communication circuit are coupled to the processor, the memory stores a computer program, and the processor can execute the computer program to implement the above rendering processing method for inserting video support elements.
To solve the above technical problem, yet another technical solution adopted by the present application is to provide a computer-readable storage medium storing a computer program executable by a processor to implement the rendering processing method for inserting video support elements provided by the present application as described above.
The beneficial effects of the application are as follows. Different from the prior art, a first texture image of the mixed element corresponding to the current source video frame of the source video is obtained, a second texture image of the inserted element corresponding to the mixed element is obtained, the first texture image and the second texture image are mixed and rendered to obtain a mixed texture image, and the mixed texture image is rendered into a third texture image of the current source video frame according to the first position data, until the rendering of all source video frames of the source video is completed. The first position data can represent the position in the source video frame at which the inserted element needs to be inserted, and the third texture image can be the texture image obtained by rendering the current source video frame. The mixed texture image, obtained by mixing the first texture image of the mixed element with the second texture image of the inserted element, is then mixed and rendered with the third texture image at the position corresponding to the first position data, so that elements can be dynamically inserted into the source video frame while it is rendered. Because the first position data of the corresponding position in the current source video frame is obtained in advance, the inserted element can be quickly rendered at the corresponding position, the inserted element and the source video frame remain synchronous during playback, and the user's experience of watching the dynamic picture is thereby improved.
Drawings
FIG. 1 is a schematic diagram of the system components of a live broadcast system to which embodiments of the video support element insertion rendering processing method of the present application are applied;
FIG. 2 is a flow diagram of an embodiment of a rendering processing method for video support element insertion of the present application;
FIG. 3 is a schematic diagram of a material image in an embodiment of the rendering processing method for video support element insertion of the present application;
FIG. 4 is a schematic diagram of an output video frame in an embodiment of the rendering processing method for video support element insertion of the present application;
FIG. 5 is a timing diagram of an embodiment of a rendering processing method for video support element insertion of the present application;
FIG. 6 is a schematic diagram of mixing and rendering a first texture image and a second texture image to obtain a mixed texture image in an embodiment of the rendering processing method for video support element insertion of the present application;
FIG. 7 is a schematic diagram of mixing a third blank texture image rendered with the current source video frame and a control element to obtain a third texture image in an embodiment of the rendering processing method for video support element insertion of the present application;
FIG. 8 is a schematic block diagram of the circuit structure of an embodiment of the electronic terminal of the present application;
FIG. 9 is a schematic block diagram of the circuit structure of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
The inventor of the application has found in long-term research and development that, in order to enrich live effects in scenes such as live broadcast, dynamic elements such as text and pictures need to be added to MP4 video animations. The current practice, however, is to superimpose a native view on the player view while the MP4 resource is played, or to achieve the effect with an MP4+SVGA/Y2A combination; two different views therefore often exist and may fall out of sync during playback, which degrades the user's experience of watching the live broadcast. In order to insert dynamic elements during video rendering, the present application proposes the following embodiments.
As shown in fig. 1, the rendering processing method for video support element insertion of the present application may be applied to a live broadcast system 1. Specifically, the live broadcast system 1 may include a server 10, an anchor terminal 20, a viewer terminal 30 and a configuration terminal 40. The anchor terminal 20, the viewer terminal 30 and the configuration terminal 40 may be electronic terminals; specifically, the anchor terminal 20 and the viewer terminal 30 are electronic terminals in which the respective client programs are installed, that is, client terminals. The electronic terminal may be a mobile terminal, a computer, a server or another terminal; the mobile terminal may be a mobile phone, a notebook computer, a tablet computer, an intelligent wearable device, and the like, and the computer may be a desktop computer, and the like.
The server 10 may pull the live data stream from the anchor terminal 20, process the obtained live data stream accordingly and push it to the viewer terminal 30. After acquiring the live data stream, the viewer terminal 30 can view the live process of the anchor or guests. Mixing of live data streams may occur at at least one of the server 10, the anchor terminal 20 and the viewer terminal 30. Video or voice connections can be established between anchor terminals 20, and between an anchor terminal 20 and a viewer terminal 30. In a video co-streaming (link-mic) session, each connected party can push a live data stream including a video stream to the server 10, which then pushes the corresponding live data to the other connected parties and to the viewer terminals 30. The anchor terminal 20 and the viewer terminal 30 can display the corresponding live pictures in the live broadcast room.
Of course, the roles of the anchor terminal 20 and the viewer terminal 30 are relative: a terminal that is broadcasting is the anchor terminal 20, and a terminal that is watching the broadcast is the viewer terminal 30. The configuration terminal 40 is used to configure the client, the server and so on, for example to configure the interface layout, gift effects, animations and other functions displayed in the client. After a project is produced, the configuration terminal 40 may send it to the server 10, and the server 10 delivers it to the client terminal to implement the background configuration of the client. Similarly, the client terminal may process and display the data sent from the server 10 for the viewer to watch.
As shown in fig. 2, the embodiment of the rendering processing method for inserting video support elements of the present application may use a client terminal as an execution body. The embodiment may include the following steps: s100: and acquiring a first texture image of a mixed element corresponding to the current source video frame of the source video, and first position data of the mixed element corresponding to a corresponding position in the current source video frame. S200: a second texture image of the inserted element corresponding to the blended element is acquired. S300: and performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image. S400: and rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until the rendering of all source video frames of the source video is completed.
The method obtains a first texture image of the mixed element corresponding to the current source video frame of the source video, obtains a second texture image of the insertion element corresponding to the mixed element, performs mixed rendering on the first texture image and the second texture image to obtain a mixed texture image, and renders the mixed texture image into a third texture image of the current source video frame according to the first position data, until the rendering of all source video frames of the source video is completed. The first position data can represent the position in the source video frame at which the insertion element needs to be inserted, and the third texture image can be the texture image obtained by rendering the current source video frame; the mixed texture image, obtained by mixing the first texture image of the mixed element with the second texture image of the insertion element, is then mixed and rendered with the third texture image at the position corresponding to the first position data.
The method described in this embodiment may be applied to a scenario in which the client terminal processes the dynamic resource file after receiving the dynamic resource file configured via the configuration terminal 40 via the server 10.
Before the dynamic resource file is sent to the client terminal via the server 10, the dynamic resource file needs to be generated by the configuration terminal 40, which may include the following steps:
S010: first position data of a mixed element of each frame of material image in the material image is acquired to serve as position data of the mixed element in a source video frame corresponding to the material image.
When the material resource is designed through AE software, the configuration terminal 40 can read the material resource through corresponding software. The material resource may be used to describe the insertion of a corresponding insertion element into the source video, and includes identification information, the position and the shape corresponding to the insertion element, so that the client terminal knows which element is to be extracted and in what shape and at what position of the source video frame it is to be rendered.
After the material resource is read, it is analysed. By analysing each material image in the material resource, the position of each mixed element within each material image can be calculated and its first position data obtained, and the first position data of every mixed element in all material images of the project can thereby be obtained. The first position data of a mixed element in the corresponding material image is used to reflect/map the position data of the corresponding insertion element in the source video frame, so that the client terminal knows to which position of the source video frame the extracted insertion element is to be rendered.
As to how to acquire the first position data of the mixed element in each material image, a first transformation matrix may be calculated that maps the relative change of the mixed element between the first frame material image and each subsequent frame material image. For example, given the frame rate fps at which the whole animation project is to be rendered into the source video and the set animation duration, the whole process can be divided into fps×duration frames, that is, there are fps×duration frames of material images, and the position of the mixed element in each frame of material image needs to be acquired. In particular, by analysing each frame of material image, a corresponding first transformation matrix can be generated, for example one transformation matrix per frame of material image, thereby describing the positional transformation of the mixed element across the material images of different frames.
The first transformation matrix of each subsequent frame material image maps the relative change of the mixed element of that subsequent frame material image with respect to the same mixed element of the first frame material image. For example, if the total number of material image frames is 10, then frames 2-10 each correspond to a first transformation matrix, and the first transformation matrices of the material images of different frames describe the relative change of the mixed elements of those frames with respect to the mixed elements of frame 1.
As for the specific creation process of the first transformation matrix, reference may be made to the following steps included in step S010:
S011: and acquiring change data of the mixed elements of the material images of each subsequent frame relative to the mixed elements of the material images of the first frame in a translation dimension, a rotation dimension and a scaling dimension.
The subsequent frame material images belong to the same project as the first frame material image; that is, the material images following the first frame material image are referred to as subsequent frame material images.
The change data may refer to data describing the translation, rotation and scaling of the mixed element in the material images of different frames. The change data may be data in the translation, rotation and scaling dimensions of the mixed element of each frame of material image that is determined or input when the material resource is designed, or may be obtained by analysing and calculating all frames of material images of the designed material resource.
For example, at 0s, the anchor point of mixing element 1 is (0, 0), the position is (225,730), the rotation angle is 0, and the scaling is 60%. At 2s, the anchor point of mixing element 1 is (0, 0), the position is (352,730), the rotation angle is 0, and the scaling is 60%.
The anchor point may be taken as the origin of mixing element 1, i.e. the upper left corner of the layer range of mixing element 1, so the anchor point is (0, 0). The positions (225, 730) and (352, 730) are the positions of the anchor point relative to the origin of the material image, that is, relative to its upper left corner. The position can represent the translation and rotation angle of the mixed element, and the scaling ratio represents its scaling relation. The change between 2 s and 0 s in the translation, rotation and scaling dimensions can thus be calculated — here a translation of (352−225, 730−730) = (127, 0), no rotation and no change in scale — giving the change data of 2 s relative to 0 s.
In summary, by calculating the change of the subsequent frame material image relative to the first frame material image in three dimensions, corresponding change data can be obtained, so that the corresponding first transformation matrix can be conveniently constructed by using the change data.
S012: and calculating a first transformation matrix corresponding to each subsequent frame material image by using the corresponding change data of each subsequent frame material image.
After the change data of each subsequent frame material image are obtained, the corresponding first transformation matrix can be calculated from them. After the first transformation matrix corresponding to each subsequent frame material image has been calculated, the first position data corresponding to each subsequent frame material image can be calculated using the first position data of the first frame material image, as described in detail in S013 below.
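As an illustrative aid, a minimal Kotlin sketch of how such a first transformation matrix could be assembled from the change data is given below; the 2D affine model, the scale-rotate-translate decomposition order and all names are assumptions of this sketch rather than details fixed by the application.

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Build a row-major 3x3 affine matrix from the change data of one subsequent frame
// relative to the first frame (assumed model: scale, then rotate, then translate).
fun firstTransformMatrix(
    dx: Float, dy: Float,          // translation change relative to the first frame
    rotationDeg: Float,            // rotation change
    scaleX: Float, scaleY: Float   // scaling change (1.0 = unchanged)
): FloatArray {
    val r = Math.toRadians(rotationDeg.toDouble())
    val c = cos(r).toFloat()
    val s = sin(r).toFloat()
    // [ sx*c  -sy*s  dx ]
    // [ sx*s   sy*c  dy ]
    // [ 0      0     1  ]
    return floatArrayOf(
        scaleX * c, -scaleY * s, dx,
        scaleX * s,  scaleY * c, dy,
        0f,         0f,          1f
    )
}

fun main() {
    // Example from the text: from 0 s to 2 s the element moves from (225, 730) to (352, 730),
    // with no rotation and an unchanged 60% scale, i.e. a pure translation of (127, 0).
    val m = firstTransformMatrix(dx = 127f, dy = 0f, rotationDeg = 0f, scaleX = 1f, scaleY = 1f)
    println(m.toList())
}
```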
S013: and acquiring first position data of the mixed element in the first frame material image, and respectively transforming the first position data of the mixed element in the first frame material image through a first transformation matrix corresponding to each subsequent frame material image to acquire the first position data of the mixed element in each subsequent frame material image.
The first position data of the mixed element of the first frame material image are acquired, namely the starting position data of the mixed element for the animation process. The first position data of the first frame material image are then transformed by the first transformation matrix of each subsequent frame material image, so that the first position data of the same mixed element in each subsequent frame material image can be obtained. In this way, the first position data of the mixed element of each subsequent frame material image can be rapidly calculated using the first transformation matrix; that is, the position presented in the material image is converted into position data that the computer can read and calculate.
Specifically, for a specific process of calculating the first position data corresponding to each subsequent frame material image using the first position data corresponding to the first frame material image, reference may be made to the following steps included in S013:
s0131: and determining that each vertex coordinate of the mixed element in the first frame material image corresponds to the first coordinate data so as to obtain first position data of the mixed element in the first frame material image.
For example, the layer range can be obtained from the width and height of the mixed element of the first frame material image and its position (for example, the anchor point position): upper left vertex, lower left vertex, upper right vertex and lower right vertex. The coordinates of each vertex can thus be calculated to obtain the first coordinate data, and the first position data may be embodied by the first coordinate data. Taking the mixed element 1 illustrated in diagram a of fig. 3 as an example, the coordinates of the 4 vertices of mixed element 1 in the first frame material image can be determined in the above manner.
S0132: and multiplying the first coordinate data by a first transformation matrix corresponding to each subsequent frame material image respectively to obtain transformation coordinate data of the mixed element in each subsequent frame material image.
Using the first transformation matrix calculated in the previous step, the first coordinate data are multiplied by the first transformation matrix corresponding to each subsequent frame material image to calculate the transformed coordinate data in that subsequent frame material image. Specifically, the four vertices calculated above are each multiplied by the first transformation matrix to obtain the new four vertex positions of the mixed element corresponding to that frame of material image; these four positions are not necessarily the vertex positions of the layer range of the mixed element in that frame of material image.
For example, diagram b of fig. 3 illustrates mixing element 1, where the mixing element 1 of a certain subsequent frame material image has been moved and rotated with respect to the mixing element 1 of the first frame material image. After rotation, the mixed element 1 of the first frame material image presents a diamond-like shape in that subsequent frame material image, but the layer ranges differ: the layer range of mixed element 1 in the first frame material image is defined by the four vertices of the square, while its layer range in the subsequent material image is defined by the "dotted line frame" shown in the figure. Therefore, the layer range of the mixed element in that frame of material image needs to be determined from the new four vertex positions obtained.
S0133: and determining second coordinate data corresponding to each vertex coordinate of the mixed element in each subsequent frame material image by using the transformation coordinate data of the mixed element so as to obtain first position data of the mixed element in each subsequent frame material image.
Specifically, the maximum and minimum values of the mixed element of that frame of material image on the abscissa and on the ordinate may be determined from the transformed coordinate data obtained in the above steps, and the layer range of the mixed element is then determined from these maximum and minimum values.
For example, among the new vertex positions corresponding to the transformed coordinate data, the maximum and minimum values on the x and y axes are determined, giving the 4 values (minX, minY, maxX, maxY). The new four vertex positions of the layer range of the mixed element of the subsequent frame material image are then: upper left (minX, minY); lower left (minX, maxY); upper right (maxX, minY); lower right (maxX, maxY), i.e. the second coordinate data. The first position data of the mixing element in the subsequent frame material image may be represented by this second coordinate data.
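Continuing the sketch above (same row-major 3x3 matrix convention, names assumed), the transformed vertices and the resulting layer range of S0131-S0133 could be computed as follows.

```kotlin
data class Vertex(val x: Float, val y: Float)

// Apply the 3x3 affine "first transformation matrix" to one vertex.
fun transform(m: FloatArray, v: Vertex) = Vertex(
    m[0] * v.x + m[1] * v.y + m[2],
    m[3] * v.x + m[4] * v.y + m[5]
)

// Transform the four vertices of the first-frame layer range, then take the axis-aligned
// bounding box (minX, minY, maxX, maxY) as the layer range in the subsequent frame.
fun layerRange(m: FloatArray, firstFrameVertices: List<Vertex>): List<Vertex> {
    val transformed = firstFrameVertices.map { transform(m, it) }
    val minX = transformed.minOf { it.x }
    val maxX = transformed.maxOf { it.x }
    val minY = transformed.minOf { it.y }
    val maxY = transformed.maxOf { it.y }
    // Second coordinate data: upper left, lower left, upper right, lower right.
    return listOf(Vertex(minX, minY), Vertex(minX, maxY), Vertex(maxX, minY), Vertex(maxX, maxY))
}
```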
After the first position data of the mixed element of each frame of material image are obtained, each item of first position data is stored and recorded as the display position of the mixed element in the source video frame corresponding to that frame of material image. By analogy, the positions of the four vertices of all the mixed elements in every frame of material image can be calculated and stored in a dictionary.
S020: and sending the video data of the source video and the first position data to a server, and forwarding the video data and the first position data to the client terminal through the server.
After the first position data are obtained, the video data of the source video and the first position data may be output. The server 10 may forward the received data to the client terminal, where they are rendered. After receiving the video data and the first position data of the source video through the server 10, the client terminal acquires the insertion element corresponding to the mixed element, and, when rendering the source video in the live broadcast picture, renders the insertion element into the corresponding source video frame according to the corresponding first position data. Before the video data of the source video and the first position data are output, the output video frames and the second position data may be acquired. Specifically, the output canvas corresponding to an output video frame may be divided into a first region and a second region that are arranged at an interval. Optionally, the number of output video frames is the same as the number of source video frames, i.e. the output video frames and the source video frames are in one-to-one correspondence. The first region of an output video frame displays the corresponding source video frame, and the second region displays the mixing elements to be inserted into that source video frame. The second region is in effect used to store each mixing element corresponding to the source video frame, so that the client terminal can identify each mixing element individually.
When determining an output video frame, as shown in fig. 4, the source video frame may be displayed in the first region of the output canvas, and each mixed element in the material image corresponding to that source video frame is copied in sequence to the second region of the output canvas, so as to form the output video frame, while the second position data of each mixed element in the second region of the output canvas are recorded. Specifically, the output video frame and the second position data of the inserted layer in the second region can be obtained through a corresponding plug-in/extension function in the AE software. The second position data may be expressed by the coordinate data of the inserted layer in the output video frame. The description information is generated from the first position data of the mixed elements in the source video frame and the second position data in the output video frame. Specifically, the data in the description information can be saved in the form of a json array, which may contain one or more mixed elements, each mixed element having information such as width, height, index information and type. The description information can also include the first position data and second position data stored in the form of a json array, and the json array storing the first and second position data can be named datas, so that the first and second position data can be acquired more quickly. Finally, all output video frames are converted into the output video in sequence, so as to obtain the video data of the output video, i.e. the dynamic resource file. Specifically, the dynamic resource file may be an MP4 resource file.
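For illustration only, the per-element entries of the json "datas" array might be modelled on the client as the following Kotlin data classes; the field names are assumptions, since the application does not fix the exact json schema.

```kotlin
// Assumed shape of one entry of the json "datas" array; field names are illustrative.
data class ElementRect(val x: Float, val y: Float, val width: Float, val height: Float)

data class MixedElementInfo(
    val index: Int,                  // index information linking the mixed element to its insertion element
    val type: String,                // element type, e.g. text or image
    val width: Float,
    val height: Float,
    val firstPosition: ElementRect,  // position of the element in the source video frame
    val secondPosition: ElementRect  // position of the element mask in the second region of the output frame
)

data class FrameDescription(
    val frameIndex: Int,
    val elements: List<MixedElementInfo>
)
```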
As shown in fig. 5, after the client terminal acquires the dynamic resource file sent by the configuration terminal 40 via the server 10, the processing of the dynamic resource of the client terminal may refer to the following steps in this embodiment:
s100: and acquiring a first texture image of a mixed element corresponding to the current source video frame of the source video, and first position data of the mixed element corresponding to a corresponding position in the current source video frame.
The first texture image of the blending element corresponding to the current source video frame of the source video may be an image obtained after rendering the blending element. The first position data of the mixing element corresponding to the corresponding position in the current source video frame may represent a position that is required to be inserted when inserting the mixing element into the current source video frame.
In one implementation, the following steps are included prior to S100:
s110: and receiving the dynamic resource file from the server.
After the configuration terminal 40 obtains the dynamic resource file through the above configuration, the dynamic resource file is sent to the server 10, and forwarded to the client terminal through the server 10, and the client terminal receives the dynamic resource file from the server 10. The client terminal receives the dynamic resource file and processes the dynamic resource file, so that the insertion element is inserted into the video frame when the video is played.
S120: and analyzing the dynamic resource file to obtain an output video file and descriptive information.
After receiving the dynamic resource file, the client terminal analyses it to obtain an output video file and description information. The output video file may be a video file composed of a plurality of output video frames, and the description information may be generated by combining the first position data of the mixing element in the source video frame and the second position data in the output video frame. Specifically, after receiving the dynamic resource file, the client terminal analyses it and extracts the output video file and the description information carried in the header file of the dynamic resource file.
S130: and extracting the current output video frame from the output video file, and extracting the first position data, the second position data and the index information corresponding to the current source video frame from the description information.
The current output video frame may include a first region, a second region and a third region arranged at intervals, as shown in fig. 4. Specifically, the first region is used for the image of the current source video frame, the second region is used for the images of the mixing elements corresponding to the current source video frame, arranged at intervals, and the third region is used for the image of the control element corresponding to the current source video frame. The length (width) of the current output video frame may be twice that of the current source video frame, and the length and width of the first region correspond to those of the current source video frame.
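Under this layout, a minimal sketch of how the client might derive the first region from the output-frame dimensions is shown below; the assumption that the first region occupies the left half of the output canvas is illustrative.

```kotlin
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)

// Assumed split: the output frame is twice as wide as the source frame and its left half
// holds the source frame, with the mixing elements and control element in the remaining area.
fun firstRegion(outputWidth: Int, outputHeight: Int): Region =
    Region(x = 0, y = 0, width = outputWidth / 2, height = outputHeight)

fun main() {
    // For a 1920x1080 source frame the output frame would be 3840x1080 under this assumption.
    println(firstRegion(outputWidth = 3840, outputHeight = 1080))
}
```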
After the description information has been parsed from the dynamic resource file, the first position data, the second position data and the index information corresponding to the current source video frame can be extracted from the description information, and the mixed element and the insertion element that correspond to each other can be inserted into the current source video frame based on these data, so that out-of-sync display of the source video frame and the insertion element during video playing can be reduced.
In one implementation, S100 may include the steps of:
s140: a first blank texture image is created.
The first blank texture image may be a blank texture image for rendering to obtain the first texture image, and in particular, the first texture image may be obtained by rendering the blending element on the first blank texture image. In the process of creating the first blank texture image, the first blank texture image can be created based on the width and height of the mixed element through a preset plug-in tool.
S150: and acquiring RGB data of the mixed element, and rendering RBG data of the mixed element on the first blank texture image to obtain a first texture image.
In the process of producing the first texture image, the RGB data of the mixed element are acquired and rendered on the first blank texture image to obtain the first texture image.
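Assuming the client renders with OpenGL ES 2.0 on Android, S140/S150 might look roughly like the following sketch, which creates a blank texture sized to the element and uploads pixel data onto it; the helper name and the ARGB-to-RGBA conversion are assumptions of this sketch.

```kotlin
import android.opengl.GLES20
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Create a texture of the given size and upload ARGB pixels (one Int per pixel) as RGBA bytes.
fun createTextureFromPixels(width: Int, height: Int, argbPixels: IntArray): Int {
    val ids = IntArray(1)
    GLES20.glGenTextures(1, ids, 0)                       // "blank texture image"
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0])
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)

    val buffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder())
    for (p in argbPixels) {
        buffer.put((p shr 16 and 0xFF).toByte())          // R
        buffer.put((p shr 8 and 0xFF).toByte())           // G
        buffer.put((p and 0xFF).toByte())                 // B
        buffer.put((p shr 24 and 0xFF).toByte())          // A
    }
    buffer.position(0)
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buffer)
    return ids[0]
}
```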
For how to acquire RGB data of the mixing element, reference may be made to the steps included in S150 as follows:
S151: and acquiring the current output video frame corresponding to the current source video frame, and the second position data of the mixing element in the current output video frame.
Because the image of the current source video frame and the image of the mixed element are arranged on the current output video frame at intervals, and the mixed element, the current source video frame and the current output video frame are in one-to-one correspondence, in the process of acquiring the RGB data of the mixed element, the second position data of the mixed element in the current output video frame can be acquired by determining the current output video frame corresponding to the current source video frame, and then the RGB data of the mixed element can be acquired by utilizing the second position data.
S152: RGB pixel values for each pixel of the image of the blending element are acquired in the current output video frame using the second position data.
The second position data may represent the position of the mixing element in the current output video frame. Since the pixels of the mixing element take only two colours, black and white, with a pixel value of 0 for black and 255 for white, the pixel values stored by the mixing element are only 0 and 255, and the RGB value of each pixel of the image of the mixing element can be determined from the second position data of the mixing element in the current output video frame.
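As a minimal CPU-side sketch of S152, assuming the decoded current output video frame is available as an ARGB IntArray in row-major order (the function and parameter names are illustrative):

```kotlin
// Copy the black-and-white pixels of one mixing element out of the second region of the
// decoded output video frame, using its second position data (x, y, width, height).
fun extractMaskPixels(
    framePixels: IntArray, frameWidth: Int,
    x: Int, y: Int, width: Int, height: Int
): IntArray {
    val out = IntArray(width * height)
    for (row in 0 until height) {
        for (col in 0 until width) {
            // Each pixel of the mixing element is either black (RGB 0) or white (RGB 255).
            out[row * width + col] = framePixels[(y + row) * frameWidth + (x + col)]
        }
    }
    return out
}
```

The returned pixels could then be uploaded to the first blank texture image, for example with the texture-upload sketch shown earlier.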
S200: a second texture image of the inserted element corresponding to the blended element is acquired.
The insertion elements may be elements that need to be inserted into the current source video frame, and the insertion elements are in one-to-one correspondence with the mixing elements. In the process of inserting the insertion element into the current source video frame, the insertion element and the mixed element are mixed and rendered, and then the mixed and rendered image and the current source video are mixed and rendered, so that the corresponding insertion element and the corresponding mixed element are inserted into the current source video frame, and the situation that the insertion element and the video frame are not synchronous in display in the video playing process is reduced.
In one implementation, S200 may include the steps of:
s210: a second blank texture image is created.
The second blank texture image may be a blank texture image for rendering the second texture image; in particular, the second texture image may be obtained by rendering the insert element on the second blank texture image. In the process of creating the second blank texture image, the second blank texture image may be created based on the width and height of the insert element through a preset plug-in tool.
S220: and acquiring RGB data of the inserted element, and rendering the RGB data of the inserted element on a second blank texture image to obtain a second texture image.
In the process of manufacturing the second texture image, the second texture image can be obtained by acquiring the RGB data of the insert element and rendering the RGB data of the insert element on the second blank texture image.
For how to acquire RGB data of the insertion element, reference may be made to the steps included in S220 as follows:
s221: and acquiring index information of the mixing element corresponding to the current source video frame according to the sequence number of the current output video frame.
Because the current source video frame is arranged in the current output video frame, the sequence numbers of the frames corresponding to the current source video frame and the current output video frame are the same, and the corresponding current source video frame can be determined through the sequence number of the current output video frame. Meanwhile, the current source video frame and the mixed elements are in one-to-one correspondence, and each mixed element can have unique corresponding index information, so that the index information of the mixed element corresponding to the current source video frame can be determined. The index information may be information extracted from the description information for associating the mixed element with the inserted element, and by determining the index information of the mixed element corresponding to the current source video frame, the inserted element corresponding to the index information may be determined, thereby determining the inserted element corresponding to the current source video frame, and thus RGB data of the inserted element may be acquired.
S222: RGB data of the corresponding inserted element is acquired using the address indicated by the index information.
Since the index information relates the inserted element to the mixed element, the inserted element can be found through the address indicated by the index information of the identified mixed element, its RGB data can then be obtained, and the RGB data of the inserted element can be rendered on the second blank texture image to obtain the second texture image.
For how to acquire RGB data of the corresponding insertion element using the address indicated by the index information, reference may be made to the following steps included in S222:
S2221: and inquiring the local address of the corresponding inserted element in the Map structure array by utilizing the index information.
The Map structure array may be obtained from a service side preset in the client terminal, and is used for storing the index information and the array of the local address of the corresponding insert element in an associated manner. Specifically, keys in the Map structure array may be used to store index information, and values in the Map structure array may be used to store the local address of the inserted element. And determining a Map structure array carrying the corresponding index information by using the obtained index information of the mixed element corresponding to the current source video frame, and further obtaining the local address of the inserted element from the Map structure array.
S2222: RGB data of an insert element pointed to by a local address is acquired.
After the local address of the corresponding insert element is determined using the index information, RGB data of the insert element to which the local address points may be acquired. After the RGB data of the insert element is obtained, the RGB data of the insert element can be rendered on the second blank texture image to obtain a second texture image.
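A minimal sketch of S2221/S2222, assuming the Map structure array is held as a Kotlin Map from index information to a local file path; the names are illustrative, and the decode-to-RGB step is left out because the application does not fix an image-decoding API.

```kotlin
import java.io.File

// insertAssets: keys hold index information, values hold the local path of the matching insertion element.
fun loadInsertElementBytes(insertAssets: Map<Int, String>, index: Int): ByteArray? {
    val localPath = insertAssets[index] ?: return null   // S2221: query the local address by index information
    val file = File(localPath)
    if (!file.exists()) return null
    return file.readBytes()                              // S2222: read the insertion element pointed to by the address
    // The bytes would then be decoded to RGB data and rendered onto the second blank texture image.
}
```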
S300: and performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image.
After the first texture image rendered by the RGB data of the blending element and the second texture image rendered by the RGB data of the insertion element are acquired, the first texture image and the second texture image may be mixed and rendered to obtain a mixed texture image. After the mixed texture image is obtained, mixed rendering can be carried out on the mixed texture image and the current source video frame, so that the mixed element and the insertion element are inserted into the current source video frame, and when the current source video frame is played, the situation that the insertion element and the source video frame are not synchronous in display can be effectively reduced.
In one implementation, S300 may include the steps of:
s310: and processing the second texture image to be matched with the shape of the first texture image in shape to obtain a mixed texture image.
As shown in fig. 6, a in fig. 6 may represent the first texture image, b in fig. 6 the second texture image, and c in fig. 6 the mixed texture image. In the process of mixing and rendering the first texture image and the second texture image, the second texture image is processed into an image whose shape matches that of the first texture image, and this is used as the mixed texture image.
For how the second texture image is processed to match the shape of the first texture image, the following steps can be referred to:
s311: and processing each pixel of the second texture image by using transparency information corresponding to the pixel value of each pixel of the first texture image so as to enable the shape of the second texture image to be matched with the shape of the first texture image.
In the process of processing the second texture image to match the shape of the first texture image, the pixels of the second texture image are processed using the transparency information corresponding to the pixel values of the pixels of the first texture image, so that the shape of the second texture image matches that of the first texture image. For example, as shown in fig. 6, if the second texture image is a coloured image, then, since the first texture image contains only black and white, when a pixel of the second texture image is mixed with a black pixel of the first texture image the mixed portion is transparent, and when it is mixed with a white pixel of the first texture image the mixed portion is displayed.
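One way S311 could be realised, assuming the client blends textures with an OpenGL ES 2.0 fragment shader, is sketched below: the grayscale first texture acts as a mask whose value becomes the transparency of the insertion element. The uniform and varying names are assumptions.

```kotlin
// Fragment shader source held as a Kotlin string (OpenGL ES 2.0 assumption).
val MASK_BLEND_FRAGMENT_SHADER = """
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uMaskTexture;    // first texture image (black-and-white mixed element)
uniform sampler2D uInsertTexture;  // second texture image (insertion element)

void main() {
    float mask = texture2D(uMaskTexture, vTexCoord).r;    // 0.0 for black, 1.0 for white
    vec4 insertColor = texture2D(uInsertTexture, vTexCoord);
    // Use the mask value as transparency: the result keeps the shape of the first texture image.
    gl_FragColor = vec4(insertColor.rgb, insertColor.a * mask);
}
"""
```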
S400: and rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until the rendering of all source video frames of the source video is completed.
The first position data can represent the position in the current source video frame at which the insertion element needs to be inserted. After the first texture image and the second texture image have been mixed and rendered to obtain the mixed texture image, the mixed texture image can be rendered into the third texture image of the current source video frame according to the first position data, thereby inserting the insertion element into the current source video frame. Each subsequent source video frame is then processed in the same way until all source video frames of the source video have been rendered, so that out-of-sync display of the insertion element and the source video frame during playback of the source video is reduced, the user's viewing experience is improved, and user stickiness is increased.
In one implementation, for how the third texture image is obtained, reference may be made to the following steps:
S410: a third blank texture image is created.
The third blank texture image may be a blank texture image for rendering the third texture image, and in particular, the third texture image may be obtained by rendering RGB data of the current source video frame on the third blank texture image. In the process of creating the third texture image, a third blank texture image may be created based on the width and height of the current source video frame by a preset plug-in tool, specifically, the width of the third blank texture image may be half of the width of the current output video frame, and the height of the third blank texture image may be the same as the height of the current output video frame.
S420: and acquiring RGB data of the current source video frame, and rendering the RGB data of the current source video frame on a third blank texture image to obtain a third texture image.
In the process of manufacturing the third texture image, the third texture image can be obtained by acquiring the RGB data of the current source video frame and rendering the RGB data of the current source video frame on the third blank texture image.
For how to acquire RGB data of the current source video frame, reference may be made to the following steps included in S420:
s421: RGB pixel values of pixels of an image of a current source video frame are obtained in a current output video frame.
Since the image of the current source video frame is set in the first region of the corresponding current output video frame, when RGB data of the current source video frame is acquired, RGB values of each pixel of the image of the current source video frame can be acquired in the corresponding current output video frame.
In one implementation, for how to render RGB data of the current source video frame on the third blank texture image, reference may be made to the following steps:
s422: pixel values of pixels of a control element corresponding to a current source video frame are obtained from a current output video frame.
Because the current output video frame is provided with the image of the current source video frame, the image of the mixed element and the image of the control element at intervals, the pixel value of each pixel of the control element corresponding to the current source video frame can be obtained from the current output video frame, so that the transparency information corresponding to the pixel value of each pixel of the control element is utilized to render the third blank texture image, and the third texture image is obtained.
S423: and processing each pixel of the third blank texture image rendered with the RGB data of the current source video frame by utilizing transparency information corresponding to the pixel value of each pixel of the control element so as to obtain a third texture image.
As shown in fig. 7, d in fig. 7 may represent the third blank texture image on which the RGB data of the current source video frame are rendered, e in fig. 7 the image of the control element, and f in fig. 7 the third texture image. Each pixel of the third blank texture image rendered with the RGB data of the current source video frame is processed using the transparency information corresponding to the pixel value of the corresponding pixel of the control element, thereby obtaining the third texture image. In this way the resulting third texture image satisfies the transparency requirement given by the transparency information of the control element, and the mixed element and the insertion element are further inserted into the current source video frame, so that out-of-sync display of the insertion element and the source video frame during video playing is reduced, display synchronization during playback is improved, the experience of the user watching live video is further improved, and user stickiness is increased.
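A companion sketch for S422/S423 under the same OpenGL ES 2.0 assumption: the control element supplies per-pixel transparency for the third texture image of the current source frame (uniform and varying names are assumptions).

```kotlin
// Fragment shader source held as a Kotlin string (OpenGL ES 2.0 assumption).
val CONTROL_ALPHA_FRAGMENT_SHADER = """
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uSourceTexture;   // third blank texture rendered with the current source frame's RGB data
uniform sampler2D uControlTexture;  // control element taken from the third region of the output frame

void main() {
    vec3 rgb = texture2D(uSourceTexture, vTexCoord).rgb;
    float alpha = texture2D(uControlTexture, vTexCoord).r;  // transparency information of the control element
    gl_FragColor = vec4(rgb, alpha);
}
"""
```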
In summary, in the rendering processing method for video support element insertion provided in this embodiment, the first texture image of the mixed element corresponding to the current source video frame is obtained, the second texture image of the insertion element corresponding to the mixed element is obtained, the first texture image and the second texture image are mixed and rendered to obtain the mixed texture image, and the mixed texture image is rendered into the third texture image of the current source video frame according to the first position data, until the rendering of all source video frames of the source video is completed. Since the first position data can represent the position in the source video frame at which the insertion element needs to be inserted, and the third texture image can be the texture image obtained by rendering the current source video frame, mixing and rendering the mixed texture image with the third texture image at the position corresponding to the first position data allows elements to be dynamically and effectively inserted while the source video frame is rendered. Because the first position data of the mixed element corresponding to the current source video frame are obtained in advance, the insertion element can be quickly rendered at the corresponding position, the insertion element and the source video frame remain synchronous while the user watches, and the user's experience of viewing the dynamic picture is thereby improved.
As shown in fig. 8, the electronic terminal 100 described in the embodiment of the electronic terminal 100 of the present application may be the above-described viewer terminal 30, and the electronic terminal 100 includes a processor 110, a memory 120, and a communication circuit. The memory 120 and the communication circuit are coupled to the processor 110.
The memory 120 is used to store a computer program, and may be a RAM (random access memory), a ROM (read-only memory), or another type of storage device. In particular, the memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory is used to store at least one piece of program code.
The processor 110 is used to control the operation of the electronic terminal 100, and the processor 110 may also be referred to as a CPU (Central Processing Unit ). The processor 110 may be an integrated circuit chip with signal processing capabilities. Processor 110 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The general purpose processor may be a microprocessor or the processor 110 may be any conventional processor or the like.
The processor 110 is configured to execute a computer program stored in the memory 120 to implement the rendering processing method described in the embodiment of the rendering processing method for video support element insertion of the present application.
In some embodiments, the electronic terminal 100 may further include a peripheral interface 130 and at least one peripheral device. The processor 110, the memory 120, and the peripheral interface 130 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral interface 130 by a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of the radio frequency circuit 140, the display 150, the audio circuit 160, and the power supply 170.
The peripheral interface 130 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 110 and the memory 120. In some embodiments, the processor 110, the memory 120, and the peripheral interface 130 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 110, the memory 120, and the peripheral interface 130 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 140 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 140 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 140 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 140 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 140 may communicate with other terminals via at least one wireless communication protocol. The wireless communication networks and protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 140 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display 150 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 150 is a touch display, the display 150 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 110 as a control signal for processing. In this case, the display 150 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 150, disposed on the front panel of the electronic terminal 100; in other embodiments, there may be at least two displays 150, disposed on different surfaces of the electronic terminal 100 or in a folded design; in still other embodiments, the display 150 may be a flexible display disposed on a curved surface or a folded surface of the electronic terminal 100. The display 150 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 150 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The audio circuit 160 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 110 for processing, or inputting them to the radio frequency circuit 140 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different positions of the electronic terminal 100. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 110 or the radio frequency circuit 140 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 160 may also include a headphone jack.
The power supply 170 is used to power the various components in the electronic terminal 100. The power supply 170 may use alternating current or direct current, and may include a disposable or rechargeable battery. When the power supply 170 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
For detailed explanation of the functions and execution processes of each functional module or component in the embodiment of the electronic terminal of the present application, reference may be made to the above explanation of the embodiment of the rendering processing method for inserting video support elements in the embodiment of the present application, which is not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed electronic terminal 100 and rendering processing method may be implemented in other manners. For example, the embodiments of the electronic terminal 100 described above are merely illustrative; e.g., the division of modules or units is merely a logical function division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Referring to fig. 9, the above-described integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in the computer-readable storage medium 200. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions/computer programs to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash disk, a mobile hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk, and electronic terminals such as a computer, a mobile phone, a notebook computer, a tablet computer, a camera, and the like having the storage medium.
The execution process of the program data in the computer-readable storage medium 200 may be understood with reference to the above embodiment of the rendering processing method for video support element insertion of the present application, and is not repeated herein.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (15)

1. A rendering processing method for video support element insertion, comprising:
Acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data of the mixed element corresponding to a corresponding position in the current source video frame, wherein the mixed element is in a material image, the material image corresponds to the current source video frame, and the first position data of the mixed element in the corresponding material image is used for reflecting the position data of a corresponding inserted element in the current source video frame;
Acquiring a second texture image of an inserted element corresponding to the mixed element;
performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image;
And rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until all the source video frames of the source video are rendered.
2. The rendering processing method according to claim 1, wherein:
The step of performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image comprises the following steps:
And processing the second texture image to be matched with the shape of the first texture image in shape to obtain the mixed texture image.
3. The rendering processing method according to claim 2, wherein:
the processing the second texture image to match the shape of the first texture image to obtain the mixed texture image comprises the following steps:
And processing each pixel of the second texture image by using transparency information corresponding to the pixel value of each pixel of the first texture image so as to enable the shape of the second texture image to be matched with the shape of the first texture image.
4. A rendering processing method according to any one of claims 1 to 3, wherein:
the obtaining a first texture image of a mixing element corresponding to a current source video frame of a source video includes:
Creating a first blank texture image;
And acquiring RGB data of the mixed element, and rendering the RGB data of the mixed element on the first blank texture image to obtain the first texture image.
5. The rendering processing method according to claim 4, wherein:
the acquiring the RGB data of the mixing element includes:
Acquiring a current output video frame corresponding to the current source video frame and second position data of the mixed element in the current output video frame; the current output video frame is provided with an image of a current source video frame and an image of the mixed element at intervals;
and acquiring RGB pixel values of each pixel of the image of the mixed element in the current output video frame by using the second position data.
6. The rendering processing method according to claim 5, wherein:
the rendering the mixed texture image in the third texture image of the current source video frame according to the first position data comprises the following steps:
Creating a third blank texture image;
And acquiring RGB data of the current source video frame, and rendering the RGB data of the current source video frame on the third blank texture image to obtain the third texture image.
7. The rendering processing method according to claim 6, wherein:
The obtaining the RGB data of the current source video frame includes:
and acquiring RGB pixel values of each pixel of the image of the current source video frame in the current output video frame.
8. The rendering processing method according to claim 7, wherein:
the obtaining RGB data of the source video frame, and rendering the RGB data of the source video frame on the third blank texture image to obtain the third texture image includes:
Acquiring pixel values of pixels of a control element corresponding to the current source video frame from the current output video frame; the current output video frame is provided with an image of the current source video frame, an image of the mixed element and an image of the control element at intervals;
And processing each pixel of the third blank texture image rendered with the RGB data of the current source video frame by utilizing transparency information corresponding to the pixel value of each pixel of the control element so as to obtain the third texture image.
9. The rendering processing method according to claim 8, wherein:
The current output video frame comprises a first area, a second area and a third area which are arranged at intervals, wherein the first area is used for setting images of the current source video frame, the second area is used for setting images of all the mixed elements corresponding to the current source video frame at intervals, and the third area is used for setting images of the control elements corresponding to the current source video frame; the length of the current output video frame is twice the length of the current source video frame, the width of the current output video frame is the same as the width of the current source video frame, the length and the width of the first area are corresponding to the length and the width of the current source video frame, and the length of the second area and the length of the third area are corresponding to the length of the current source video frame.
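For illustration only and not as part of the claim, the sketch below slices an output video frame laid out as described above into its three regions; the claim does not state how the second and third regions divide the remaining half of the frame, so the even split along the width used here is an assumption, as are all names in the code.

```python
import numpy as np

def split_output_frame(output_frame: np.ndarray):
    """Slice an output video frame with the layout described in claim 9.

    output_frame : (H, 2L, 3) uint8 array, where H is the source frame's width
                   (vertical extent) and 2L is twice the source frame's length.
    The left half holds the image of the current source video frame; how the
    right half is shared between the mixed elements and the control element is
    an assumption (an even split along the width is used here).
    """
    h, total_len = output_frame.shape[:2]
    src_len = total_len // 2

    first_region = output_frame[:, :src_len]   # image of the current source video frame
    right_half = output_frame[:, src_len:]
    second_region = right_half[:h // 2]        # images of the mixed elements (assumed upper part)
    third_region = right_half[h // 2:]         # image of the control element (assumed lower part)
    return first_region, second_region, third_region
```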
10. The rendering processing method according to claim 5, wherein:
the obtaining a second texture image of the insert element corresponding to the mixed element includes:
creating a second blank texture image;
And acquiring RGB data of the inserted element, and rendering the RGB data of the inserted element on the second blank texture image to obtain the second texture image.
11. The rendering processing method according to claim 10, wherein:
the obtaining the RGB data of the inserted element comprises:
Acquiring index information of the mixing element corresponding to the current source video frame according to the serial number of the current output video frame;
and acquiring the RGB data of the corresponding inserted element by using the address indicated by the index information.
12. The rendering processing method according to claim 11, wherein:
the obtaining RGB data of the corresponding insert element using the address indicated by the index information includes:
inquiring the local address of the corresponding inserted element in a Map structure array by utilizing the index information, wherein the Map structure array is used for storing the index information and the local address of the corresponding inserted element in a correlated manner;
And acquiring RGB data of the insert element pointed by the local address.
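Purely as an illustration of this lookup, the sketch below uses a Python dict to stand in for the Map structure array that associates index information with the local address of the corresponding inserted element; the example keys and file paths are hypothetical and not taken from the patent.

```python
from pathlib import Path

# Stand-in for the Map structure array: index information -> local address of
# the corresponding inserted element (keys and paths below are hypothetical).
insert_element_map = {
    "gift_icon_01": Path("/data/inserts/gift_icon_01.png"),
    "avatar_42": Path("/data/inserts/avatar_42.png"),
}

def load_insert_element(index_info: str) -> bytes:
    """Query the local address by the index information, then read the image
    data of the inserted element pointed to by that address (decoding the
    bytes into RGB pixel data is omitted here)."""
    local_address = insert_element_map[index_info]
    return local_address.read_bytes()
```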
13. The rendering processing method according to claim 11, wherein:
The obtaining the current output video frame corresponding to the current source video frame includes:
Receiving a dynamic resource file from a server;
Analyzing the dynamic resource file to obtain an output video file and description information;
and extracting the current output video frame from the output video file, and extracting the first position data, the second position data and the index information corresponding to the current source video frame from the description information.
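The claim does not fix a container format for the dynamic resource file, so the sketch below simply assumes a ZIP archive that holds an output video file and a JSON description; every file name and field name in it is a hypothetical placeholder used only to illustrate the parsing step.

```python
import json
import zipfile

def parse_dynamic_resource(path: str):
    """Split a dynamic resource file into the output video file and the
    description information. Assumes a ZIP archive containing 'output.mp4'
    and 'description.json'; both names are placeholders."""
    with zipfile.ZipFile(path) as archive:
        video_bytes = archive.read("output.mp4")
        description = json.loads(archive.read("description.json"))

    # Each per-frame entry is assumed to carry the first position data, the
    # second position data and the index information, keyed by frame number.
    frames = description.get("frames", [])
    return video_bytes, frames
```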
14. An electronic terminal comprising a processor, a memory, and a communication circuit; the memory and the communication circuit are coupled to the processor, the memory storing a computer program that is executable by the processor to implement the rendering processing method of any one of claims 1-13.
15. A computer-readable storage medium, storing a computer program executable by a processor to implement the rendering processing method of any one of claims 1-13.
CN202210365078.8A 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium Active CN114915839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210365078.8A CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210365078.8A CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114915839A CN114915839A (en) 2022-08-16
CN114915839B true CN114915839B (en) 2024-04-16

Family

ID=82763695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210365078.8A Active CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114915839B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112714357A (en) * 2020-12-21 2021-04-27 北京百度网讯科技有限公司 Video playing method, video playing device, electronic equipment and storage medium
CN113457160A (en) * 2021-07-15 2021-10-01 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114286175A (en) * 2021-12-23 2022-04-05 天翼视讯传媒有限公司 Method for playing video mosaic advertisement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140015826A1 (en) * 2012-07-13 2014-01-16 Nokia Corporation Method and apparatus for synchronizing an image with a rendered overlay

Also Published As

Publication number Publication date
CN114915839A (en) 2022-08-16

Similar Documents

Publication Publication Date Title
WO2021147657A1 (en) Frame interpolation processing method and related product
US11106421B2 (en) Display method and system for wireless intelligent multi-screen display
JP7270661B2 (en) Video processing method and apparatus, electronic equipment, storage medium and computer program
CN110297917B (en) Live broadcast method and device, electronic equipment and storage medium
CN114363696B (en) Display processing method for inserting video support element, electronic terminal and storage medium
EP4231650A1 (en) Picture display method and apparatus, and electronic device
CN106723987A (en) Intelligent platform
CN114428597A (en) Multi-channel terminal screen projection control method and device, screen projector and storage medium
CN101964110A (en) Method and system for creating an image
CN110958464A (en) Live broadcast data processing method and device, server, terminal and storage medium
CN116668732A (en) Virtual lamplight rendering method, equipment and storage medium for live broadcasting room
US20120127280A1 (en) Apparatus and method for generating three dimensional image in portable terminal
CN113645476B (en) Picture processing method and device, electronic equipment and storage medium
CN114915839B (en) Rendering processing method for inserting video support element, electronic terminal and storage medium
JP5224352B2 (en) Image display apparatus and program
CN114928748A (en) Rendering processing method, terminal and storage medium of dynamic effect video of virtual gift
US11202028B2 (en) Display device configuring multi display system and control method thereof
CN107408024A (en) Cross display device
CN112492331B (en) Live broadcast method, device, system and storage medium
CN212519189U (en) VR (virtual reality) host supporting multi-mode video sharing and video sharing system
CN116896664A (en) Rendering processing method, terminal, server and storage medium for dynamic video
CN113625983A (en) Image display method, image display device, computer equipment and storage medium
CN112770167A (en) Video display method and device, intelligent display terminal and storage medium
KR102538479B1 (en) Display apparatus and method for displaying
CN113170229B (en) Display device, server, electronic device and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant