CN114915839A - Rendering processing method for inserting video support elements, electronic terminal and storage medium - Google Patents

Rendering processing method for inserting video support elements, electronic terminal and storage medium

Info

Publication number
CN114915839A
CN114915839A (application CN202210365078.8A)
Authority
CN
China
Prior art keywords
texture image
video frame
source video
mixed
current source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210365078.8A
Other languages
Chinese (zh)
Other versions
CN114915839B (en)
Inventor
利进龙
郭亚斌
袁小明
甘鹏龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210365078.8A
Publication of CN114915839A
Application granted
Publication of CN114915839B
Legal status: Active
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a rendering processing method for video support element insertion, an electronic terminal, and a storage medium. The method comprises the following steps: acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data indicating the corresponding position of the mixed element in the current source video frame; acquiring a second texture image of an insertion element corresponding to the mixed element; performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image; and rendering the mixed texture image in a third texture image of the current source video frame according to the first position data, until rendering of all source video frames of the source video is completed. By means of the method, elements can be effectively inserted into video frames by rendering while a video is played during live broadcast, achieving a dynamic element effect within the video.

Description

Rendering processing method for inserting video support elements, electronic terminal and storage medium
Technical Field
The present application relates to the field of live broadcast technologies, and in particular, to a rendering processing method for inserting a video support element, an electronic terminal, and a storage medium.
Background
With the development of internet and communication technologies, society has entered an age of intelligent interconnection, and interacting, being entertained, and working over the internet have become increasingly common. Live broadcast technology is among the most widespread of these: people can watch or host live broadcasts through smart devices anytime and anywhere, which greatly enriches people's lives and broadens their horizons.
During live broadcasting, users are attracted to watch by presenting various dynamic effects on the live broadcast interface. Video playback is often involved in live broadcasting; for example, gift animations can be played as videos, since the video animation form can present more detail. However, it is generally difficult to insert corresponding elements at rendering time after the video has been produced.
Disclosure of Invention
The present application mainly solves the technical problem of providing a rendering processing method for video support element insertion, an electronic terminal, and a storage medium that are capable of inserting elements into source video frames.
In order to solve the technical problem, one technical solution adopted by the application is as follows: a rendering processing method for video support element insertion is provided, the method comprising the following steps: acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data indicating the corresponding position of the mixed element in the current source video frame; acquiring a second texture image of an insertion element corresponding to the mixed element; performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image; and rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until rendering of all source video frames of the source video is completed.
In order to solve the above technical problem, another technical solution adopted by the present application is: an electronic terminal is provided that includes a processor, a memory, and communication circuitry; the memory and the communication circuit are coupled to the processor, the memory stores a computer program, and the processor can execute the computer program to realize the rendering processing method of video support element insertion provided by the application.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer-readable storage medium storing a computer program executable by a processor to implement a video support element insertion rendering processing method as provided in the present application.
The beneficial effect of this application is as follows. Different from the prior art, the method acquires a first texture image of a mixed element corresponding to a current source video frame of a source video, acquires a second texture image of an insertion element corresponding to the mixed element, performs mixed rendering on the first texture image and the second texture image to obtain a mixed texture image, and renders the mixed texture image in a third texture image of the current source video frame according to first position data, until rendering of all source video frames of the source video is completed. The first position data can represent the position at which the insertion element needs to be inserted in the source video frame, and the third texture image can be a texture image obtained by rendering the current source video frame. Because the mixed texture image is obtained by mixed rendering of the first texture image of the mixed element and the second texture image of the insertion element, and the mixed texture image is then mixed-rendered with the third texture image at the position corresponding to the first position data, dynamic insertion of elements during rendering of the source video frames can be realized better and more effectively. Moreover, since the first position data indicating the corresponding position of the mixed element in the current source video frame is acquired in advance, images can be quickly rendered to the corresponding positions during rendering, which reduces asynchrony in the dynamic element insertion process, improves the user's experience when viewing dynamically inserted images, and helps improve user stickiness.
Drawings
FIG. 1 is a schematic system composition diagram of a live broadcast system to which an embodiment of a rendering processing method for video support element insertion is applied;
FIG. 2 is a flowchart illustrating an embodiment of a rendering processing method for inserting a video support element according to the present application;
FIG. 3 is a diagram of a material image according to an embodiment of the rendering processing method for inserting a video support element of the present application;
FIG. 4 is a schematic diagram of an output video frame according to an embodiment of a rendering processing method for inserting a video support element of the present application;
FIG. 5 is a timing diagram illustrating an embodiment of a rendering processing method for inserting video support elements according to the present application;
FIG. 6 is a schematic diagram of a first texture image and a second texture image being rendered in a mixed manner to obtain a mixed texture image according to an embodiment of a rendering processing method for video support element insertion according to the present application;
FIG. 7 is a schematic diagram of a third blank texture image rendered with the current source video frame being mixed-rendered with a control element to obtain a third texture image, according to an embodiment of the rendering processing method for video support element insertion of the present application;
FIG. 8 is a schematic block diagram of a circuit configuration of an embodiment of an electronic terminal of the present application;
FIG. 9 is a schematic block diagram of the circuit structure of a computer readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventors of the present application found that, in scenes such as live broadcast, dynamic elements such as text and pictures need to be added to MP4 video animations in order to enrich the live broadcast effect. The current approach is to superimpose a native view on the player view when the MP4 resource is played, or to achieve the effect using an MP4 + SVGA/Y2A combination, so that two different views often exist and the resources are played out of sync, which degrades the user's experience when watching the live broadcast. In order to insert dynamic elements during the video rendering process, the present application proposes the following embodiments.
As shown in fig. 1, the rendering processing method for inserting a video support element of the present application may be applied to a live system 1. Specifically, the live system 1 may include a server 10, an anchor terminal 20, a viewer terminal 30, and a configuration terminal 40. The anchor terminal 20, the viewer terminal 30, and the configuration terminal 40 may be electronic terminals; specifically, the anchor terminal 20 and the viewer terminal 30 are electronic terminals installed with corresponding client programs, that is, client terminals. An electronic terminal can be a mobile terminal, a computer, a server, or another terminal; the mobile terminal can be a mobile phone, a notebook computer, a tablet computer, an intelligent wearable device, and the like, and the computer can be a desktop computer and the like.
The server 10 may pull a live data stream from the anchor terminal 20 and, after corresponding processing, push the obtained live data stream to the viewer terminal 30. After the viewer terminal 30 acquires the live data stream, the live broadcast of the anchor or a guest can be watched. Mixing of live data streams may occur at at least one of the server 10, the anchor terminal 20, and the viewer terminal 30. Video or voice connections may be made between anchor terminals 20, and between an anchor terminal 20 and a viewer terminal 30. During video co-streaming (mic-linking), each connected party may push a live data stream including a video stream to the server 10, which then pushes the corresponding live data to the other connected parties and to the viewer terminals 30. The anchor terminal 20 and the viewer terminal 30 can then display the respective live pictures in the live room.
Of course, the anchor terminal 20 and the viewer terminal 30 are relative, and the terminal in the live broadcasting process is the anchor terminal 20, and the terminal in the live broadcasting watching process is the viewer terminal 30. The configuration terminal 40 is used for configuring a client, a server, and the like, for example, configuring various functions such as an interface layout, a gift special effect, and animation displayed in the client. The configuration terminal 40 may complete the production of the corresponding project and send the project to the server 10, and the server 10 sends the project to the client terminal, so as to implement the background configuration of the client. Similarly, the client terminal may process and display the data sent from the server 10 for the viewer to watch.
As shown in fig. 2, in this embodiment of the rendering processing method for inserting a video support element, a client terminal may serve as the execution subject. The present embodiment may include the following steps. S100: A first texture image of a mixed element corresponding to a current source video frame of a source video is obtained, along with first position data indicating the corresponding position of the mixed element in the current source video frame. S200: A second texture image of an insertion element corresponding to the mixed element is obtained. S300: The first texture image and the second texture image are mixed-rendered to obtain a mixed texture image. S400: The mixed texture image is rendered in a third texture image of the current source video frame according to the first position data, until rendering of all source video frames of the source video is completed.
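For orientation only, the following Python sketch shows the per-frame data flow of S100 to S400. The dictionary keys and injected callables (get_insert_texture, blend_textures, draw_into) are illustrative assumptions, not part of the disclosed method; the individual steps are fleshed out in later sketches in this description.

```python
def render_with_inserted_elements(source_frames, frame_info, get_insert_texture,
                                  blend_textures, draw_into):
    """Per-frame data flow of S100-S400. The callables are hypothetical stand-ins for the
    client's texture operations (normally executed on the GPU), injected here so the
    sketch stays self-contained."""
    rendered_frames = []
    for frame_index, source_frame in enumerate(source_frames):
        info = frame_info[frame_index]                      # parsed from the description information
        first_texture = info["blend_texture"]               # S100: texture image of the mixed element
        first_position = info["first_position"]             # S100: where it sits in the source frame
        second_texture = get_insert_texture(info["index"])  # S200: insertion element matched by index info
        mixed_texture = blend_textures(first_texture, second_texture)           # S300
        third_texture = draw_into(source_frame, mixed_texture, first_position)  # S400
        rendered_frames.append(third_texture)
    return rendered_frames
```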
The method comprises obtaining a first texture image of a mixed element corresponding to a current source video frame of a source video, obtaining a second texture image of an insertion element corresponding to the mixed element, performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image, and rendering the mixed texture image in a third texture image of the current source video frame according to first position data, until rendering of all source video frames of the source video is completed. The first position data can represent the position at which the insertion element needs to be inserted in the source video frame, and the third texture image can be a texture image obtained by rendering the current source video frame. Because the mixed texture image is obtained by mixed rendering of the first texture image of the mixed element and the second texture image of the insertion element, and the mixed texture image is then mixed-rendered with the third texture image at the position corresponding to the first position data, dynamic insertion of elements during rendering of the source video frames can be realized better and more effectively. Moreover, since the first position data indicating the corresponding position of the mixed element in the current source video frame is acquired in advance, images can be quickly rendered to the corresponding positions during rendering, which reduces asynchrony in the dynamic element insertion process, improves the user's experience when viewing dynamically inserted images, and helps improve user stickiness.
The method described in this embodiment is applicable to a scenario in which the client terminal receives, via the server 10, the dynamic effect resource file configured on the configuration terminal 40, and then processes the dynamic effect resource file.
Before the dynamic resource file is transmitted to the client terminal via the server 10, it is necessary to generate the dynamic resource file through the configuration terminal 40, which may include the following steps:
s010: first position data of a mixed element of each frame of material image in the material image is obtained to serve as position data of the mixed element in a source video frame of a source video corresponding to the material image.
When material resources are designed through the AE software, the configuration terminal 40 can read the material resources through the corresponding software. The material resources can be used to describe the corresponding insertion elements to be inserted into the source video, including the identification information, positions, and shapes corresponding to the insertion elements, so that the client terminal knows which elements are to be extracted and at which positions and in which shapes they are to be rendered in the source video frames.
And after the material resources are read, analyzing the material resources. Through analysis of the material images in the material resources, the positions of the mixing elements in each material image in the material image can be calculated, first position data of the mixing elements is obtained, and then the first position data of the mixing elements of all the material images of the project can be obtained. The position data of the corresponding insertion element in the source video frame is reflected/mapped by the first position data of the blend element in the corresponding material image, thereby enabling the client terminal to know to what position of the source video frame the extracted insertion element is to be rendered.
As for how to obtain the first position data of the mixed element in each material image, a first transformation matrix may be calculated that maps the relative change of the mixed element between the first frame material image and each subsequent frame material image. For example, given the frame rate fps required for rendering the whole animation item into the source video and the set animation duration, the whole process may be divided into fps × duration frames, that is, there may be fps × duration frames of material images, and the position of the mixed element on each frame of material image needs to be obtained. Specifically, by analyzing each frame of material image, the corresponding first transformation matrix may be generated; for example, a transformation matrix may be created for each frame of material image to describe the position transformation of the mixed element across the material images of different frames.
The first transformation matrix for each subsequent frame material image maps the relative change of the blending element of that subsequent frame material image relative to the same blending element of the first frame material image. For example, the total frame number of the material images is 10 frames, and then the 2 nd to 10 th frames respectively correspond to creating a first transformation matrix, and the first transformation matrix of the material images of different frames describes the relative change of the mixed elements of the material images relative to the mixed elements of the material images of the 1 st frame.
As for the specific creation process of the first transformation matrix, reference can be made to the following steps included in step S010:
s011: and acquiring the change data of the mixed elements of the material images of the subsequent frames relative to the mixed elements of the material images of the first frame in the translation dimension, the rotation dimension and the scaling dimension.
A subsequent frame material image is a material image in the same item that comes after the first frame material image; that is, any material image after the first frame material image is called a subsequent frame material image.
The change data may refer to the translation, rotation, and scaling of the mixed element across the material images of different frames. The change data may be the data of the mixed element of each frame of material image in the translation, rotation, and scaling dimensions that is determined or input when the material resource is designed, or it may be obtained by analyzing and calculating all frame material images of the designed material resource.
For example, at 0s, the anchor point of mixed element 1 is (0,0), position is (225,730), rotation angle is 0, and scaling is 60%. At 2s, anchor point of mixed element 1 is (0,0), position is (352,730), rotation angle is 0, and scaling is 60%.
The anchor point may be selected as the origin of the mixed element 1, that is, the position of the upper left corner of the layer range of the mixed element 1, where the anchor point is (0, 0). The position (225,730) and the position (352,730) are positions of the anchor points with respect to the origin of the material image, that is, positions with respect to the vertex in the upper left corner of the material image. The position can reflect the translation relation of the mixed elements, the rotation angle can reflect the rotation relation of the mixed elements, and the scaling reflects the scaling relation of the mixed elements. Thus, the change between 2s and 0s in the dimensions of translation, rotation and scaling can be calculated, and the change data of 2s relative to 0s can be obtained.
In summary, by calculating the change of the subsequent frame material image relative to the first frame material image in three dimensions, the corresponding change data can be obtained, and the change data is conveniently used for constructing the corresponding first transformation matrix.
S012: and calculating a first transformation matrix corresponding to each subsequent frame material image by using the change data corresponding to each subsequent frame material image.
After the change data of each subsequent frame material image is obtained, it can be used to calculate the corresponding first transformation matrix. After the first transformation matrix corresponding to each subsequent frame material image is calculated, the first position data corresponding to each subsequent frame material image may be calculated by using the first position data of the first frame material image, for which reference may be made to S013 below.
S013: and obtaining first position data of the mixed element in the first frame material image, and obtaining the first position data of the mixed element in each subsequent frame material image after the first position data of the mixed element in the first frame material image is respectively transformed by a first transformation matrix corresponding to each subsequent frame material image.
And acquiring first position data of the mixed elements of the first frame material image, namely initial position data of the mixed elements for realizing the animation process. And transforming the first position data of the first frame material image by the first transformation matrix of each subsequent frame material image to obtain the first position data of the same mixed element of each subsequent frame material image. In this way, the first position data of the mixed element of each subsequent frame material image can be quickly calculated by using the first transformation matrix, that is, the position of the material image presented is converted into the position data which can be read and calculated by the computer.
Specifically, the specific process of calculating the first position data corresponding to each subsequent frame material image by using the first position data corresponding to the first frame material image can be seen in the following steps included in S013:
s0131: and determining that each vertex coordinate of the mixed element in the first frame material image corresponds to first coordinate data so as to obtain first position data of the mixed element in the first frame material image.
For example, the layer range can be obtained according to the width, height, and position (e.g., anchor position) of the mixed element of the first frame material image: upper left vertex, lower left vertex, upper right vertex, and lower right vertex. Thus, the coordinates of each vertex can be calculated to obtain the first coordinate data. The first position data may be embodied by the first coordinate data. Taking the mixed element 1 shown in diagram A of fig. 3 as an example, the coordinates of the 4 vertices of mixed element 1 in the first frame material image can be determined in the manner described above.
S0132: and multiplying the first coordinate data by a first transformation matrix corresponding to each subsequent frame material image respectively to obtain transformation coordinate data of the mixed elements in each subsequent frame material image.
And calculating the transformation coordinate data of the subsequent frame material image by combining the first transformation matrix calculated in the step and multiplying the first coordinate data by the first transformation matrix corresponding to the subsequent frame material image. Specifically, the four vertices calculated above are multiplied by the first transformation matrix, respectively, to obtain new four vertex positions corresponding to the mixed elements of the frame material image, and the positions of the four vertices are not necessarily the vertex positions of the layer range of the mixed elements of the frame material image.
For example, for the mixed element 1 illustrated in diagram B of fig. 3, the mixed element 1 of a certain subsequent frame material image has been moved and rotated relative to the mixed element 1 of the first frame material image. The square mixed element 1 in the first frame material image, once rotated, presents a rhombus-like shape in that subsequent frame material image, but the layer ranges of the two differ: the layer range of the mixed element 1 in the first frame material image is defined by the four vertices of the square, while in the subsequent material image the layer range of the mixed element 1 is defined by the dashed frame shown in the figure. Therefore, the layer range of the mixed element of that frame material image is determined according to the newly obtained four vertex positions.
S0133: and determining second coordinate data corresponding to each vertex coordinate of the mixed element in each subsequent frame material image by using the transformed coordinate data of the mixed element so as to obtain first position data of the mixed element in each subsequent frame material image.
Specifically, the maximum value and the minimum value on the abscissa and the maximum value and the minimum value on the ordinate of the mixed element of the frame material image may be determined in the transformed coordinate data obtained in the above step. And determining the layer range of the mixed element by using the maximum value and the minimum value of the mixed element of the frame material image on the abscissa and the maximum value and the minimum value on the ordinate.
For example, in the new vertex position corresponding to the transformed coordinate data, the maximum value and the minimum value of the x and y axes are determined, and 4 numerical values are (minX, minY, maxX, maxY). Determining new four vertex positions of the layer range of the mixed elements of the subsequent frame material image: upper left: (minX, minY); left lower: (minX, maxY); upper right: (maxX, minY); right lower: (maxX, maxY), i.e., second coordinate data. The first position data of the blend element in the subsequent frame material image may be represented by the second coordinate data.
After the first position data of the mixed element of each frame of the material image is acquired, each first position data is stored and recorded as the display position of the mixed element in the source video frame corresponding to the frame of the material image. By analogy, four vertex positions of all mixed elements in each frame of material image can be calculated and stored in a dictionary.
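As an illustrative aid, the following Python sketch (using numpy) walks through S011 to S013: it builds a first transformation matrix from assumed change data in the translation, rotation, and scaling dimensions, multiplies the four vertex coordinates of the first frame material image by it, and derives the second coordinate data from the minimum and maximum values, as in S0131 to S0133. The matrix layout, the rotation about the layer origin, and the example layer size are assumptions for illustration only, not the patent's own implementation.

```python
import math
import numpy as np

def first_transformation_matrix(dx, dy, angle_deg, scale):
    """Build a 3x3 homogeneous matrix from the change data of one subsequent frame
    relative to the first frame (translation, rotation, and scaling dimensions)."""
    cos_a, sin_a = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    rotate_scale = np.array([[scale * cos_a, -scale * sin_a, 0.0],
                             [scale * sin_a,  scale * cos_a, 0.0],
                             [0.0,            0.0,           1.0]])
    translate = np.array([[1.0, 0.0, dx],
                          [0.0, 1.0, dy],
                          [0.0, 0.0, 1.0]])
    return translate @ rotate_scale

def transform_layer_vertices(first_coords, matrix):
    """Multiply the four vertex coordinates of the mixed element in the first frame (S0131)
    by the first transformation matrix (S0132), then rebuild an axis-aligned layer range
    from the min/max coordinates to get the second coordinate data (S0133)."""
    homogeneous = np.hstack([first_coords, np.ones((4, 1))])       # rows of (x, y, 1)
    transformed = (matrix @ homogeneous.T).T[:, :2]
    min_x, min_y = transformed.min(axis=0)
    max_x, max_y = transformed.max(axis=0)
    # order follows the text: upper-left, lower-left, upper-right, lower-right
    return np.array([[min_x, min_y], [min_x, max_y], [max_x, min_y], [max_x, max_y]])

# Example with the figures given in the text: the element's position moves from x=225 to
# x=352 between 0 s and 2 s, with no rotation and unchanged scaling; a 100x100 layer is assumed.
vertices_0s = np.array([[225.0, 730.0], [225.0, 830.0], [325.0, 730.0], [325.0, 830.0]])
m = first_transformation_matrix(dx=352 - 225, dy=0, angle_deg=0, scale=1.0)
print(transform_layer_vertices(vertices_0s, m))   # vertices shifted by 127 on the x axis
```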
S020: and sending the video data of the source video and the first position data to a server, and forwarding the video data and the first position data to the client terminal through the server.
After the first position data is obtained, the video data of the source video and the first position data may be output. The server 10 may forward the received data to the client terminal for rendering on the client terminal. After receiving the video data of the source video and the first position data via the server 10, the client terminal obtains the insertion element corresponding to the mixing element, and renders the insertion element into a corresponding source video frame according to the corresponding first position data when rendering the source video in the live broadcast picture. The output video frame and the second position data may be acquired before outputting the video data of the source video and the first position data. Specifically, an output canvas corresponding to an output video frame may be divided into a first region and a second region which are arranged at intervals. Optionally, the number of output video frames is the same as the number of source video frames, i.e. the output video frames and the source video frames correspond one to one. A first region of the output video frame displays the corresponding source video frame and a second region of the output video frame displays the respective blending element to be inserted into the source video frame. The second area is used for storing the blending elements corresponding to the source video frame, so that the client terminal can specifically identify the blending elements.
When determining an output video frame, as shown in fig. 4, the source video frame may be displayed in the first region of the output canvas, each mixed element in the material image corresponding to the source video frame may be sequentially copied to the second region of the output canvas to form the output video frame, and the second position data of each mixed element in the second region of the output canvas may be recorded. Specifically, the output video frame and the second position data of the inserted layers in the second region may be acquired through a corresponding plug-in/extension function of the AE software, and the second position data may be expressed by the coordinate data of the inserted layers in the output video frame. Description information is then generated from the first position data of the mixed elements in the source video frame and their second position data in the output video frame. In particular, the data in the description information can be stored in json array form; one or more mixed elements can be contained, and each mixed element has information such as width, height, index information, and type. The description information may further include the first position data and the second position data stored in json array form, and the json array storing the first position data and the second position data may be named data, so that they can be acquired more quickly. Finally, all output video frames are converted into an output video in sequence to obtain the video data of the output video, namely the dynamic effect resource file. In particular, the dynamic effect resource file may be an MP4 resource file.
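For concreteness, the parsed description information for one output video frame might look like the following Python literal. The field names and nesting are assumptions based on the text (width, height, index information, type, and a data entry holding the first and second position data), not a documented schema.

```python
# Hypothetical shape of the parsed description information for one output video frame.
# "data" holds the first position data (vertices in the source frame) and the second
# position data (vertices in the second region of the output frame) of each mixed element.
description_info = {
    "frame": 12,
    "elements": [
        {
            "width": 100,
            "height": 100,
            "index": 3,              # index information linking this mixed element to an insertion element
            "type": "text",          # e.g. a text or picture insertion element
            "data": {
                "first_position": [[225, 730], [225, 830], [325, 730], [325, 830]],
                "second_position": [[40, 40], [40, 140], [140, 40], [140, 140]],
            },
        }
    ],
}
```

With the description information deserialized into such a structure on the client side, step S130 below reduces to a per-frame dictionary lookup.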
As shown in fig. 5, after the client terminal acquires the dynamic resource file sent by the configuration terminal 40 via the server 10, the processing of the dynamic resource of the client terminal may refer to the following steps that may be included in this embodiment:
S100: A first texture image of a mixed element corresponding to a current source video frame of a source video is obtained, along with first position data indicating the corresponding position of the mixed element in the current source video frame.
The first texture image of the mixed element corresponding to the current source video frame of the source video may be an image obtained by rendering the mixed element. The first position data for the blending element corresponding to the corresponding position in the current source video frame may represent a position that needs to be inserted when inserting the blending element into the current source video frame.
In one implementation, the following steps are included before S100:
s110: and receiving the dynamic resource file from the server.
After the configuration terminal 40 obtains the dynamic resource file through the above configuration, the dynamic resource file is sent to the server 10 and forwarded to the client terminal through the server 10, and the client terminal receives the dynamic resource file from the server 10. The client terminal receives the dynamic effect resource file and processes the dynamic effect resource file, so that the insertion element is inserted into the video frame when the video is played.
S120: and analyzing the dynamic effect resource file to obtain an output video file and description information.
And after receiving the dynamic effect resource file, the client terminal analyzes the dynamic effect resource file to obtain an output video file and description information. The output video file may be a video file consisting of a number of output video frames and the description information may be generated by mixing elements with first position data in the source video frames and second position data in the output video frames. Specifically, after receiving the dynamic resource file, the client terminal parses the dynamic resource file, and extracts the description information in the header file of the output video file and the dynamic resource file.
S130: and extracting the current output video frame from the output video file, and extracting the first position data, the second position data and the index information corresponding to the current source video frame from the description information.
The currently output video frame may include a first area, a second area and a third area, which are arranged at intervals, as shown in fig. 3, specifically, the first area is used for setting an image of the current source video frame, the second area is used for arranging an image of each mixing element corresponding to the current source video frame at intervals, and the third area is used for setting an image of a control element corresponding to the current source video frame. The length of the current output video frame may be twice the length of the current source video frame, and the length and width of the first region correspond to and coincide with the length and width of the current source video frame.
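Under the layout just described (the output video frame twice as long as the current source video frame, with the first region coinciding with the source frame), a decoded output frame could be split as sketched below; how the remaining half is further divided between the second and third regions is assumed to be given by the description information.

```python
import numpy as np

def split_output_frame(output_frame: np.ndarray):
    """Return the first region (the source frame image) and the remaining half that holds
    the mixed elements (second region) and the control element (third region)."""
    width = output_frame.shape[1]
    first_region = output_frame[:, : width // 2]      # same width and height as the source frame
    remainder = output_frame[:, width // 2 :]         # second + third regions, split per description info
    return first_region, remainder
```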
After the description information is analyzed from the motion resource file, the first position data, the second position data and the index information corresponding to the current source video frame can be extracted from the description information, the mixing element and the inserting element corresponding to each other can be inserted into the current source video frame based on the first position data, the second position data and the index information, and therefore the phenomenon that the source video frame and the inserting element are not displayed synchronously in the video playing process can be reduced.
In one implementation, S100 may include the steps of:
s140: a first blank texture image is created.
The first blank texture image may be a blank texture image for rendering resulting in a first texture image, and in particular, the first texture image may be resulting from rendering a blend element onto the first blank texture image. In the process of creating the first blank texture image, the first blank texture image may be created based on the width and height of the mixed elements through a preset plug-in tool.
S150: and acquiring RGB data of the mixed elements, and rendering the RBG data of the mixed elements on the first blank texture image to obtain a first texture image.
In the process of manufacturing the first texture image, the RGB data may be obtained by obtaining the mixed elements, and the RGB data of the mixed elements is rendered on the first blank texture image to obtain the first texture image.
For how to acquire the RGB data of the mixed elements, reference may be made to the following steps included in S150:
s151: and acquiring a current output video frame corresponding to the current source video frame and second position data of the mixing element in the current output video frame.
Because the image of the current source video frame and the image of the mixing element are arranged on the current output video frame at intervals, and the mixing element, the current source video frame and the current output video frame are in one-to-one correspondence, in the process of acquiring the RGB data of the mixing element, the RGB data of the mixing element in the current output video frame can be acquired by determining the current output video frame corresponding to the current source video frame and then acquiring the second position data of the mixing element in the current output video frame, and further the RGB data of the mixing element can be acquired by utilizing the second position data.
S152: the RGB pixel values of the pixels of the image of the mixed element are acquired in the current output video frame using the second position data.
The second position data may represent the position of the mixing element in the current output video frame. Since the image of the mixing element has only two colors, black and white, the pixel value corresponding to black is 0 and the pixel value corresponding to white is 255, so the pixels stored for the mixing element take only the values 0 and 255. Therefore, the RGB value of each pixel of the image of the mixing element can be determined from the second position data of the mixing element in the current output video frame.
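A minimal sketch of S151 and S152, assuming the decoded current output video frame is available as a numpy RGB array and that the second position data lists the four vertex coordinates of an axis-aligned rectangle:

```python
import numpy as np

def read_blend_element_pixels(output_frame: np.ndarray, second_position) -> np.ndarray:
    """Crop the mixed element's black-and-white image out of the current output video frame
    using its second position data (assumed here to be the four vertex coordinates of an
    axis-aligned rectangle in the second region of the output canvas)."""
    xs = [int(x) for x, _ in second_position]
    ys = [int(y) for _, y in second_position]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    return output_frame[top:bottom, left:right]   # RGB pixel values, only 0 or 255 per channel

# Usage sketch: crop a 100x100 mixed element whose rectangle starts at (40, 40).
frame = np.zeros((720, 2560, 3), dtype=np.uint8)
mask = read_blend_element_pixels(frame, [[40, 40], [40, 140], [140, 40], [140, 140]])
print(mask.shape)   # (100, 100, 3)
```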
S200: a second texture image of the insertion element corresponding to the blend element is obtained.
The insertion element may be an element that needs to be inserted into the current source video frame, and the insertion element is in a one-to-one correspondence with the blending element. In the process of inserting the insertion element into the current source video frame, the insertion element and the mixing element are mixed and rendered, and then the mixed and rendered image and the current source video are mixed and rendered, so that the corresponding insertion element and the corresponding mixing element are inserted into the current source video frame, and the condition that the insertion element and the video frame are displayed asynchronously in the video playing process is favorably reduced.
In one implementation, S200 may include the steps of:
s210: a second blank texture image is created.
The second blank texture image may be a blank texture image for rendering resulting in a second texture image, and in particular, the second texture image may be obtained by rendering an insert element on the second blank texture image. In the process of creating the second blank texture image, the second blank texture image may be created based on the width and height of the insert element by a preset plug-in.
S220: and acquiring RGB data of the inserted elements, and rendering the RGB data of the inserted elements on the second blank texture image to obtain a second texture image.
In the process of making the second texture image, the second texture image may be obtained by obtaining RGB data of the inserted element and rendering the RGB data of the inserted element on the second blank texture image.
As to how to acquire RGB data of the insertion element, reference may be made to the following steps included in S220:
s221: and acquiring index information of the mixing element corresponding to the current source video frame according to the sequence number of the current output video frame.
Because the current source video frame is arranged in the current output video frame, the sequence numbers of the current source video frame and the current output video frame are the same, and the corresponding current source video frame can be determined according to the sequence number of the current output video frame. Meanwhile, the current source video frame is in one-to-one correspondence with the mixing elements, and each mixing element can have unique corresponding index information, so that the index information of the mixing element corresponding to the current source video frame can be determined. The index information may be information extracted from the description information for associating a blend element with an insertion element, and by determining the index information of the blend element corresponding to the current source video frame, the insertion element corresponding to the index information may be determined, thereby determining the insertion element corresponding to the current source video frame, and thus RGB data of the insertion element can be acquired.
S222: the RGB data of the corresponding insertion element is acquired using the address indicated by the index information.
Since the index information may associate the insertion element and the mixture element, that is, the insertion element may be determined by the address indicated by the determined index information of the mixture element, and then the RGB data of the insertion element is determined, it is possible to render the RGB data of the insertion element on the second blank texture image to obtain the second texture image.
As to how to acquire RGB data of a corresponding insertion element using an address indicated by the index information, reference may be made to the following steps included in S222:
s2221: and querying the local address of the corresponding insert element in the Map structure array by using the index information.
The Map structure array may be an array obtained from a service side preset in the client terminal, and used for associating and storing the index information and the local address of the corresponding insertion element. Specifically, a Key in the Map structure array may be used to store index information, and a Value in the Map structure array may be used to store a local address of the insert element. And determining a Map structure array carrying corresponding index information by using the acquired index information of the mixed element corresponding to the current source video frame, and further acquiring the local address of the inserted element from the Map structure array.
S2222: RGB data of an insert element pointed to by a local address is obtained.
After the local address of the corresponding insertion element is determined by using the index information, RGB data of the insertion element to which the local address points may be acquired. After the RGB data of the inserted elements are obtained, the RGB data of the inserted elements can be rendered on the second blank texture image to obtain a second texture image.
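A sketch of S2221 and S2222, modelling the Map structure array as a plain Python dict whose keys are index information and whose values are local addresses; the file paths are hypothetical, and Pillow is assumed here only as one way of decoding the insertion element into RGB data.

```python
from PIL import Image
import numpy as np

# Hypothetical Map structure array preset by the business side: Key = index information of a
# mixed element, Value = local address (path) of the matching insertion element.
insert_element_map = {
    3: "/data/local/inserts/avatar_3.png",       # illustrative paths only
    7: "/data/local/inserts/nickname_7.png",
}

def get_insert_element_rgb(index_info: int) -> np.ndarray:
    """S2221/S2222: look up the local address by index information, then load the RGB data
    of the insertion element it points to."""
    local_address = insert_element_map[index_info]
    with Image.open(local_address) as image:
        return np.asarray(image.convert("RGB"))
```

Storing only local addresses in the map keeps the description information small; the actual pixel data of the insertion element is loaded on demand.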
S300: and performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image.
After a first texture image rendered by the RGB data of the mixed element and a second texture image rendered by the RGB data of the inserted element are acquired, the first texture image and the second texture image may be rendered in a mixed manner to obtain a mixed texture image. After the mixed texture image is obtained, the mixed texture image and the current source video frame can be subjected to mixed rendering, so that the mixed elements and the insertion elements can be inserted into the current source video frame, and the unsynchronized display condition of the insertion elements and the source video frame can be effectively reduced when the current source video frame is played.
In one implementation, S300 may include the steps of:
s310: and processing the second texture image into a shape matched with the shape of the first texture image to obtain a mixed texture image.
As shown in fig. 6, a in fig. 6 may represent the first texture image, b in fig. 6 may represent the second texture image, and c in fig. 6 may represent the mixed texture image. In the process of mixed rendering of the first texture image and the second texture image, the second texture image is processed, using the first texture image, into an image whose shape matches that of the first texture image, and this image serves as the mixed texture image.
For how the second texture image is processed to have a shape matching the shape of the first texture image, the following steps may be referenced:
s311: and processing each pixel of the second texture image by using transparency information corresponding to the pixel value of each pixel of the first texture image so as to enable the shape of the second texture image to be matched with the shape of the first texture image.
In the process of processing the second texture image into a shape matching that of the first texture image, each pixel of the second texture image is processed by using the transparency information corresponding to the pixel value of the corresponding pixel of the first texture image, so that the shape of the second texture image matches the shape of the first texture image. For example, as shown in fig. 6, if the second texture image is a color image, then since the first texture image contains only black and white, when a pixel of the second texture image is mixed with a black pixel of the first texture image, the mixed portion becomes transparent, and when it is mixed with a white pixel of the first texture image, the mixed portion is displayed.
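A CPU-side sketch of S310 and S311 using numpy, assuming both textures have the same size and that the black-and-white first texture acts as an alpha mask (black becomes transparent, white is shown); on the client this blending would normally be done in a fragment shader rather than on arrays.

```python
import numpy as np

def blend_first_and_second(first_texture: np.ndarray, second_texture: np.ndarray) -> np.ndarray:
    """Produce the mixed texture image: keep the insertion element's colour where the mixed
    element is white, make it transparent where the mixed element is black."""
    alpha = first_texture[..., :1].astype(np.float32) / 255.0      # 0.0 for black, 1.0 for white
    rgb = second_texture.astype(np.float32) * alpha                # shape of the first texture is imposed
    return np.concatenate([rgb, alpha * 255.0], axis=-1).astype(np.uint8)   # RGBA mixed texture

# Usage sketch: a white circle mask applied to a solid-colour stand-in for the insertion element.
mask = np.zeros((64, 64, 3), dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
mask[(yy - 32) ** 2 + (xx - 32) ** 2 <= 24 ** 2] = 255
colour = np.zeros((64, 64, 3), dtype=np.uint8)
colour[:] = (0, 128, 255)
mixed = blend_first_and_second(mask, colour)   # RGBA, visible only inside the circle
```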
S400: and rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until rendering of all source video frames of the source video is completed.
The first position data can represent the position at which the insertion element is to be inserted in the current source video frame. After the first texture image and the second texture image are mixed-rendered to obtain the mixed texture image, the mixed texture image can be rendered in the third texture image of the current source video frame according to the first position data, so that the insertion element is inserted into the current source video frame. The same processing is then performed for each subsequent source video frame until rendering of all source video frames of the source video is completed. In this way, asynchronous display of the insertion elements and the source video frames is reduced while the source video is playing, which improves the user's viewing experience and helps improve user stickiness.
In one implementation, for how to obtain the third texture image, the following steps may be referenced:
s410: a third blank texture image is created.
The third blank texture image may be a blank texture image used for rendering to obtain the third texture image, and specifically, the third texture image may be obtained by rendering RGB data of the current source video frame on the third blank texture image. In the process of creating the third texture image, a third blank texture image may be created based on the width and height of the current source video frame through a preset plug-in tool, specifically, the width of the third blank texture image may be half of the width of the current output video frame, and the height of the third blank texture image may be the same as the height of the current output video frame.
S420: and acquiring RGB data of the current source video frame, and rendering the RGB data of the current source video frame on the third blank texture image to obtain a third texture image.
In the process of making the third texture image, the third texture image may be obtained by obtaining RGB data of the current source video frame and rendering the RGB data of the current source video frame on the third blank texture image.
As to how to acquire RGB data of the current source video frame, reference may be made to the following steps included in S420:
S421: RGB pixel values of pixels of an image of a current source video frame are acquired in a current output video frame.
Since the image of the current source video frame is set in the first region of the corresponding current output video frame, when acquiring the RGB data of the current source video frame, the RGB values of each pixel of the image of the current source video frame can be acquired in the corresponding current output video frame.
In one implementation, for how to render the RGB data of the current source video frame on the third blank texture image, the following steps can be referenced:
s422: and acquiring the pixel value of each pixel of the control element corresponding to the current source video frame from the current output video frame.
Since the image of the current source video frame, the image of the mixed element, and the image of the control element are arranged on the current output video frame at intervals, the third blank texture image can be rendered by using the transparency information corresponding to the pixel value of each pixel of the control element by acquiring the pixel value of each pixel of the control element corresponding to the current source video frame from the current output video frame, and then the third texture image is obtained.
S423: and processing each pixel of a third blank texture image which renders the RGB data of the current source video frame by using the transparency information corresponding to the pixel value of each pixel of the control element to obtain a third texture image.
As shown in fig. 7, d in fig. 7 may represent the third blank texture image on which the RGB data of the current source video frame has been rendered, e in fig. 7 may represent the image of the control element, and f in fig. 7 may represent the third texture image. Each pixel of the third blank texture image on which the RGB data of the current source video frame has been rendered is processed with the transparency information corresponding to the pixel value of the corresponding pixel of the control element, so as to obtain the third texture image. Because the transparency information of the control element is applied pixel by pixel, the obtained third texture image meets the transparency requirement corresponding to the control element. The mixed element and the insertion element are thereby inserted into the current source video frame, which reduces asynchronous display of the insertion element and the source video frame during video playback, improves display synchronization during playback, improves the user's experience when watching live video, and improves user stickiness.
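A CPU-side numpy sketch of S410 to S423 together with the compositing of S400, under the assumptions that the control element is a grayscale transparency map aligned with the source frame, that the mixed texture carries its own alpha channel (as in the previous sketch), and that the first position data gives the top-left corner of an axis-aligned rectangle:

```python
import numpy as np

def build_third_texture(source_rgb: np.ndarray, control_element: np.ndarray) -> np.ndarray:
    """S421-S423: take the source frame's RGB data and attach per-pixel transparency read
    from the control element (255 = fully opaque, 0 = fully transparent)."""
    if control_element.ndim == 3:                  # collapse an RGB control image to one channel
        control_element = control_element[..., 0]
    alpha = control_element.astype(np.uint8)[..., np.newaxis]
    return np.concatenate([source_rgb, alpha], axis=-1)            # RGBA third texture image

def composite_mixed_texture(third_texture: np.ndarray, mixed_texture: np.ndarray,
                            first_position) -> np.ndarray:
    """S400: draw the RGBA mixed texture into the third texture at the rectangle given by the
    first position data (assumed axis-aligned, with the top-left vertex listed first)."""
    x, y = int(first_position[0][0]), int(first_position[0][1])
    h, w = mixed_texture.shape[:2]
    out = third_texture.copy()
    patch = out[y:y + h, x:x + w].astype(np.float32)
    a = mixed_texture[..., 3:].astype(np.float32) / 255.0          # alpha of the mixed texture
    patch[..., :3] = mixed_texture[..., :3] * a + patch[..., :3] * (1.0 - a)   # "over" blending
    out[y:y + h, x:x + w] = patch.astype(np.uint8)
    return out
```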
To sum up, in the rendering processing method for inserting a video support element provided in this embodiment, a first texture image of a mixed element corresponding to a current source video frame of a source video is obtained, a second texture image of an insertion element corresponding to the mixed element is obtained, the first texture image and the second texture image are mixed-rendered to obtain a mixed texture image, and the mixed texture image is rendered in a third texture image of the current source video frame according to first position data, until rendering of all source video frames of the source video is completed. Because the first position data can represent the position at which the insertion element needs to be inserted in the source video frame, and the third texture image may be a texture image obtained by rendering the current source video frame, the mixed texture image obtained by mixed rendering of the first texture image of the mixed element and the second texture image of the insertion element is then mixed-rendered with the third texture image at the position corresponding to the first position data. In this way, dynamic insertion of elements during rendering of the source video frames can be realized better and more effectively. Moreover, since the first position data indicating the corresponding position of the mixed element in the current source video frame is obtained in advance, images can be quickly rendered to the corresponding positions during rendering, which further reduces asynchrony in the dynamic element insertion process, improves the user's experience when watching dynamically inserted images, and helps improve user stickiness.
As shown in fig. 8, in an embodiment of the electronic terminal 100 of the present application, the electronic terminal 100 may be the above-mentioned viewer terminal 30 and includes a processor 110, a memory 120, and a communication circuit. The memory 120 and the communication circuit are coupled to the processor 110.
The memory 120 is used for storing computer programs, and may be a RAM (Random Access Memory), a ROM (Read-Only Memory), or another type of storage device. In particular, the memory may include one or more computer-readable storage media, which may be non-transitory. The memory may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory is used to store at least one program code.
The processor 110 is used for controlling the operation of the electronic terminal 100, and the processor 110 may also be referred to as a CPU (Central Processing Unit). The processor 110 may be an integrated circuit chip having signal processing capabilities. The processor 110 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor 110 may be any conventional processor or the like.
The processor 110 is configured to execute the computer program stored in the memory 120 to implement the rendering processing method described in the above embodiments of the rendering processing method for video support element insertion.
In some embodiments, the electronic terminal 100 may further include: a peripheral interface 130 and at least one peripheral. The processor 110, memory 120, and peripheral interface 130 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 130 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 140, display 150, audio circuitry 160, and power supply 170.
The peripheral interface 130 may be used to connect at least one peripheral related to I/O (Input/output) to the processor 110 and the memory 120. In some embodiments, processor 110, memory 120, and peripheral interface 130 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 110, the memory 120, and the peripheral interface 130 may be implemented on separate chips or circuit boards, which is not limited by the embodiment.
The radio frequency circuit 140 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 140 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 140 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 140 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 140 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 140 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 150 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 150 is a touch display screen, the display 150 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 110 as a control signal for processing. At this point, the display 150 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 150, disposed on the front panel of the electronic terminal 100; in other embodiments, there may be at least two displays 150, respectively disposed on different surfaces of the electronic terminal 100 or in a folding design; in still other embodiments, the display 150 may be a flexible display disposed on a curved surface or a folded surface of the electronic terminal 100. Furthermore, the display 150 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 150 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The audio circuitry 160 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting the electrical signals to the processor 110 for processing or to the radio frequency circuit 140 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the electronic terminal 100. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 110 or the radio frequency circuit 140 into sound waves. The speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into a sound wave audible to humans, or convert an electrical signal into a sound wave inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 160 may also include a headphone jack.
The power supply 170 is used to supply power to the various components in the electronic terminal 100. The power supply 170 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 170 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast charging technology.
For detailed description of functions and execution processes of each functional module or component in the embodiment of the electronic terminal according to the present application, reference may be made to the description in the embodiment of the rendering processing method for inserting a video support element according to the present application, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed electronic terminal 100 and rendering processing method may be implemented in other manners. For example, the above-described embodiment of the electronic terminal 100 is merely illustrative; the division into modules or units is only a division by logical function, and other division manners may be used in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Referring to fig. 9, if the integrated unit is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium 200. Based on such understanding, the part of the technical solution of the present application that in essence contributes beyond the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions/computer programs for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk, as well as electronic terminals equipped with such a storage medium, such as a computer, a mobile phone, a notebook computer, a tablet computer, or a camera.
The description of the execution process of the program data in the computer-readable storage medium may refer to the above description of the embodiment of the rendering processing method for inserting the video support element in the present application, and will not be described herein again.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (15)

1. A rendering processing method for video support element insertion is characterized by comprising the following steps:
acquiring a first texture image of a mixed element corresponding to a current source video frame of a source video, and first position data of the mixed element corresponding to a corresponding position in the current source video frame;
acquiring a second texture image of an insertion element corresponding to the mixed element;
performing mixed rendering on the first texture image and the second texture image to obtain a mixed texture image;
rendering the mixed texture image in a third texture image of the current source video frame according to the first position data until rendering of all the source video frames of the source video is completed.
2. The rendering processing method according to claim 1, wherein:
the mixing and rendering the first texture image and the second texture image to obtain a mixed texture image includes:
and processing the second texture image into a shape matched with the shape of the first texture image to obtain the mixed texture image.
3. The rendering processing method according to claim 2, wherein:
the processing the second texture image to have a shape matching the shape of the first texture image to obtain the hybrid texture image comprises:
and processing each pixel of the second texture image by using transparency information corresponding to the pixel value of each pixel of the first texture image so as to enable the shape of the second texture image to be matched with the shape of the first texture image.
4. The rendering processing method according to any one of claims 1 to 3, characterized in that:
the obtaining a first texture image of a mixed element corresponding to a current source video frame of a source video includes:
creating a first blank texture image;
and acquiring the RGB data of the mixed elements, and rendering the RGB data of the mixed elements on the first blank texture image to obtain the first texture image.
5. The rendering processing method according to claim 4, wherein:
the acquiring the RGB data of the mixed element includes:
acquiring a current output video frame corresponding to the current source video frame and second position data of the mixing element in the current output video frame; the current output video frame is provided with an image of a current source video frame and an image of the mixing element at intervals;
and acquiring RGB pixel values of each pixel of the image of the mixed element in the current output video frame by using the second position data.
6. The rendering processing method according to claim 5, wherein:
the rendering the mixed texture image in the third texture image of the current source video frame according to the first position data includes:
creating a third blank texture image;
and acquiring RGB data of the current source video frame, and rendering the RGB data of the current source video frame on the third blank texture image to obtain the third texture image.
7. The rendering processing method according to claim 6, wherein:
the acquiring RGB data of the current source video frame includes:
and acquiring the RGB pixel value of each pixel of the image of the current source video frame in the current output video frame.
8. The rendering processing method according to claim 7, wherein:
the obtaining the RGB data of the source video frame and rendering the RBG data of the source video frame on the third blank texture image to obtain the third texture image includes:
acquiring a pixel value of each pixel of a control element corresponding to the current source video frame from the current output video frame; the image of the current source video frame, the image of the mixing element and the image of the control element are arranged on the current output video frame at intervals;
and processing each pixel of the third blank texture image rendered with the RGB data of the current source video frame by using transparency information corresponding to the pixel value of each pixel of the control element to obtain the third texture image.
9. The rendering processing method according to claim 8, wherein:
the current output video frame comprises a first area, a second area and a third area which are arranged at intervals, wherein the first area is used for arranging images of the current source video frame, the second area is used for arranging images of each mixing element corresponding to the current source video frame at intervals, and the third area is used for arranging images of the control elements corresponding to the current source video frame; the length of the current output video frame is twice the length of the current source video frame, the width of the current output video frame is the same as the width of the current source video frame, and the length and the width of the first region are correspondingly consistent with the length and the width of the current source video frame.
10. The rendering processing method according to claim 5, wherein:
the obtaining of the second texture image of the insertion element corresponding to the blend element includes:
creating a second blank texture image;
and acquiring the RGB data of the inserted element, and rendering the RGB data of the inserted element on the second blank texture image to obtain the second texture image.
11. The rendering processing method according to claim 10, wherein:
the RGB data of the insertion element is obtained, including
Acquiring index information of the mixing element corresponding to the current source video frame according to the sequence number of the current output video frame;
and acquiring the RGB data of the corresponding insertion element by using the address indicated by the index information.
12. The rendering processing method according to claim 11, wherein:
the acquiring the RGB data of the corresponding insertion element using the address indicated by the index information includes:
searching a local address of the corresponding insertion element in a Map structure array by using the index information, wherein the Map structure array is used for storing the index information and the local address of the corresponding insertion element in an associated manner;
RGB data of the insert element pointed to by the local address is obtained.
13. The rendering processing method according to claim 11, wherein:
the acquiring a current output video frame corresponding to the current source video frame includes:
receiving a dynamic effect resource file from a server;
analyzing the dynamic effect resource file to obtain an output video file and description information;
and extracting the current output video frame from the output video file, and extracting the first position data, the second position data and the index information corresponding to the current source video frame from the description information.
14. An electronic terminal comprising a processor, a memory, and communication circuitry; the memory and the communication circuit are coupled to the processor, the memory storing a computer program executable by the processor to implement the rendering processing method of any of claims 1-13.
15. A computer-readable storage medium, characterized in that a computer program is stored, the computer program being executable by a processor to implement the rendering processing method according to any one of claims 1 to 13.
CN202210365078.8A 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium Active CN114915839B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210365078.8A CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210365078.8A CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114915839A true CN114915839A (en) 2022-08-16
CN114915839B CN114915839B (en) 2024-04-16

Family

ID=82763695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210365078.8A Active CN114915839B (en) 2022-04-07 2022-04-07 Rendering processing method for inserting video support element, electronic terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114915839B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140015826A1 (en) * 2012-07-13 2014-01-16 Nokia Corporation Method and apparatus for synchronizing an image with a rendered overlay
CN112184856A (en) * 2020-09-30 2021-01-05 广州光锥元信息科技有限公司 Multimedia processing device supporting multi-layer special effect and animation mixing
CN112714357A (en) * 2020-12-21 2021-04-27 北京百度网讯科技有限公司 Video playing method, video playing device, electronic equipment and storage medium
CN113457160A (en) * 2021-07-15 2021-10-01 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114286175A (en) * 2021-12-23 2022-04-05 天翼视讯传媒有限公司 Method for playing video mosaic advertisement


Also Published As

Publication number Publication date
CN114915839B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN109191549B (en) Method and device for displaying animation
US7889192B2 (en) Mobile equipment with three dimensional display function
CN109271125B (en) Screen display control method and device of split type terminal equipment and storage medium
CN110297917B (en) Live broadcast method and device, electronic equipment and storage medium
US20200257487A1 (en) Display method and system for wireless intelligent multi-screen display
CN113436301B (en) Method and device for generating anthropomorphic 3D model
CN106803984B (en) Method and device compatible with VR and television functions
CN111541907A (en) Article display method, apparatus, device and storage medium
CN114363696B (en) Display processing method for inserting video support element, electronic terminal and storage medium
CN114428597A (en) Multi-channel terminal screen projection control method and device, screen projector and storage medium
CN110958464A (en) Live broadcast data processing method and device, server, terminal and storage medium
TW200814718A (en) Scenario simulation system and method for a multimedia device
US20120127280A1 (en) Apparatus and method for generating three dimensional image in portable terminal
CN113645476B (en) Picture processing method and device, electronic equipment and storage medium
JP2014110034A (en) Portable terminal, terminal program and toy
TW201305847A (en) Digital signage apparatus, portable device synchronization system, and method thereof
CN114928748A (en) Rendering processing method, terminal and storage medium of dynamic effect video of virtual gift
JP5224352B2 (en) Image display apparatus and program
CN114915839B (en) Rendering processing method for inserting video support element, electronic terminal and storage medium
CN112492331B (en) Live broadcast method, device, system and storage medium
CN212519189U (en) VR (virtual reality) host supporting multi-mode video sharing and video sharing system
CN111325702B (en) Image fusion method and device
CN113625983A (en) Image display method, image display device, computer equipment and storage medium
CN116896664A (en) Rendering processing method, terminal, server and storage medium for dynamic video
CN112770167A (en) Video display method and device, intelligent display terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant