CN114827722A - Video preview method, device, equipment and storage medium - Google Patents

Video preview method, device, equipment and storage medium

Info

Publication number
CN114827722A
CN114827722A
Authority
CN
China
Prior art keywords
track
materials
rendering
video preview
grouped
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210381717.XA
Other languages
Chinese (zh)
Inventor
方鹏
刘兴邦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202210381717.XA
Publication of CN114827722A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4344Remultiplexing of multiplex streams, e.g. by modifying time stamps or remapping the packet identifiers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The application discloses a video preview method, apparatus, device, and storage medium, where the method includes: acquiring at least one material segment from the material to be edited and generating a corresponding timeline track, the timeline track containing the track materials being previewed and produced; and grouping the track materials, then performing drawing task processing and rendering on the grouped track materials. Layered real-time preview is achieved through timeline material segmentation: with the current preview time on the timeline as the reference, frames are split by track, the segmented track materials are dispatched to different drawing tasks, multiple tasks run in parallel, and asynchronous layered rendering is performed according to material level. The user can thus edit freely and smoothly in real time, without limitation on material type, material format, number of material layers, or frame rate.

Description

Video preview method, device, equipment and storage medium
Technical Field
The present application relates to the field of video editing technologies, and in particular, to a video preview method, apparatus, device, and storage medium.
Background
In web-based non-linear editing software, the traditional approach is for the front-end machine to receive the user's UI operations, convert them into data and operation commands for the non-linear editing system, and drive the back-end machine to decode, composite, and render; the result is then sent to the front-end machine for playback and display. In other words, every edit on the Web side requires a round of communication with the server, and the result is returned to the Web side for preview only after server-side decoding completes. This background-compositing approach to Web-side preview therefore depends on the back end, cannot provide real-time preview, and can hardly meet the production needs of today's Internet users.
Disclosure of Invention
The main purpose of the application is to provide a video preview method, apparatus, device, and storage medium, so as to solve the technical problem that video production preview methods in the prior art cannot achieve real-time preview and cannot meet user requirements.
In order to achieve the above object, the present application provides a video preview method, where the video preview method includes:
acquiring at least one material segment in a material to be edited, and generating a corresponding timeline track, wherein the timeline track comprises previewed and manufactured track materials;
and grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
Preferably, the step of grouping the track materials, and performing drawing task processing and rendering on the grouped track materials includes:
grouping the track materials, and performing drawing task processing on the grouped track materials;
and performing layered rendering on the track material after the drawing task is processed, based on a preset level.
Preferably, the step of grouping the track materials and performing drawing task processing on the grouped track materials includes:
grouping the track materials according to the types of the track materials to obtain grouped materials;
and matching the drawing tasks corresponding to the grouped materials, and performing asynchronous processing on the grouped materials.
Preferably, the step of matching the rendering task corresponding to the grouped material and performing asynchronous processing on the grouped material includes:
screening out a material list previewed at the current moment in the grouped materials;
and when the material list is changed, matching a drawing task corresponding to the track material in the material list, and performing asynchronous processing on the track material through the drawing task.
Preferably, the segment time and the time stamp of the at least one material segment are recorded on the timeline track;
after the step of generating the corresponding timeline track, the method further includes:
loading material segments based on the segment time and the timestamp.
Preferably, the step of performing layered rendering on the track material after the drawing task is processed, based on a preset level, includes:
acquiring an identification number of a track material after the drawing task is processed;
based on the identification number, adding the track material into a canvas;
and according to the level of the track material, performing layered rendering on the track material in the canvas.
Preferably, before the step of obtaining at least one material segment of the material to be edited, the method further includes:
transcoding and segmenting a material to be edited to obtain at least one alternative material segment;
the step of obtaining at least one material segment in the material to be edited comprises the following steps:
and acquiring at least one material segment in the alternative material segments.
The present application further provides a video preview device, the video preview device including:
the acquisition module is used for acquiring at least one material segment in the material to be edited and generating a corresponding timeline track, wherein the timeline track comprises previewed and manufactured track material;
and the preview module is used for grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
The present application further provides a video preview device, the video preview device including: a memory, a processor, and a program stored on the memory for implementing the video preview method, wherein
the memory is used for storing the program for implementing the video preview method;
and the processor is used for executing that program so as to implement the steps of the video preview method.
The present application further provides a storage medium, where the storage medium stores one or more programs that can be executed by one or more processors to implement the steps of any one of the video preview methods described above.
Compared with the prior art, in which the editing process cannot be previewed in real time and the production needs of today's Internet users can hardly be met, the video preview method, apparatus, device, and storage medium provided by the application acquire at least one material segment from the material to be edited and generate a corresponding timeline track, the timeline track containing the track materials being previewed and produced; the track materials are then grouped, and drawing task processing and rendering are performed on the grouped track materials. Layered real-time preview is achieved through timeline material segmentation: with the current preview time on the timeline as the reference, frames are split by track, the segmented track materials are dispatched to different drawing tasks, multiple tasks run in parallel, and asynchronous layered rendering is performed according to material level, so that the user can edit freely and smoothly in real time, without limitation on material type, material format, number of material layers, or frame rate.
Drawings
Fig. 1 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a first embodiment of a video preview method according to the present application;
FIG. 3 is a schematic diagram of a hierarchical relationship in the video preview method of the present application;
fig. 4 is a functional block diagram of a video preview device according to a preferred embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present application.
The terminal in the embodiment of the application may be a PC, or a mobile terminal device having a display function, such as a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or the like.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Sensors may include light sensors, motion sensors, and others. Specifically, the light sensors may include an ambient light sensor, which can adjust the brightness of the display screen according to the ambient light, and a proximity sensor, which can turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration), vibration-recognition functions (such as a pedometer and tapping), and so on. The mobile terminal may of course also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a network operation control application program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be used to invoke a network operation control application stored in the memory 1005.
Referring to fig. 2, an embodiment of the present application provides a video preview method, where the video preview method includes:
step S100, acquiring at least one material segment in a material to be edited, and generating a corresponding timeline track, wherein the timeline track comprises track materials which are previewed and manufactured;
and step S200, grouping the track materials, and performing rendering task processing and rendering on the grouped track materials.
The method comprises the following specific steps:
step S100, acquiring at least one material segment in a material to be edited, and generating a corresponding timeline track, wherein the timeline track comprises track materials which are previewed and manufactured;
in this embodiment, it should be noted that the video preview method can be applied to a video preview apparatus belonging to a video preview system belonging to a video preview device.
In this embodiment, the specific application scenarios may be:
in the non-linear editing software of the web end, the traditional method is that a front-end machine receives UI operation of a user, the UI operation is converted into data and an operation command of a non-linear editing system, a back-end machine is driven to decode, synthesize and render, and then the result is sent to the front-end machine to be played and displayed. That is, as long as the Web side needs to communicate with the server side once after editing once, the result is returned to the Web side for previewing after the decoding of the server side is completed, so that the method for previewing the Web side by background synthesis depends on the back end, real-time previewing cannot be performed, and the production requirements of current internet users are difficult to meet.
In the application, the video preview system acquires at least one material segment from the material to be edited and generates a corresponding timeline track, the timeline track containing the track materials being previewed and produced; the track materials are then grouped, and drawing task processing and rendering are performed on the grouped track materials. Layered real-time preview is achieved through timeline material segmentation: with the current preview time on the timeline as the reference, frames are split by track, the segmented track materials are dispatched to different drawing tasks, multiple tasks run in parallel, and asynchronous layered rendering is performed according to material level, so that the user can edit freely and smoothly in real time, without limitation on material type, material format, number of material layers, or frame rate.
That is, the application solves the problem that existing video editing cannot be previewed in real time, which degrades the user experience and can hardly meet user requirements.
In this embodiment, the material to be edited is obtained, and a new video is produced by editing and creating from it. The material to be edited may be a complete piece of material or a section of a complete material; that is, at least one material segment of the material to be edited is obtained, and a corresponding timeline track is generated from the at least one material segment. It should be noted that the material to be edited may be material from the video software or platform system, or material uploaded by the user.
The data produced during editing is kept in the memory of the storage server and fetched by the player in the preview window as the data source for editing preview. The editing data is updated during editing and pushed to the server for storage at regular intervals, managed on a per-project basis, so that the next time the user edits the project, the editing data is downloaded from the server into memory as the material to be edited, restored to the editing material list and editing tracks, and again serves as the data source for editing preview.
Further, the timeline track contains the track materials being previewed and produced. Track materials include special effects, filters, subtitles, pictures, audio, and so on, and each track material has a corresponding data structure. During editing, the user cuts and splices clips on the timeline tracks of the video editing platform, and both preview and production are based on the track data structure. The timeline-based tracks may consist, from top to bottom, of a main track and several auxiliary tracks of particular types; material segments on the main track snap together from left to right, while material segments on auxiliary tracks do not snap.
In this embodiment, when the system detects the user cutting and splicing material based on the timeline, it acquires the corresponding material segments and track materials, generates the track materials and their data structures on the timeline, and synchronizes them to the storage server at regular intervals, so that the storage server stores the track-material data structures and real-time video editing operations can proceed. It can be understood that the non-linear editing system converts the information edited by the user into a timeline data structure, which ensures consistency between front-end preview and production export and makes it convenient for the user to continue editing later.
The data structure of the track materials consists of attribute data describing the various types of materials and effects. That is, it is a list of single tracks, where the structure of a single track comprises the track type, an ID (identity), a weight, and a list of several material segments. Tracks of types such as special effects, filters, subtitles, stickers, and audio consist of several material segments of that type; the video track consists of picture and video materials; the main track consists of several types of materials and effects such as videos, pictures, subtitles, stickers, and transitions. It can therefore be understood that the higher a track is positioned (from top to bottom) in the video editor, the higher its weight and the higher its level in the composition.
As an example, the data structure of each type of material segment is as follows:
A subtitle-type material segment has the data structure fields: ID, type, width, height, center coordinates, scale, rotation angle, track start time, track end time, clip start time, clip end time, font settings (text content, color, alignment, style), and font animation (entrance duration, animation name);
a special-effect-type material segment: ID, type, effect name, duration, track start time, track end time, effect cycle duration, and effect canvas range;
a filter-type material segment: ID, type, filter name, filter strength, LUT image width and height, track start time, track end time, and LUT image download path;
a sticker-type material segment: ID, type, width, height, top-left coordinates, track start time, track end time, and sticker download path;
a picture-type material segment: ID, type, resource address, width, height, top-left coordinates, crop coordinates, track start time, track end time, crop start time, crop end time, and compressed-picture download address;
a transition-type material segment: previous material ID, next material ID, previous material occurrence time, next material occurrence time, transition name, transition duration, track start time, track end time, and transformation matrix;
a video-type material segment: ID, type, resource address, width, height, top-left coordinates, crop coordinates, track start time, track end time, clip start time, clip end time, compressed-video download address, video duration, resolution, volume, playback speed, and transition information.
Further, the data structure of the preview window is also included: the width and height of the playback window, the aspect ratio, and so on.
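The track and segment data structures described above can be sketched in TypeScript as follows. This is an illustrative rendering only; the field names and the helper function are assumptions, not taken from the patent.

```typescript
// Illustrative TypeScript model of the track-material data structures.
// Field names are hypothetical; they mirror the fields listed above.

interface BaseSegment {
  id: string;
  type: "subtitle" | "effect" | "filter" | "sticker" | "picture" | "video";
  trackStart: number; // track start time, in seconds
  trackEnd: number;   // track end time, in seconds
}

interface SubtitleSegment extends BaseSegment {
  type: "subtitle";
  width: number;
  height: number;
  center: [number, number];
  scale: number;
  rotation: number;
  clipStart: number;
  clipEnd: number;
  font: { text: string; color: string; alignment: string; style: string };
  animation?: { entranceDuration: number; name: string };
}

// A single track: type, ID, weight, and a list of material segments.
interface Track {
  type: string;
  id: string;
  weight: number; // higher weight = higher layer in the composition
  segments: BaseSegment[];
}

// Duration a segment occupies on its track.
function trackDuration(seg: BaseSegment): number {
  return seg.trackEnd - seg.trackStart;
}
```

The weight field carries the top-to-bottom track ordering mentioned above, so the renderer can later sort materials by level.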
It should be noted that the data structure of a track material is created when the user adds the material to a track; it therefore changes in real time, and conditions are set for updating it so that the system can edit and preview the track material in real time. For example, the data structure is updated when the user is detected dragging a whole material segment to a new position on the timeline; when the start or end time of an audio/video cut is adjusted; when material segments are added or deleted; or when a material segment is edited or its basic settings are changed, where editing a material segment includes operations on the material such as position, width and height, scaling, rotation, and picture cropping.
Further, before obtaining at least one material segment in the material to be edited, the method includes the following steps:
a1, transcoding and segmenting a material to be edited to obtain at least one alternative material segment;
the step of obtaining at least one material segment of the material to be edited includes:
and step B1, acquiring at least one material segment in the alternative material segments.
In this embodiment, the material to be edited on the video software or platform is obtained first; it may have been uploaded by the user or may be material already in the video software or platform system. Since the player on the Web (World Wide Web) side supports a limited set of audio/video formats, the material to be edited must be transcoded into a format the Web side supports, yielding the transcoded material. The transcoded material is then segmented according to a rule and cut into at least one alternative material segment. The rule may be a fixed duration, i.e., the transcoded material is divided into several alternative material segments of fixed duration, so that material segments can be downloaded during preview. The duration of each slice is fixed; its specific value depends on the situation or on the user's editing needs and is not limited here.
It can be understood that obtaining at least one material segment for editing from the material to be edited refers to obtaining a material segment from among the alternative material segments produced by the transcoding and splitting process.
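The fixed-duration splitting rule can be sketched as follows. This is a minimal sketch under stated assumptions: the patent does not fix the slice length, and the names used here are illustrative.

```typescript
// Sketch of fixed-duration segmentation: given the total duration of a
// transcoded material and a slice length (both in seconds), compute the
// alternative material segments to be downloaded on demand during preview.

interface MaterialSlice {
  index: number;
  start: number; // offset from the beginning of the material, in seconds
  end: number;
}

function sliceMaterial(totalDuration: number, sliceLength: number): MaterialSlice[] {
  const slices: MaterialSlice[] = [];
  for (let start = 0, i = 0; start < totalDuration; start += sliceLength, i++) {
    // The last slice is shortened so it ends exactly at the material's end.
    slices.push({ index: i, start, end: Math.min(start + sliceLength, totalDuration) });
  }
  return slices;
}
```

For example, a 25-second material with 10-second slices yields three slices, the last covering 20–25 s; during preview only the slice covering the current time needs downloading.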
As an example, the video preview system comprises a proxy server, a transcoding server and a storage server, wherein the transcoding server is used for processing the format of the material and converting the initial material into a web supportable format; the storage server is used for storing the material data in the clipping process.
It should be noted that each audio/video material added corresponds to a player, which downloads the material segments, transcodes them, and controls playback. This initialization work is performed in advance so that the player is ready for decoding, and each audio/video material may be divided into several material segments.
Wherein, the segment time and the time stamp of the at least one material segment are recorded on the timeline track;
after the generating of the corresponding timeline track, the method further comprises the following steps:
and C, loading the material segments based on the segment time and the time stamp.
In this embodiment, material segments are edited based on the timeline, and the track materials on the timeline and their corresponding data structures are generated. While the corresponding timeline track is generated from at least one material segment of the material to be edited, the segment time and the timestamp of each material segment on the timeline track are recorded, where the segment time is the duration of the material segment and the timestamp is the point in time of the material segment's position on the timeline of the video. The corresponding material segments are then loaded based on the segment time and the timestamp on the timeline track and edited.
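Loading by segment time and timestamp can be sketched as a lookup: given the current preview time, find the segment whose timeline interval covers it. The names and the `url` field are illustrative assumptions.

```typescript
// Sketch of segment lookup for loading: each entry records where the segment
// starts on the timeline (timestamp) and how long it lasts (segmentTime).

interface TimelineSegment {
  timestamp: number;   // start position of the segment on the timeline, seconds
  segmentTime: number; // duration of the segment, seconds
  url: string;         // hypothetical download address of the slice
}

// Return the segment covering the current preview time, or undefined if the
// preview head is past the last segment.
function segmentAt(
  segments: TimelineSegment[],
  previewTime: number
): TimelineSegment | undefined {
  return segments.find(
    (s) => previewTime >= s.timestamp && previewTime < s.timestamp + s.segmentTime
  );
}
```

The player can then download only the returned segment rather than the whole material, which is what makes segment-level loading cheap enough for real-time preview.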
And step S200, grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
In this embodiment, the track materials are grouped, each group is assigned a corresponding drawing task, and the different groups of track materials are passed to their drawing tasks for processing, so that multiple drawing tasks run in parallel. The track materials are rendered once the drawing tasks complete, and task execution is accelerated through caching and the GPU (graphics processing unit), so that editing and rendering are performed locally on the Web side and the editing effect is shown to the user in real time.
The step of grouping the track materials and performing drawing task processing and rendering on the grouped track materials includes the following steps:
step D, grouping the track materials, and performing drawing task processing on the grouped track materials;
and E, performing layered rendering on the track material after the drawing task is processed, based on a preset level.
In this embodiment, the track materials whose segment time corresponds to the timestamp are obtained based on the timestamp; the track materials are grouped according to a grouping condition, and each group is matched to its corresponding drawing task for asynchronous processing, which speeds up track-material processing. Then, based on the preset level of the track materials, layered rendering is performed in level order on the track materials that have completed drawing-task processing, finishing the preview. It should be noted that a new preview request is not executed until all tasks are complete.
As an example, a certain grouping condition may be grouping according to the type of the structure data of the track material.
Further, grouping the track materials and performing drawing task processing on the grouped track materials includes the following steps:
step D1, grouping the track materials according to the types of the track materials to obtain grouped materials;
and D2, matching the drawing tasks corresponding to the grouped materials, and performing asynchronous processing on the grouped materials.
In this embodiment, the track materials have specific time points on the timeline track. When grouping is performed, the timestamp corresponding to the target time point is determined, the target track materials corresponding to that timestamp are obtained, and the target track materials are grouped according to the type of their structure data to obtain the grouped materials. It can be understood that grouping is performed by the data structure type of the track material (such as the subtitle-type, special-effect-type, filter-type and map-type material segments mentioned above), so that all videos, all pictures, all subtitles, all maps and so on each form a grouped material. The corresponding drawing tasks are matched according to the data structure types of the grouped materials, and the grouped materials are processed asynchronously by their drawing tasks. A drawing task is a processing task that prepares the data of certain track materials, such as the character data, font size and display size of a subtitle; the processing parameters differ between the drawing tasks of different material segments, and these parameters can be set in the timeline track or in the track corresponding to the material.
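As an example, the grouping by structure-data type and the parallel dispatch to per-type drawing tasks described above can be sketched as follows; the type names and the task signatures are illustrative assumptions:

```javascript
// Group track materials by the type field of their structure data
// (video, picture, subtitle, map, effect, ...).
function groupByType(materials) {
  const groups = new Map();
  for (const m of materials) {
    if (!groups.has(m.type)) groups.set(m.type, []);
    groups.get(m.type).push(m);
  }
  return groups;
}

// Each group is handed to the drawing task registered for its type;
// the tasks run in parallel and do not affect one another.
async function dispatchDrawTasks(groups, taskByType) {
  const jobs = [];
  for (const [type, items] of groups) {
    const task = taskByType[type];
    if (task) jobs.push(task(items));
  }
  return Promise.all(jobs); // a new preview waits for all tasks to finish
}
```

The `Promise.all` at the end reflects the note above: a new preview request is not executed until every drawing task has completed.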
The step of matching the drawing tasks corresponding to the grouped materials and performing asynchronous processing on the grouped materials comprises the following steps:
step D21, screening out a material list previewed at the current moment from the grouped materials;
and D22, when the material list changes, matching the drawing task corresponding to the track material in the material list, and performing asynchronous processing on the track material through the drawing task.
In this embodiment, the material list used in the current picture, that is, the material list previewed at the current time, is screened out of the grouped materials. Both the grouped materials and the screened material data previewed at the current time change dynamically; once a listener detects that the material list currently being previewed has changed, the data of the material list is handed to the drawing task for that type to be processed asynchronously.
It should be noted that each drawing task processes the data of the grouped material of its corresponding type and can process several grouped materials at a time; the processing logic executed is the same, and the tasks execute in parallel without affecting one another.
As an example, as the preview time varies, the data of each type of grouped material currently previewed may change. When there is a change, a differentiated update is performed: for example, when an attribute of a subtitle such as its text content, color, position or size changes, the subtitle drawing task is executed once.
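As an example, the screening of the currently previewed material list and the differentiated update on change can be sketched as follows; the change key and attribute names are illustrative assumptions:

```javascript
// Filter the grouped materials down to those visible at the preview time.
function activeAt(materials, time) {
  return materials.filter(
    (m) => time >= m.timestamp && time < m.timestamp + m.segmentTime
  );
}

// Hand the list to the drawing task only when it actually changed.
function makeListWatcher(onChange) {
  let lastKey = "";
  return (list) => {
    // A cheap change key: ids plus the attributes that trigger a redraw.
    const key = JSON.stringify(
      list.map((m) => [m.id, m.text, m.color, m.x, m.y])
    );
    if (key !== lastKey) {
      lastKey = key;
      onChange(list); // fire the drawing task only on real changes
    }
  };
}
```

Calling the watcher twice with an unchanged list fires the task only once, which is the differentiated update described above.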
Further, the step of performing layered rendering on the track material after the rendering task is processed based on a preset level includes the following steps:
step E1, acquiring the identification number of the track material after the drawing task is processed;
step E2, adding the track material into the canvas based on the identification number;
and E3, performing layered rendering on the track material in the canvas according to the level of the track material.
In this embodiment, when the material list changes, a video drawing task is executed. Whether a track material already exists in the canvas is determined by the identification number of the track material in the material list; if it exists, the material corresponding to that identification number is removed from the canvas and then added again. The drawing task executes a render once the track material has been added to the canvas. During rendering, the track materials are rendered layer by layer, one by one, in the order of the hierarchy levels they belong to, so as to complete the preview.
Specifically, all objects (pictures, texts, rectangles, etc.) in the canvas form an array list, and adjusting the display level means adjusting the object's subscript position in that array. The drawing task adds the track material to the canvas and then performs one render; during rendering the canvas is first emptied, and then the canvas API (Application Programming Interface) is called in sequence to draw according to the object array. If the object is a special effect, the canvas is rendered via the GPU using WebGL (Web Graphics Library, a 3D drawing protocol). Fig. 3 shows the hierarchical relationship between the video material and the individual track materials.
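As an example, the layered render pass described above, where the display level determines draw order and a render first empties the canvas and then draws the object array in sequence, can be sketched as follows; `clearFn` and `drawFn` stand in for the real canvas/WebGL calls:

```javascript
// One render pass: clear the canvas, then draw objects in level order
// (lower level first, so higher levels end up on top).
function renderLayers(objects, clearFn, drawFn) {
  clearFn();
  [...objects]
    .sort((a, b) => a.level - b.level)
    .forEach((obj) => drawFn(obj));
}

// Adjusting the display level is adjusting the object's position
// in the draw order.
function setLevel(objects, id, newLevel) {
  const obj = objects.find((o) => o.id === id);
  if (obj) obj.level = newLevel;
  return objects;
}
```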
In this embodiment, all elements present at that moment are found based on the preview timestamp and grouped by type, and each grouped material is input to the drawing task corresponding to its type for processing; when a task detects that its material list has changed, it immediately starts to execute the drawing task and the rendering. The preview processing logic for the different types of grouped material is as follows:
For a grouped material of video or picture type, the video image or static picture is obtained as the drawing input; the original video image is cropped and scaled according to the coordinates of the drawing material and the displayed width and height, and then drawn to the canvas. Specifically, a picture object is created from the video image, the original picture is scaled to the displayed width and height (if the picture needs cropping, it is cropped first and then scaled), the coordinates of the object are set, and the object element is added to the canvas.
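As an example, the crop-then-scale computation for a video or picture object can be sketched as follows; the field names are illustrative assumptions:

```javascript
// Compute the source crop rectangle and the scale factors needed to fit
// the displayed width/height. Crop first, then scale, as described above.
function fitCropScale(srcW, srcH, dispW, dispH, crop /* {x,y,w,h} or null */) {
  const region = crop ?? { x: 0, y: 0, w: srcW, h: srcH };
  return {
    sx: region.x, sy: region.y, sw: region.w, sh: region.h, // crop region
    scaleX: dispW / region.w, // scale the cropped region to display size
    scaleY: dispH / region.h,
  };
}
```

The result maps directly onto the source/destination arguments of a canvas `drawImage`-style call.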
For a grouped material of special-effect or filter type, the special effect or filter is an image object acting on the canvas. The image texture, original image coordinates, cropping coordinates, texture coordinates, special-effect parameters and the like are taken as input, WebGL executes the image processing algorithm to complete the render, and the output image is drawn to the canvas.
For a grouped material of transition type, a user can directly add a transition to a section of video, or between two sections of video, during clipping. To keep the pixels consistent, a blank base map (canvas) of the canvas size is created first; transformation matrices for displacement, scaling, rotation, flipping and cropping are calculated from the input picture attributes, and the transition (or filter) is created with the progress, the transition pictures and the transformation matrices as input. The blank base map is added to the canvas and placed at its hierarchy level; the created transition is then applied, that is, the input images are transformed and filled, the special-effect algorithm is executed to linearly mix the images, and the result is drawn on the blank canvas. Before the transition is applied, the picture elements input to the transition are hidden from the canvas, and the canvas image generated by the transition is displayed instead.
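As an example, the linear image mixing a transition performs can be sketched per channel on plain arrays; in the embodiment this algorithm runs in WebGL on the GPU, so the function below is only an illustration of the blend itself:

```javascript
// Linear mix of two frames at progress t in [0, 1]:
// out = (1 - t) * a + t * b, per channel.
function mixFrames(a, b, t) {
  return a.map((v, i) => (1 - t) * v + t * b[i]);
}
```

At t = 0 the output is the first frame, at t = 1 the second; intermediate progress values produce the cross-fade.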
For a grouped material of map type, the maps existing at the current playhead time point are drawn: maps that should no longer be displayed are cleared from the canvas, and whether a map already exists in the canvas is determined from the ID of the map material. If it exists, its attribute values are set and it is re-rendered; if not, the map object is added and its hierarchy level is then set for rendering.
For a grouped material of subtitle type, the subtitle objects existing at the current playhead time point are drawn:
clearing ungrouped text objects and text-frame objects existing on the canvas;
determining whether the subtitle object has the reset attribute. If so, the user has clicked the operation button that resets every attribute, and the current objects on the canvas are cleared first. If not, the text-related attribute values are set according to the attributes of the current object; by default the font width is calculated from the four corners of the text position, with the scaling ratio being a set value, e.g. 1, and an attribute fixHeight (fixed height) is added to limit the maximum height for later mapped-subtitle text input and avoid output outside the frame;
It should be noted that grouped materials of the subtitle type include mapped subtitles, batch subtitles, and subtitles drawn for the first time after a recording is converted to text. The attribute of the subtitle is determined, and different preview processing is performed according to that attribute. Specifically, the following examples are included:
Example one: if the current subtitle is a non-mapped subtitle and is not "a subtitle drawn for the first time after a recording is converted to text" (i.e., the isFirstChange attribute is not true), the width of the text object is reset to the width recorded in the subtitle data, that is, the actual width of the current subtitle object; after this setting, the text object and the outer frame are kept equally wide, with no left or right margins. Because a mapped subtitle has a frame, its text width cannot equal the outer frame width; and the width of a subtitle drawn for the first time after speech-to-text conversion is already consistent with the default width set in the third step, so no reset is needed.
Example two: if a subtitle object matching the ID already exists on the canvas, and the current subtitle is a non-batch subtitle and is not a subtitle drawn for the first time after a recording is converted to text, the attribute values are set according to the new attributes and the subtitle is re-rendered;
Example three: if the subtitle is a batch subtitle, or a subtitle drawn for the first time after a recording is converted to text, the subtitle is removed from the canvas and a new text object is generated according to the new attribute values; after the new text object is generated, the following processing is performed:
a. for a subtitle drawn for the first time after the recording is converted to text, since no position information exists for it yet, the subtitle object is drawn in advance and its coordinates are set to ensure centered display; the data attribute values of the object in the track are then updated according to the centered object, the isFirstChange attribute is deleted, and the listener triggers a redraw;
b. if the current subtitle object is a batch subtitle added later, the existing batch attributes need to be inherited: the center point of the existing batch subtitles is obtained to set the center point of the current subtitle, along with the other position attributes, and the attributes are updated;
c. if the height of the generated text is less than fixHeight, the position of the text is adjusted to the middle of fixHeight;
Example four: the process of previewing a text subtitle object with a text background box is as follows:
step F1, if the current text-group object exists on the canvas, reset the attribute values in the object, re-render it, and set the display level;
step F2, if the text-group object does not exist on the canvas, determine whether there is a background:
if not, directly render a new text-group object from the data and set its display level;
if yes: when the background is a GIF (Graphics Interchange Format) image, parse the current frame as the background and generate a new subtitle-group object; when the background is static, use the static image as the background to generate a new subtitle-group object. It should be noted that when a background is present, a corresponding display level needs to be set.
In this embodiment, the corresponding drawing material is drawn by controlling the canvas, and the drawing is completed in multiple passes following the hierarchy order of the materials, so that the asynchronous layered rendering of materials, special effects and transitions realizes a real-time preview of the video editing effect.
As an example, during the execution of a drawing task, elements that should no longer be displayed are first removed from the canvas, since each element corresponds to a unique ID in the canvas; if the element to be drawn already exists, a rendering update is performed on it.
As an example, taking video: when the track material is video material, a player is built to load the asset one-to-one when the material is added to the timeline track. The material segment of the displayed picture is downloaded according to the timestamp of the displayed picture and the segment file is cached (if the file is already cached, its file data does not need to be downloaded again); the file data of the segment is converted into a video-stream coding format supported by the player and mounted to the player, obtaining the original video picture of the frame to be previewed. A video drawing task is then executed: the canvas is searched for the unique ID of the video clip, and the video clip is removed from the canvas before being added. A picture object is created from the video image, the original picture is scaled to the displayed width and height (cropped first if cropping is needed), the coordinates of the object are set, and the object element is added to the canvas. In the drawing task, a render is executed once the created elements have been added to the canvas: the canvas is first emptied, then the canvas API is called in sequence to draw according to the object array, and if the element is a special effect, the canvas is rendered via the GPU using WebGL, so that the user can edit freely and smoothly with a real-time preview.
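As a further example, the download-or-use-cache step when loading a material segment for the player can be sketched as follows; `fetchFn` and the cache interface are illustrative assumptions:

```javascript
// Return the segment file data, downloading only on a cache miss:
// a cached segment skips the download entirely, as described above.
async function loadSegment(cache, segmentId, fetchFn) {
  if (cache.has(segmentId)) return cache.get(segmentId); // cached: no download
  const data = await fetchFn(segmentId); // download the segment file
  cache.set(segmentId, data); // cache for subsequent previews
  return data;
}
```

Repeated previews of the same segment then hit the cache, so the download happens at most once per segment.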
Compared with the prior art, in which the editing process cannot be previewed in real time and the production needs of today's internet users are hard to meet, the video preview method, device, equipment and storage medium provided by the present application obtain at least one material segment in the material to be edited and generate the corresponding timeline track, where the timeline track includes the track materials being previewed and produced; the track materials are grouped, and drawing task processing and rendering are performed on the grouped track materials. With this method of layered real-time preview based on timeline material segmentation, the frame is segmented by track with the current preview time on the time axis as reference, the segmented track materials are divided among different drawing tasks, multiple tasks run in parallel, and asynchronous layered rendering is performed according to the material hierarchy. The user can thus edit freely and smoothly in real time, with free editing that is not limited by material type, material format, number of material layers or frame rate.
The present application further provides a video preview device, the video preview device including: a memory, a processor, and a program stored on the memory for implementing the video preview method, where
the memory is used for storing a program for realizing the video preview method;
the processor is used for executing the program for implementing the video preview method so as to implement the steps of the video preview method.
The specific implementation of the video preview device of the present application is substantially the same as that of the embodiments of the video preview method described above, and is not described herein again.
The present application also provides a video preview apparatus, and with reference to fig. 4, the video preview apparatus includes:
the acquisition module is used for acquiring at least one material segment in the material to be edited and generating a corresponding timeline track, wherein the timeline track comprises previewed and manufactured track material;
and the preview module is used for grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
The specific implementation of the video preview apparatus of the present application is substantially the same as that of the embodiments of the video preview method, and is not described herein again.
The present application provides a storage medium, and the storage medium stores one or more programs, which can be executed by one or more processors to implement the steps of the video preview method described in any one of the above.
The specific implementation of the storage medium of the present application is substantially the same as that of the embodiments of the video preview method, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A video preview method, the video preview method comprising:
acquiring at least one material segment in the material to be edited, and generating a corresponding timeline track, wherein the timeline track comprises preview and manufactured track material;
and grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
2. The video preview method of claim 1, wherein said step of grouping said track material, rendering said grouped track material with rendering tasks comprises:
grouping the track materials, and performing drawing task processing on the grouped track materials;
and performing layered rendering on the track material after the rendering task is processed based on a preset level.
3. The video preview method of claim 2, wherein said step of grouping said track material and rendering said grouped track material comprises:
grouping the track materials according to the types of the track materials to obtain grouped materials;
and matching the drawing tasks corresponding to the grouped materials, and performing asynchronous processing on the grouped materials.
4. The video preview method of claim 3, wherein said step of matching rendering tasks corresponding to said grouped material to asynchronously process said grouped material comprises:
screening out a material list previewed at the current moment in the grouped materials;
and when the material list is changed, matching a drawing task corresponding to the track material in the material list, and performing asynchronous processing on the track material through the drawing task.
5. The video preview method of any of claims 1 to 4 wherein a clip time and a time stamp of the at least one material clip are recorded on a timeline track;
after the step of generating the corresponding timeline track, the method further includes:
loading material segments based on the segment time and the timestamp.
6. The video preview method of claim 2, wherein the step of rendering the track material after the rendering task is processed based on the preset hierarchy level in a hierarchical manner comprises:
acquiring an identification number of a track material after the drawing task is processed;
based on the identification number, the track material is added into the canvas;
and according to the level of the track material, performing layered rendering on the track material in the canvas.
7. The video preview method of claim 1, wherein said step of obtaining at least one material segment of the material to be edited is preceded by the steps of:
transcoding and segmenting a material to be edited to obtain at least one alternative material segment;
the step of obtaining at least one material segment in the material to be edited comprises the following steps:
and acquiring at least one material segment in the alternative material segments.
8. A video preview apparatus, wherein the video preview apparatus comprises:
the acquisition module is used for acquiring at least one material segment in the material to be edited and generating a corresponding timeline track, wherein the timeline track comprises previewed and manufactured track material;
and the preview module is used for grouping the track materials, and performing drawing task processing and rendering on the grouped track materials.
9. A video preview device, the video preview device comprising: a memory, a processor, and a video preview program stored on the memory and executable on the processor, the video preview program configured to implement the steps of the video preview method of any of claims 1 to 7.
10. A storage medium having stored thereon a video preview program configured to implement the steps of the video preview method of any of claims 1 to 7.
CN202210381717.XA 2022-04-12 2022-04-12 Video preview method, device, equipment and storage medium Pending CN114827722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210381717.XA CN114827722A (en) 2022-04-12 2022-04-12 Video preview method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210381717.XA CN114827722A (en) 2022-04-12 2022-04-12 Video preview method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114827722A true CN114827722A (en) 2022-07-29

Family

ID=82534434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210381717.XA Pending CN114827722A (en) 2022-04-12 2022-04-12 Video preview method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114827722A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024046268A1 (en) * 2022-08-31 2024-03-07 北京字跳网络技术有限公司 Rendering hierarchy sequence adjustment method, and apparatus

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107736033A (en) * 2015-06-30 2018-02-23 微软技术许可有限责任公司 Layering interactive video platform for interactive video experience
CN110198486A (en) * 2019-05-28 2019-09-03 上海哔哩哔哩科技有限公司 A kind of method, computer equipment and the readable storage medium storing program for executing of preview video material
CN111510744A (en) * 2020-07-01 2020-08-07 北京美摄网络科技有限公司 Method and device for processing video and audio, electronic equipment and storage medium
CN112381907A (en) * 2020-11-12 2021-02-19 上海哔哩哔哩科技有限公司 Multimedia track drawing method and system
CN112860944A (en) * 2021-02-05 2021-05-28 北京百度网讯科技有限公司 Video rendering method, device, equipment, storage medium and computer program product
CN113473204A (en) * 2021-05-31 2021-10-01 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN113613066A (en) * 2021-08-03 2021-11-05 天翼爱音乐文化科技有限公司 Real-time video special effect rendering method, system, device and storage medium
CN113891113A (en) * 2021-09-29 2022-01-04 阿里巴巴(中国)有限公司 Video clip synthesis method and electronic equipment
CN114205612A (en) * 2021-12-31 2022-03-18 湖南快乐阳光互动娱乐传媒有限公司 Ultra-high-definition video processing method based on cloud computing and related equipment


Similar Documents

Publication Publication Date Title
CN109819313B (en) Video processing method, device and storage medium
US7071942B2 (en) Device for editing animating, method for editin animation, program for editing animation, recorded medium where computer program for editing animation is recorded
US9583140B1 (en) Real-time playback of an edited sequence of remote media and three-dimensional assets
CN112639691A (en) Video clip object tracking
US7073130B2 (en) Methods and systems for creating skins
CN109300181B (en) Animation of computer-generated display components of user interfaces and content items
JP5037574B2 (en) Image file generation device, image processing device, image file generation method, and image processing method
US9240070B2 (en) Methods and systems for viewing dynamic high-resolution 3D imagery over a network
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN111770288B (en) Video editing method, device, terminal and storage medium
US20110302513A1 (en) Methods and apparatuses for flexible modification of user interfaces
CN108876887B (en) Rendering method and device
US11606532B2 (en) Video reformatting system
EP1876569A1 (en) Data structure for expressing video object, program for generating data structure for expressing video object, method for generating data structure for expressing video object, video software development device, image processing program, video processing method, video processing device, and recordin
CN111914523B (en) Multimedia processing method and device based on artificial intelligence and electronic equipment
CN104394422A (en) Video segmentation point acquisition method and device
WO2018095253A1 (en) Method and device for making graphics interchange format chart
WO2018080826A1 (en) System and method for definition, capture, assembly and display of customized video content
CN110996150A (en) Video fusion method, electronic device and storage medium
CN111581564B (en) Webpage synchronous communication method implemented by Canvas
WO2023231235A1 (en) Method and apparatus for editing dynamic image, and electronic device
CN114827722A (en) Video preview method, device, equipment and storage medium
CN111951356A (en) Animation rendering method based on JSON data format
KR101771473B1 (en) Method and apparatus for generating responsive webpage
KR101771475B1 (en) Method and apparatus for generating responsive webpage

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination