CN116366879A - Video export method, device, equipment and storage medium - Google Patents

Video export method, device, equipment and storage medium

Info

Publication number
CN116366879A
CN116366879A (application CN202211527935.6A)
Authority
CN
China
Prior art keywords
video
target
target video
videos
processing parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211527935.6A
Other languages
Chinese (zh)
Inventor
张伟 (Zhang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202211527935.6A priority Critical patent/CN116366879A/en
Publication of CN116366879A publication Critical patent/CN116366879A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION › H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
        • Server side (H04N21/20 Servers › H04N21/23 Processing of content or additional data › H04N21/234 Processing of video elementary streams):
            • H04N21/23418: involving operations for analysing video streams, e.g. detecting features or characteristics
            • H04N21/234309: reformatting by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
            • H04N21/234345: reformatting performed only on part of the stream, e.g. a region of the image or a time segment
            • H04N21/234363: reformatting by altering the spatial resolution, e.g. for clients with a lower screen resolution
            • H04N21/234381: reformatting by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
        • Client side (H04N21/40 Client devices › H04N21/43 Processing of content or additional data › H04N21/44 Processing of video elementary streams):
            • H04N21/44008: involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
            • H04N21/440218: reformatting by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
            • H04N21/440245: reformatting performed only on part of the stream, e.g. a region of the image or a time segment
            • H04N21/440263: reformatting by altering the spatial resolution, e.g. for displaying on a connected PDA
            • H04N21/440281: reformatting by altering the temporal resolution, e.g. by frame skipping
        • Content generation (H04N21/80 Generation or processing of content by content creator › H04N21/83 Generation or processing of protective or descriptive data › H04N21/845 Structuring of content):
            • H04N21/8456: decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video export method, apparatus, device, and storage medium. The method includes: obtaining an export instruction for a first target video, where the first target video applies first processing parameters to at least one segment of a reference video; determining that a first video exists, where the first video applies second processing parameters to at least one segment of the reference video; and, upon determining that segments of the first target video have the same processing parameters as corresponding segments of the first video, copying those segments from the first video and applying the first processing parameters to the segments whose parameters differ from those of the first video, thereby obtaining the first target video.

Description

Video export method, device, equipment and storage medium
Technical Field
The present application relates to the field of video processing technology, and in particular, though not exclusively, to a video export method, apparatus, device, and storage medium.
Background
In the related art, when performing editing and special-effect processing in video editing software, parameters often need to be modified repeatedly and the same video exported again and again; sometimes only a few parameters are changed, or only a handful of samples with different parameters need to be exported from the same video. Because each export takes a long time, working efficiency is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a video export method, apparatus, device, and storage medium.
In a first aspect, an embodiment of the present application provides a video export method, including: obtaining an export instruction for a first target video, where the first target video applies first processing parameters to at least one segment of a reference video; determining that a first video exists, where the first video applies second processing parameters to at least one segment of the reference video; and, upon determining that segments of the first target video have the same processing parameters as corresponding segments of the first video, copying those segments from the first video and applying the first processing parameters to the segments whose parameters differ from those of the first video, to obtain the first target video.
In a second aspect, an embodiment of the present application provides a video export apparatus, including: a first obtaining module configured to obtain an export instruction for a first target video, the first target video applying first processing parameters to at least one segment of a reference video; a first determining module configured to determine that a first video exists, the first video applying second processing parameters to at least one segment of the reference video; and a first processing module configured to, upon determining that segments of the first target video have the same processing parameters as the first video, copy those segments from the first video and apply the first processing parameters to the segments of the first target video whose parameters differ, to obtain the first target video.
In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, where the memory stores a computer program executable on the processor, and where the processor, when executing the program, implements the steps of the video export method according to the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video export method described in the embodiments of the present application.
In the embodiments of the present application, the first target video to be exported is compared with the already exported first video, so that segments of the first video whose processing parameters match those of the first target video can be copied directly, while the first processing parameters are applied only to the segments of the first target video whose parameters differ from those of the first video; this increases the export speed of the first target video.
Drawings
Fig. 1 is a schematic flowchart of a video export method according to an embodiment of the present application;
Fig. 2 is a flowchart of another video export method according to an embodiment of the present application;
Fig. 3 is a flowchart of another video export method according to an embodiment of the present application;
Fig. 4 is a flowchart of a repeated video export method according to an embodiment of the present application;
Fig. 5 is a flowchart of yet another video export method according to an embodiment of the present application;
Fig. 6 is a flowchart of a method for exporting multiple videos simultaneously according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a video export apparatus according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are further described in detail below with reference to the drawings and examples.
Fig. 1 is a schematic flowchart of a video export method according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps:
step 102: obtaining a first target video export instruction, wherein the first target video applies a first processing parameter to at least one segment of a reference video;
the first target video export instruction is used for exporting the first target video, the reference video may be a video source file, the first processing parameter may include at least one kind of processing parameter, the first processing parameter may be an editing parameter used when video editing is performed on the video source file, such as a filter parameter, a speed parameter, a contrast parameter, an audio parameter, etc., and the first processing parameter may also be an export parameter used when video exporting is performed on the edited video source file to obtain the first target video, such as a video format (e.g., MP4, MOV) and a resolution (e.g., 720P, 1080P, 4K) of the exported first target video; the same or different first processing parameters may be applied to different segments of the reference video to obtain the first target video.
In the case that the at least one segment includes a first segment, a second segment, a third segment, and a fourth segment, and the first processing parameter includes parameters X1, X2, and X3, the first target video may be obtained by applying the parameter X1 to the first segment, applying the parameter X2 to the second segment, and applying the parameter X2 and X3 to the third segment of the reference video, respectively.
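As an illustration only (the patent does not prescribe any data structures; all names below are hypothetical), the segment-plus-parameters arrangement in this example can be sketched as follows:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Segment:
    """A time range of the reference video plus the parameters applied to it."""
    start: float  # start time in seconds on the shared timeline
    end: float    # end time in seconds
    params: frozenset = field(default_factory=frozenset)

# The example above: X1 on the first segment, X2 on the second,
# X2 and X3 on the third, and the fourth segment left unmodified.
first_target = [
    Segment(0.0, 10.0, frozenset({"X1"})),
    Segment(10.0, 20.0, frozenset({"X2"})),
    Segment(20.0, 30.0, frozenset({"X2", "X3"})),
    Segment(30.0, 40.0, frozenset()),
]
```

The segment boundaries (0, 10, 20, 30, 40 seconds) are invented for the sketch; the patent only requires that segments lie on a common timeline.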
Step 104: determining that a first video exists, the first video applying a second processing parameter to at least one segment of the reference video;
the first video may be a video file that has been derived, i.e. the first video is a video derived by applying the second processing parameters to at least one segment of the video source file; the video source file here corresponds to a reference video. Similar to the first processing parameters, the second processing parameters may include at least one kind of processing parameters, where the first processing parameters may be at least partially the same as or different from the second processing parameters, the second processing parameters may be editing parameters used when video editing is performed on the video source file, and the second processing parameters may also be derived parameters used when video derivation is performed on the edited video source file to obtain the first video; the same or different second processing parameters may be applied to different segments of the reference video to obtain the first video.
Step 106: if the segments with the same processing parameters as the first video in the first target video are determined, copying the same segments in the first video, and applying the first processing parameters to the segments with different processing parameters from the first video in the first target video to obtain the first target video.
The first target video and the first video can be compared according to the same time axis, and if the processing parameters of the fragments corresponding to the time sequences of the first target video and the first video are the same, the same fragment in the first video is copied; if the processing parameters of the first target video and the fragments corresponding to the time sequence of the first video are different, applying the first processing parameters to the fragments, which are different from the processing parameters of the first video, in the first target video to obtain the first target video; the segments corresponding to the time sequence refer to the first video and the segments corresponding to the starting time and the ending time of the video frame in the first video.
For example, when the first target video includes a first segment, a second segment, and a third segment, the first video includes a fourth segment, a fifth segment, and a sixth segment, the first segment corresponds to the fourth segment in time sequence, the second segment corresponds to the fifth segment in time sequence, the third segment corresponds to the sixth segment in time sequence, when the parameter X1 is applied to the first segment, the parameter X2 is applied to the second segment, the parameter X3 is applied to the third segment, the parameter Y1 is applied to the first segment, the parameter Y2 is applied to the second segment, and when the parameter X1 and the parameter Y1 are the same for the third segment, the parameter X3 and the parameter Y3 are the same for the first segment and the fourth segment in time sequence, and the parameter X2 and the parameter Y2 are different for the second segment and the fifth segment in time sequence, the fourth segment and the sixth segment in the first video and the fourth segment in the first video and the second segment in the second video are directly copied, and the parameter Y2 is applied to the first target video.
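The copy-or-re-encode decision in this example can be sketched as a per-segment comparison on the shared timeline. This is a simplified sketch with hypothetical names: real segments would carry encoded frames, and `reencode` stands in for the actual codec work.

```python
def export_with_reuse(target_segments, cached_segments, reencode):
    """For each target segment, copy the time-aligned cached segment if its
    parameters match; otherwise re-encode the segment with its own parameters.
    Segments are (start, end, params) tuples aligned on the same timeline."""
    out = []
    cache = {(s, e): p for (s, e, p) in cached_segments}
    for (start, end, params) in target_segments:
        if cache.get((start, end)) == params:
            out.append(("copied", start, end))         # reuse exported bytes
        else:
            out.append(reencode(start, end, params))   # encode only the change
    return out

# The parameters of the first and third segments match the cached export,
# while the middle segment differs, so only it is re-encoded.
target = [(0, 10, "X1"), (10, 20, "X2"), (20, 30, "X3")]
cached = [(0, 10, "X1"), (10, 20, "Y2"), (20, 30, "X3")]
result = export_with_reuse(target, cached,
                           lambda s, e, p: ("encoded", s, e))
```

Matching here is exact equality of time range and parameters; a real implementation would also have to confirm that export format and resolution match before copying bytes.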
In the embodiment of the present application, by comparing the first target video to be exported with the exported first video, the segments of the first video having the same processing parameters as those of the first target video may be directly copied, and the first processing parameters may be applied to the segments of the first target video having different processing parameters from those of the first video, so that the export speed of the first target video may be increased.
Fig. 2 is a flowchart of another video export method according to an embodiment of the present application. As shown in Fig. 2, the method includes the following steps:
Step 202: obtaining a first target video export instruction, wherein the first target video applies a first processing parameter to at least one segment of a reference video;
Step 204: determining that i videos exist in a cache, wherein the j-th video applies a j-th processing parameter to at least one segment of the reference video, j being an integer greater than or equal to 2 and less than or equal to i;
Step 206: comparing the first target video with each of the i videos;
Step 208: if it is determined that segments of the first target video have the same processing parameters as segments of the i videos, copying the matching segments from the i videos and applying the first processing parameters to the segments of the first target video whose parameters differ from those of the i videos, to obtain the first target video.
Here, similarly to the comparison of the first target video with the first video in step 106, the first target video may be compared with the j-th of the i videos against the same timeline: if a segment of the first target video and the time-aligned segment of the j-th video have the same processing parameters, the matching segment is copied from the j-th video; if their parameters differ, the first processing parameters are applied to that segment of the first target video. Time-aligned segments are segments of the first target video and the j-th video whose video frames share the same start and end times.
In the embodiment of the present application, by comparing the first target video to be exported with a plurality of processed videos on the same timeline, segments of the i videos whose processing parameters match those of the first target video can be copied directly, while the first processing parameters are applied only to the segments that differ. This minimizes the material that must be re-edited and encodes and decodes only the changed portions, which increases the export speed of the first target video.
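As a standalone illustration (hypothetical names; a sketch, not the claimed implementation), comparing a target against i cached exports can be written as searching each cache for a time-aligned segment with matching parameters, and encoding only when no cache holds a match:

```python
def export_from_caches(target_segments, cached_videos, reencode):
    """cached_videos: a list of segment lists, one per cached export.
    Copy a segment from whichever cache matches; otherwise re-encode it.
    Segments are (start, end, params) tuples on a shared timeline."""
    out = []
    for (start, end, params) in target_segments:
        for idx, cached in enumerate(cached_videos):
            if any(s == start and e == end and p == params
                   for (s, e, p) in cached):
                out.append(("copied-from", idx, start, end))
                break
        else:  # no cache held a matching segment
            out.append(reencode(start, end, params))
    return out

# Three caches each cover different segments; only the last target
# segment has no match anywhere, so only it is encoded.
target = [(0, 1, "A"), (1, 2, "B"), (2, 3, "C"), (3, 4, "D")]
caches = [[(0, 1, "A")],
          [(1, 2, "B"), (2, 3, "X")],
          [(2, 3, "C")]]
plan = export_from_caches(target, caches, lambda s, e, p: ("encoded", s, e))
```

The linear scan over caches is for clarity; an index keyed by (start, end, params) would avoid rescanning every cached export per segment.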
In some embodiments, as shown in Fig. 3, after step 208 the method further includes:
Step 210: exporting the first target video;
Step 212: storing the first target video in the cache as the (i+1)-th video;
Step 214: when a second target video export instruction is obtained, comparing the second target video with each of the i+1 videos.
If it is determined that segments of the second target video have the same processing parameters as segments of the i+1 videos, the matching segments are copied from the i+1 videos, and the corresponding processing parameters are applied to the segments whose parameters differ, thereby obtaining the second target video.
As shown in Fig. 4, assume the first target video applies at least one first processing parameter to a first segment of the reference video, where the first segment includes at least a first sub-segment P1, a second sub-segment P2, a third sub-segment P3, a fourth sub-segment P4, a fifth sub-segment P5, a sixth sub-segment P6, a seventh sub-segment P7, and an eighth sub-segment P8, and the i videos include the reference video and the already exported first, second, and third videos.
Assume that segment Q1 of the reference video is time-aligned with the third sub-segment P3 and has the same processing parameters, and segment Q2 of the reference video is time-aligned with the seventh sub-segment P7 and has the same processing parameters; segment Q3 of the first video matches the first sub-segment P1, and segment Q4 of the first video matches the sixth sub-segment P6; segment Q5 of the second video matches the second sub-segment P2, and segment Q6 of the second video matches the fourth sub-segment P4; and segment Q7 of the third video matches the fifth sub-segment P5. Then segments Q3, Q5, Q1, Q6, Q7, Q4, and Q2 are copied, and the first processing parameters are applied to sub-segment P8, whose parameters differ from those of all i videos, to obtain the first target video.
The reference video may be the original video clip material (also called the original video clip source file); the first video may be a first hidden copy of the original material, the second video a second hidden copy, and the third video a third hidden copy. When video files with different processing parameters need to be exported repeatedly, the most recently exported video file can be cached in memory and used as a hidden copy of the original material. For the same project file, each exported video in the cache can serve as a hidden copy, yielding multiple editable source materials. At the next export, the hidden copies of the different versions are compared on the same timeline: material in a hidden copy whose format and parameters match the video to be exported is copied directly, while material whose parameters have changed is encoded and exported in real time. Reusing the cached videos in this way minimizes the material that must be re-edited, so that only the changed portions need to be encoded and decoded when a video is repeatedly modified and exported.
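The hidden-copy cache described above could be organized as a small per-project store in which each finished export is appended and the oldest copies are evicted. The names, the keying by project, and the eviction policy are illustrative assumptions, not details from the patent.

```python
class ExportCache:
    """Keeps each finished export as a 'hidden copy' keyed by project file,
    so later exports of the same project can reuse matching segments."""

    def __init__(self, max_copies=4):
        self.max_copies = max_copies
        self.copies = {}  # project_id -> list of exported segment lists

    def add(self, project_id, segments):
        """Store a finished export as the newest hidden copy of the project."""
        versions = self.copies.setdefault(project_id, [])
        versions.append(list(segments))
        if len(versions) > self.max_copies:
            versions.pop(0)  # drop the oldest hidden copy

    def lookup(self, project_id):
        """Return all hidden copies available for this project (may be empty)."""
        return self.copies.get(project_id, [])
```

A memory budget rather than a fixed count would be more realistic for full video frames, but the fixed count keeps the sketch short.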
Fig. 5 is a flowchart of still another video deriving method according to an embodiment of the present application, as shown in fig. 5, where the method includes the following steps:
step 502: obtaining a first target video export instruction, wherein the first target video applies a first processing parameter to at least one segment of a reference video; the first target video export instruction is used for exporting N target videos;
step 504: determining that a first video exists, the first video applying a second processing parameter to at least one segment of the reference video;
step 506: if it is determined that segments in the first target video have the same processing parameters as the first video, copying the matching segments from the first video, and applying the first processing parameters to the segments of the first target video whose processing parameters differ from those of the first video, to obtain the first target video;
step 508: deriving an mth target video according to a preset time interval based on the derivation sequence of the N target videos; m is an integer greater than or equal to 2 and less than or equal to N; when the export sequence is 1 st, the first video is the reference video; when the export order is m, the first video comprises the reference video and video fragments of exported parts in the target videos of which the export orders are 1 st to m-1 st.
When the first target video is exported, it can be compared with the reference video; if segments identical to the reference video exist in the first target video, those segments are copied from the reference video, and the first processing parameters are applied to the segments whose processing parameters differ from the reference video, to obtain the first target video. When the fourth target video is exported, it is compared with the reference video and with the exported portions of the first through third target videos; if segments with the same processing parameters are found, they are copied from the reference video and the first through third target videos, and the first processing parameters are applied to the segments whose processing parameters differ, to obtain the fourth target video.
In some embodiments, with the preset time interval denoted t, the export of the second target video may begin a time t after the export of the first target video begins; the second target video can then directly copy segments whose format and parameters match the exported portions of the reference video and of the first target video. Likewise, the export of the third target video may begin a time t after that of the second, and it can directly copy matching segments from the exported portions of the reference video, the first target video and the second target video.
As shown in fig. 6, when videos with different parameters need to be exported at the same time, their encoding, decoding and export can be started one after another with a fixed time difference, each later video being compared against the already-exported segments of the earlier ones. At time t1 the first target video begins exporting and is compared with the exported first segment of the reference video; at time t2 the second target video begins exporting and is compared with the first and second segments of the reference video and the third segment of the first target video. In this way, multiple video files with different parameters can be exported concurrently, and two different video files can be exported in roughly the time of a single export.
In the embodiment of the application, the video is exported successively at the preset time interval, and the video with the later time can be compared with the exported parts of other videos with the earlier time, so that the videos with various processing parameters can be exported simultaneously, and the export speed of the videos with the same material and different processing parameters is further accelerated.
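The staggered export of steps 502 through 508 can be sketched as a sequential simulation. In a real exporter the m-th export would start a real time t after the (m-1)-th and run concurrently; here a growing pool of exported segments stands in for the exported portions of earlier videos. The tuple-based segment encoding and the `encode` callback are illustrative assumptions.

```python
def staggered_export(targets, encode):
    """Export targets in order; each later export may copy segments
    already produced by earlier exports (simulating the time offset t)."""
    pool = {}        # (start, end, params) -> exported segment; grows over time
    results = []
    for target in targets:               # the m-th export starts at time m*t
        out = []
        for seg in target:               # seg = (start, end, params)
            if seg in pool:
                out.append(pool[seg])    # direct copy from an earlier export
            else:
                encoded = encode(seg)    # real-time encode only the new part
                pool[seg] = encoded
                out.append(encoded)
        results.append(out)
    return results
```

For two targets that share one segment, the encoder runs three times rather than four: the shared segment is encoded once by the first export and copied by the second.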
In some embodiments, prior to step 508, the method further comprises:
step 507: determining the export sequence of the corresponding target video based on the category number of the first processing parameters corresponding to each of the N target videos;
the more the number of the types of the first processing parameters is, the later the export order of the target video is.
In the embodiment of the application, videos with more categories of processing parameters can be exported later and videos with fewer categories exported first; a later video with more parameter categories can then copy the portions whose parameters match an earlier video with fewer categories, further improving the export speed.
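The ordering rule can be sketched as a sort on the number of distinct parameter categories per target; segments are modeled here as `(start, end, params)` tuples purely for illustration.

```python
def category_count(target):
    """Number of distinct processing-parameter categories used by a target."""
    cats = set()
    for _start, _end, params in target:
        cats |= set(params)
    return len(cats)

def export_order(targets):
    """Targets with fewer parameter categories export first, so later,
    more complex targets can copy the already-exported matching segments."""
    return sorted(range(len(targets)), key=lambda m: category_count(targets[m]))
```

A target applying three parameter categories is then scheduled after one applying a single category, regardless of their original order.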
In some embodiments, prior to step 507, the method further comprises: acquiring the byte number of the first target video and/or the category number of the first processing parameters;
and applying the first processing parameter to at least one segment of the first target video to obtain the first target video under the condition that the byte number of the first target video is smaller than or equal to a first preset threshold value and/or the category number of the first processing parameter is smaller than or equal to a second preset threshold value.
In this embodiment of the present application, if the number of bytes of the first target video is small, or it uses few categories of processing parameters, the first processing parameters may be applied directly to at least one segment of the first target video without comparison against other exported videos, which increases the flexibility of video export.
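The threshold check can be sketched as follows. The concrete threshold values are illustrative assumptions; the text only requires that a first and second preset threshold exist, and the "and/or" is rendered here as a logical OR.

```python
def export_directly(num_bytes, num_param_categories,
                    byte_threshold=8 * 1024 * 1024, category_threshold=2):
    """Decide whether to skip the cache-comparison step and encode the
    target directly. Threshold values are illustrative assumptions."""
    return (num_bytes <= byte_threshold
            or num_param_categories <= category_threshold)
```

Small or simple exports thereby bypass the comparison machinery entirely, while large exports with many parameter categories still go through it.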
In some embodiments, the method comprises the steps of:
step S702: obtaining a first target video export instruction, wherein the first target video applies a first processing parameter to at least one segment of a reference video; the first target video export instruction is used for exporting N target videos;
step S704: exporting the 1st to n-th target videos according to a first mode, and exporting the (n+1)-th to N-th target videos according to a second mode, where n is an integer greater than or equal to 2 and less than N;
the first mode/the second mode includes step S7041: determining that i videos exist in the cache, and applying a j processing parameter to at least one fragment of the reference video by a j video; comparing the first target video with each of the i videos; if the segments with the same processing parameters in the first target video and the i videos are determined, copying the same segments in the i videos, and applying the first processing parameters to the segments with different processing parameters in the first target video to obtain the first target video;
the second mode/the first mode includes step S7042: determining that a first video exists, the first video applying a second processing parameter to at least one segment of the reference video; deriving an mth target video according to a preset time interval based on the derivation sequence of the N target videos; when the export sequence is 1 st, the first video is the reference video; when the export order is m, the first video comprises the reference video and video fragments of exported parts in the target videos of which the export orders are 1 st to m-1 st.
In the case that the first mode includes step S7041 and the second mode includes step S7042, the 1st to n-th target videos are exported according to step S7041 and the (n+1)-th to N-th target videos according to step S7042. Conversely, in the case that the first mode includes step S7042 and the second mode includes step S7041, the 1st to n-th target videos are exported according to step S7042 and the (n+1)-th to N-th target videos according to step S7041.
In the embodiment of the application, the video can be exported in a mode of combining two export methods, so that the flexibility and the efficiency of video export are further improved.
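A dispatcher combining the two export modes can be sketched as below; the callable arguments standing in for the cache-comparison mode and the staggered-interval mode are assumptions for illustration.

```python
def combined_export(targets, n, first_mode, second_mode):
    """Export the first n targets with one mode and the rest with the other.
    Which concrete mode plays which role is configurable, matching the
    'first mode / second mode' symmetry in the text."""
    if not 2 <= n < len(targets):
        raise ValueError("n must satisfy 2 <= n < N")
    return ([first_mode(t) for t in targets[:n]]
            + [second_mode(t) for t in targets[n:]])
```

Swapping the two callables realizes the converse assignment of modes without changing the dispatch logic.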
It should be noted that, in the embodiment of the present application, if the video export method is implemented in the form of a software functional module and sold or used as a separate product, the video export method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in essence or a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing an electronic device (which may be a mobile phone, a tablet computer, a desktop computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television, a sensing device, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Fig. 7 is a schematic structural diagram of a video deriving device according to an embodiment of the present application, as shown in fig. 7, the device 700 includes: a first acquisition module 701, a first determination module 702 and a first processing module 703,
wherein:
a first obtaining module 701, configured to obtain a first target video export instruction, where the first target video applies a first processing parameter to at least one segment of a reference video;
a first determining module 702 configured to determine that a first video exists, where the first video applies a second processing parameter to at least one segment of the reference video;
a first processing module 703, configured to, if it is determined that segments in the first target video have the same processing parameters as the first video, copy the matching segments from the first video and apply the first processing parameters to the segments of the first target video whose processing parameters differ from those of the first video, to obtain the first target video.
In some embodiments, the apparatus further comprises: a second determining module, configured to determine that i videos exist in the cache, where the j-th video applies a j-th processing parameter to at least one segment of the reference video, j being an integer greater than or equal to 2 and less than or equal to i; a first comparison module, configured to compare the first target video with each of the i videos; and a second processing module, configured to, if it is determined that segments with the same processing parameters exist in the first target video and the i videos, copy the matching segments from the i videos and apply the first processing parameters to the segments of the first target video whose processing parameters differ, to obtain the first target video.
In some embodiments, the apparatus further comprises: a first export module for exporting the first target video; the storage module is used for storing the first target video as an i+1th video into the cache; and the second comparison module is used for comparing the second target video with each video in the i+1 videos when the second target video export instruction is acquired.
In some embodiments, the first target video export instructions are for exporting N target videos; the apparatus further comprises: the second export module is used for exporting the mth target video according to a preset time interval based on the export sequence of the N target videos; m is an integer greater than or equal to 2 and less than or equal to N;
when the export sequence is 1 st, the first video is the reference video; when the export order is m, the first video comprises the reference video and video fragments of exported parts in the target videos of which the export orders are 1 st to m-1 st.
In some embodiments, the apparatus further comprises: the third determining module is used for determining the export sequence of the corresponding target videos based on the category number of the first processing parameters corresponding to each of the N target videos; the more the number of the types of the first processing parameters is, the later the export order of the target video is.
In some embodiments, the first target video export instructions are for exporting N target videos; the apparatus further comprises: a third export module, configured to export the 1st to n-th target videos according to a first mode and the (n+1)-th to N-th target videos according to a second mode, where n is an integer greater than or equal to 2 and less than N. The first mode/the second mode includes: determining that i videos exist in the cache, where the j-th video applies a j-th processing parameter to at least one segment of the reference video; comparing the first target video with each of the i videos; and, if it is determined that segments with the same processing parameters exist in the first target video and the i videos, copying the matching segments from the i videos and applying the first processing parameters to the segments of the first target video whose processing parameters differ, to obtain the first target video. The second mode/the first mode includes: exporting the m-th target video according to a preset time interval based on the export order of the N target videos; when the export order is 1st, the first video is the reference video; when the export order is m-th, the first video comprises the reference video and the video segments of the exported portions of the target videos whose export orders are 1st to (m-1)-th.
In some embodiments, the apparatus further comprises: the second acquisition module is used for acquiring the byte number of the first target video and/or the category number of the first processing parameters; and the third processing module is used for applying the first processing parameters to at least one segment of the first target video to obtain the first target video under the condition that the byte number of the first target video is smaller than or equal to a first preset threshold value and/or the category number of the first processing parameters is smaller than or equal to a second preset threshold value.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
Correspondingly, an electronic device is provided in the embodiment of the present application. Fig. 8 is a schematic diagram of a hardware entity of the electronic device in the embodiment of the present application; as shown in fig. 8, the device 800 comprises a memory 801 and a processor 802, the memory 801 storing a computer program executable on the processor 802, and the processor 802 implementing the steps of the video export method of the above-described embodiments when the program is executed.
The memory 801 is configured to store instructions and applications executable by the processor 802, and may also cache data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by various modules in the processor 802 and the device 800, which may be implemented by a FLASH memory (FLASH) or a random access memory (Random Access Memory, RAM).
Accordingly, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the video derivation method provided in the above embodiments.
It should be noted here that: the description of the storage medium and the device embodiments above is similar to that of the method embodiments above, with similar benefits as the device embodiments. For technical details not disclosed in the embodiments of the storage medium and the method of the present application, please refer to the description of the embodiments of the apparatus of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of the units is only one logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware controlled by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk. Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the portions that contribute to the related art, may be embodied in the form of a software product stored in a storage medium and comprising instructions for causing a computer device (which may be a mobile phone, a tablet, a desktop computer, a personal digital assistant, a navigator, a digital telephone, a video telephone, a television, a sensing device, etc.) to perform all or part of the methods described in the various embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The methods disclosed in the several method embodiments provided herein may be combined arbitrarily without conflict to obtain new method embodiments. The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments. The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method or apparatus embodiments.
The above is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art will readily recognize that changes or substitutions within the technical scope disclosed in the present application are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A video derivation method, the method comprising:
obtaining a first target video export instruction, wherein the first target video applies a first processing parameter to at least one segment of a reference video;
determining that a first video exists, the first video applying a second processing parameter to at least one segment of the reference video;
if the segments with the same processing parameters as the first video in the first target video are determined, copying the same segments in the first video, and applying the first processing parameters to the segments with different processing parameters from the first video in the first target video to obtain the first target video.
2. The method of claim 1, wherein the method further comprises:
determining that i videos exist in a cache, the j-th video applying a j-th processing parameter to at least one segment of the reference video, wherein j is an integer greater than or equal to 2 and less than or equal to i;
comparing the first target video with each of the i videos;
if the segments with the same processing parameters in the first target video and the i videos are determined, the same segments in the i videos are copied, and the first processing parameters are applied to the segments with different processing parameters in the first target video and the i videos, so that the first target video is obtained.
3. The method of claim 2, wherein the method further comprises:
deriving the first target video;
storing the first target video as an i+1th video into the cache;
when a second target video export instruction is acquired, the second target video is compared to each of the i+1 videos.
4. The method of claim 1, wherein the first target video export instruction is to export N target videos; the method further comprises the steps of:
deriving an mth target video according to a preset time interval based on the derivation sequence of the N target videos; m is an integer greater than or equal to 2 and less than or equal to N;
when the export sequence is 1 st, the first video is the reference video; when the export order is m, the first video comprises the reference video and video fragments of exported parts in the target videos of which the export orders are 1 st to m-1 st.
5. The method of claim 4, wherein the method further comprises:
determining the export sequence of the corresponding target video based on the category number of the first processing parameters corresponding to each of the N target videos;
the more the number of the types of the first processing parameters is, the later the export order of the target video is.
6. The method of claim 1, wherein the first target video export instruction is to export N target videos; the method further comprises the steps of:
deriving according to a first mode for the 1st to n-th target videos, and deriving according to a second mode for the (n+1)-th to N-th target videos, wherein n is an integer greater than or equal to 2 and less than N;
the first mode/the second mode includes: determining that i videos exist in the cache, the j-th video applying a j-th processing parameter to at least one segment of the reference video; comparing the first target video with each of the i videos; and, if it is determined that segments with the same processing parameters exist in the first target video and the i videos, copying the matching segments from the i videos and applying the first processing parameters to the segments of the first target video whose processing parameters differ, to obtain the first target video;
the second mode/the first mode includes: deriving an mth target video according to a preset time interval based on the derivation sequence of the N target videos; when the export sequence is 1 st, the first video is the reference video; when the export order is m, the first video comprises the reference video and video fragments of exported parts in the target videos of which the export orders are 1 st to m-1 st.
7. The method of any one of claims 1 to 6, wherein the method further comprises: acquiring the byte number of the first target video and/or the category number of the first processing parameters;
and applying the first processing parameter to at least one segment of the first target video to obtain the first target video under the condition that the byte number of the first target video is smaller than or equal to a first preset threshold value and/or the category number of the first processing parameter is smaller than or equal to a second preset threshold value.
8. A video derivation device, the device comprising:
a first acquisition module for acquiring a first target video export instruction, the first target video applying a first processing parameter to at least one segment of a reference video;
a first determining module for determining that a first video exists, the first video applying a second processing parameter to at least one segment of the reference video;
and the first processing module is used for copying the same fragments in the first video if determining that the fragments with the same processing parameters exist in the first target video, and applying the first processing parameters to the fragments with different processing parameters in the first target video to obtain the first target video.
9. An electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the video derivation method of any one of claims 1-7 when the program is executed.
10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the video derivation method of any one of claims 1 to 7.
CN202211527935.6A 2022-11-30 2022-11-30 Video export method, device, equipment and storage medium Pending CN116366879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211527935.6A CN116366879A (en) 2022-11-30 2022-11-30 Video export method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116366879A true CN116366879A (en) 2023-06-30



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination