CN112738564A - Data processing method and device, electronic equipment and storage medium - Google Patents

Data processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112738564A
Authority
CN
China
Prior art keywords
video
data
video data
time point
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011585269.2A
Other languages
Chinese (zh)
Other versions
CN112738564B (en)
Inventor
董世永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chuangsheng Shilian Digital Technology Beijing Co Ltd
Original Assignee
Chuangsheng Shilian Digital Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chuangsheng Shilian Digital Technology Beijing Co Ltd
Priority to CN202011585269.2A
Publication of CN112738564A
Application granted
Publication of CN112738564B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/2353 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44012 - Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 - Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The application provides a data processing method and apparatus, an electronic device, and a storage medium: decompressing a to-be-processed data resource file in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data; acquiring the video data according to the input clip start time point and clip end time point to obtain video data to be compressed; and acquiring the text data corresponding to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a clipped resource file in the preset format. Because the clipped resource file in the preset format is composed of text data and video data in a document mode, the occupation of hard-disk storage space is reduced. Clipping of resource files in a specific format, i.e., the ccr format, can also be realized.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
Multimedia technology comprehensively processes and manages various kinds of media information, such as characters, data, graphics, images, animations and sounds, through a computer, and enables a user to interact with the computer in real time through multiple senses.
In the prior art, video editing and synthesis are relatively mature technologies. General-purpose video editing tools mainly clip a video and then synchronously generate a new video file. Such tools have a single function: they can only edit videos in common formats, cannot edit videos in specific formats, and the generated video files occupy a very large amount of hard-disk storage space.
Disclosure of Invention
Embodiments of the present invention provide a data processing method, an apparatus, an electronic device, and a storage medium, so as to overcome at least one of the above disadvantages.
In a first aspect, an embodiment of the present application provides a data processing method, including:
decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
acquiring the video data according to the input clip starting time point and the clip ending time point to obtain video data to be compressed;
and acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a resource file in a preset format after being edited.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
the decompression module is used for decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
the acquisition module is used for acquiring the video data according to the input clip starting time point and the clip ending time point to obtain video data to be compressed;
and the compression module is used for acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain the resource file in the preset format after being edited.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to implement the method described in any of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a storage medium storing computer-executable instructions that, when executed, implement a method described in any of the embodiments of the present application.
The application provides a data processing method and apparatus, an electronic device, and a storage medium: decompressing a to-be-processed data resource file in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data; acquiring the video data according to the input clip start time point and clip end time point to obtain video data to be compressed; and acquiring the text data corresponding to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a clipped resource file in the preset format. Because the clipped resource file in the preset format is composed of text data and video data in a document mode, and the video data in the document mode occupies little space compared with the prior-art approach of directly extracting the video frames of a high-definition video, the occupation of hard-disk storage space is reduced. Moreover, the resource file in the ccr format can be clipped, so that a user can remove the blank and unimportant parts of a video, which saves viewing time and solves the problem that, in the prior art, the ccr-format resource file described in this application cannot be clipped.
1. A method of data processing, the method comprising:
decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
acquiring the video data according to the input clip starting time point and the clip ending time point to obtain video data to be compressed;
and acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a resource file in a preset format after being edited.
2. The method according to claim 1, further comprising, after decompressing the data resource file to be processed in the preset format to obtain a decompressed resource file:
and if the single-frame playing time length of the video frames in the video data does not exceed the preset time length, detecting whether the video frames corresponding to the input editing starting time points are key frames or not.
3. The method according to claim 2, wherein the obtaining the video data according to the input clip start time point and the clip end time point to obtain the video data to be compressed comprises:
if the video frame corresponding to the clipping starting time point is not a key frame, detecting from the clipping starting time point forward to obtain a previous key frame of the clipping starting time point as a first key frame, and detecting from the clipping starting time point backward to obtain a next key frame of the clipping starting time point as a second key frame;
after the video frame data between the first key frame and the second key frame are decoded, encoding is carried out from the starting time point of the cutting, and first video data are obtained; copying video frame data between the second key frame and the video frame corresponding to the clipping ending time point to obtain second video data;
and combining the first video data and the second video data to obtain video data to be compressed.
4. The method according to claim 2, wherein the obtaining the video data according to the input clip start time point and the clip end time point to obtain the video data to be compressed comprises:
and if the video frame corresponding to the clipping starting time point is a key frame, copying video frame data between the clipping starting time point and the clipping ending time point to obtain video data to be compressed.
5. The method according to claim 1, wherein there are a plurality of resource files in the preset format, the method further comprising:
decompressing the resource files in the preset formats to obtain at least text data corresponding to the resource files in the preset formats and video data in a document mode;
merging the video data corresponding to each preset resource file according to a specified sequence;
updating the time stamp of the text data according to the time length of the merged video and the designated sequence;
and compressing the text data after updating the timestamp and the merged video data to obtain a merged resource file with a preset format.
6. The method according to claim 1 or 5, characterized in that the method further comprises:
reading the video data corresponding to the resource file with a preset format;
acquiring text data corresponding to a current video frame according to a timestamp of the current video frame in the video data;
splicing and rendering the current video frame and the text data corresponding to the current video frame to obtain video picture data;
and coding the video picture data through a multimedia processing tool to generate a video file.
7. A data processing apparatus, characterized in that the apparatus comprises:
the decompression module is used for decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
the video data acquisition module is used for acquiring the video data according to the input clip start time point and the clip end time point to obtain video data to be compressed;
and the compression module is used for acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain the resource file in the preset format after being edited.
8. The apparatus of claim 7, further comprising:
and the detection module is used for detecting whether the video frame corresponding to the input editing starting time point is a key frame or not if the single-frame playing time length of the video frame in the video data does not exceed the preset time length.
9. The apparatus according to claim 8, wherein the video data obtaining module is specifically configured to, if the video frame corresponding to the clip start time point is not a key frame, detect from the clip start time point forward to obtain a key frame before the clip start time point as a first key frame, and detect from the clip start time point backward to obtain a key frame after the clip start time point as a second key frame; decoding and encoding video frame data between the first key frame and the second key frame to obtain first video data; copying video frame data between the second key frame and the video frame corresponding to the clipping ending time point to obtain second video data; and combining the first video data and the second video data to obtain video data to be compressed.
10. The apparatus of claim 8, wherein the video data obtaining module is specifically configured to copy video frame data between the clip start time point and the clip end time point to obtain the video data to be compressed if the video frame corresponding to the clip start time point is a key frame.
11. The apparatus according to claim 7, wherein there are a plurality of resource files in the preset format, the apparatus further comprising: a merging module and an updating module;
the decompression module is further used for decompressing the resource files in the preset formats to obtain at least text data corresponding to the resource files in the preset formats and video data in a document mode;
the merging module is used for merging the video data corresponding to each preset resource file according to a specified sequence;
the updating module is used for updating the timestamp of the text data according to the time length of the merged video and the designated sequence;
and the compression module is also used for compressing the text data after the timestamp is updated and the merged video to obtain the merged resource file in the preset format.
12. The apparatus of claim 7 or 11, further comprising:
the reading module is used for reading the video data corresponding to the resource file with a preset format;
the text data acquisition module is further used for acquiring text data corresponding to the current video frame according to the timestamp of the current video frame in the video data;
the rendering module is used for splicing and rendering the current video frame and the text data corresponding to the current video frame to obtain video picture data;
and the generating module is used for coding the video picture data through a multimedia processing tool to generate a video file.
13. An electronic device, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to implement the method of any of claims 1-6 above.
14. A storage medium storing computer-executable instructions that, when executed, implement the method of any of claims 1-6.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart of a data processing method according to an embodiment of the present application;
fig. 2 is another flowchart of a data processing method according to an embodiment of the present application;
FIG. 3 is a further flowchart of data processing provided by embodiments of the present application;
FIG. 4 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following further describes specific implementation of the embodiments of the present invention with reference to the drawings.
An embodiment of the present application provides a data processing method, and in particular a method for clipping a resource file. As shown in fig. 1, the method includes the following steps:
step 101, decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, where the decompressed resource file includes text data and video data in a document mode, and the text data corresponds to the content of the video data.
The data resource file to be processed in the preset format is a file in a special format, and can be a file in a ccr format, wherein the content in a video is extracted into independent data according to categories; for example, the individual data extracted by category may include at least one of: picture data, chat data, animation data, brush interaction data. It is understood that other types of data may be set by those skilled in the art according to the actual situation, and this is only an example.
In this embodiment, the data resource file to be processed in the preset format may include text data and video data in a document mode; the text data may include one or more of chat data, animation data, brush interaction data, and the like, and the text data may be stored in the same meta file.
In this embodiment, the text data corresponds to the content of the video data; for example, if the video data represents the video content of 3 min-28 min of video A, the text data also represents the video content of 3 min-28 min of video A.
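As an illustration only, the following Python sketch shows one way such a preset-format (ccr) resource file could be unpacked with a standard zip routine; the archive member names used here (meta.json, video.mp4) are hypothetical and not defined by this application.

    import json
    import zipfile
    from pathlib import Path

    def unpack_ccr(ccr_path: str, out_dir: str):
        """Decompress a preset-format (ccr) resource file.

        Assumes the ccr file is a zip archive holding document-mode video data
        and a text-data file; the member names used below are hypothetical.
        """
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(ccr_path) as zf:
            zf.extractall(out)
        # Load the document-mode text data (chat, animation, brush data, ...).
        meta_path = out / "meta.json"      # hypothetical member name
        text_data = json.loads(meta_path.read_text(encoding="utf-8"))
        video_path = out / "video.mp4"     # hypothetical member name
        return text_data, video_path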
Optionally, for an educational video, after the resource file to be processed is decompressed by the zip algorithm, the video data therein may first be traversed to check whether any single frame has an excessively long playing duration (an abnormal video frame). An abnormal video frame indicates a pause in the lecture; for example, if the teacher pauses for 2 min during a live broadcast, that period may contain only 1 video frame, i.e., the frame rate within those 2 min is 1/120 fps, yet meta data continues to be generated as time goes on. To align the meta data with the video frames during the pause and to facilitate subsequent clipping, the video frame corresponding to the pause time point may be copied and inserted into the pause period so that the frame rate in that period meets a conventional frame rate, e.g., 25 fps. It should be noted that, when checking whether a single-frame playing duration is too long, the playing durations of the other frames may be used as a reference: a frame whose playing duration exceeds twice the average duration of normally played frames may be regarded as an abnormal video frame. Alternatively, a preset duration may be set empirically, and a frame whose single-frame playing duration exceeds the preset duration is regarded as an abnormal video frame.
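The abnormal-frame padding described above can be sketched as follows; the (timestamp, frame) representation is an assumption of the sketch, while the twice-average rule, the optional preset duration and the 25 fps target come from the paragraph above.

    def pad_abnormal_frames(frames, preset_duration=None, target_fps=25.0):
        """Detect abnormally long single-frame playing durations and pad them.

        `frames` is a list of (timestamp_seconds, frame_payload) tuples sorted
        by time; this representation is an assumption of the sketch. A frame is
        treated as abnormal when its playing duration exceeds the preset
        duration, or twice the average duration of the frames.
        """
        durations = [b[0] - a[0] for a, b in zip(frames, frames[1:])]
        if not durations:
            return frames
        avg = sum(durations) / len(durations)
        threshold = preset_duration if preset_duration is not None else 2 * avg
        padded = []
        for (ts, payload), dur in zip(frames, durations + [avg]):
            padded.append((ts, payload))
            if dur > threshold:
                # Copy the paused frame into the pause period so that the frame
                # rate there meets a conventional rate, e.g. 25 fps.
                step = 1.0 / target_fps
                t = ts + step
                while t < ts + dur:
                    padded.append((t, payload))
                    t += step
        return padded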
Optionally, if the playing duration of a single frame of a video frame in the video data does not exceed the preset duration, detecting whether the video frame corresponding to the input clip start time point is a key frame.
In this embodiment, the key frame refers to a frame where a key action in the motion or change of a character or an object is located, and whether the video frame is the key frame can be determined according to the flag bit of the video frame.
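A minimal sketch of this key-frame check, assuming the ffprobe command-line tool is available; the key_frame flag that ffprobe reports for each decoded frame plays the role of the flag bit mentioned above.

    import subprocess

    def is_keyframe_at(video_path: str, time_s: float, tol: float = 0.04) -> bool:
        """Check whether the frame at the clip start time point is a key frame.

        Relies on the ffprobe command-line tool (assumed to be installed); it
        reads a short interval around time_s and inspects the key_frame flag
        reported for each frame.
        """
        cmd = [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-read_intervals", f"{max(time_s - 1, 0)}%{time_s + 1}",
            "-show_entries", "frame=key_frame,pts_time",
            "-of", "csv=p=0",
            video_path,
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        for line in out.strip().splitlines():
            fields = line.split(",")
            if len(fields) < 2 or fields[1] in ("N/A", ""):
                continue
            key_flag, pts = fields[0], float(fields[1])
            if abs(pts - time_s) <= tol:
                return key_flag == "1"
        return False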
Step 102, acquiring the video data according to the input clip starting time point and the clip ending time point to obtain video data to be compressed.
The clip start time point and the clip end time point may be time points input by a user according to actual requirements; for example, if the clip start time point and the clip end time point input by the user are 5:28 and 13:28 respectively, the video data obtained is the video content from 5:28 to 13:28. Of course, the user may also acquire one video in segments according to actual requirements, for example acquiring 5:28 to 13:28, 14:23 to 16:23, 18:23 to 33:28, and so on; the present embodiment is not limited in this respect.
In one embodiment, if the video frame corresponding to the input clip start time point is detected to be a key frame, the video data to be compressed can be obtained by copying the video frame data between the clip start time point and the clip end time point.
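A hedged sketch of this key-frame case; using an ffmpeg stream copy for "copying the video frame data" is an assumption of the sketch, not a requirement stated by the application.

    import subprocess

    def copy_clip(video_path: str, start_s: float, end_s: float, out_path: str):
        """Copy the video frame data between the clip start and end time points.

        Stream copy (-c copy) avoids re-encoding, which is valid because the
        frame at the clip start time point is a key frame. Using ffmpeg here is
        an assumption of this sketch.
        """
        subprocess.run(
            ["ffmpeg", "-y",
             "-ss", str(start_s),            # seek to the clip start (a key frame)
             "-i", video_path,
             "-t", str(end_s - start_s),     # keep only the clipped duration
             "-c", "copy",                   # copy frame data without re-encoding
             out_path],
            check=True,
        )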
In another embodiment, if the video frame corresponding to the input clip start time point is detected not to be a key frame, the video data to be compressed is obtained as follows: detect forward from the clip start time point to obtain the previous key frame of the clip start time point as a first key frame, and detect backward from the clip start time point to obtain the next key frame of the clip start time point as a second key frame; decode the video frame data between the first key frame and the second key frame and then encode from the clip start time point to obtain first video data; copy the video frame data between the second key frame and the video frame corresponding to the clip end time point to obtain second video data; and finally, merge the first video data and the second video data to obtain the video data to be compressed.
In this embodiment, for example, if the clip start time point and the clip end time point input by the user are 5:28 and 13:28, and the video frame corresponding to 5:28 is not a key frame, the video frames can be traversed forward and backward from 5 min 28 s in chronological order to find key frames. If the first key frame found forward from 5:28 is the video frame at 5:23, it is taken as the first key frame; if the first key frame found backward from 5:28 is the video frame at 5:30, it is taken as the second key frame. The video frame data between the first key frame and the second key frame, namely the video frame data between 5:23 and 5:30, is then decoded; after the decoded data of the 5:23-5:28 period is obtained, the video frames corresponding to that period can be deleted to reduce the occupation of storage space. Encoding is then performed from the clip start time point 5:28 up to the second key frame (5:30), and the encoded data corresponding to the 5:28-5:30 period is taken as the first video data, which can be regarded as a header file. The video data between 5:30 and 13:28 is copied to obtain the second video data, which can be regarded as a body file. Finally, the header file and the body file can be merged or spliced into the video data of one clipped segment through the ffmpeg command line.
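The head-plus-body procedure of this example could look roughly as follows; the encoder choice (libx264), the intermediate file names and the use of the ffmpeg concat demuxer are assumptions of this sketch, and the final stream-copy concatenation further assumes the re-encoded head is parameter-compatible with the copied body.

    import subprocess

    def clip_from_non_keyframe(video: str, start_s: float, second_key_s: float,
                               end_s: float, out_path: str):
        """Clip when the frame at the clip start time point is not a key frame.

        Header part (first video data): decode from the previous key frame but
        encode only from the clip start time point to the second key frame.
        Body part (second video data): stream-copy from the second key frame to
        the clip end time point. The two parts are then concatenated.
        """
        # First video data: output-side seek forces decoding from the previous
        # key frame and re-encoding exactly from start_s.
        subprocess.run(["ffmpeg", "-y", "-i", video,
                        "-ss", str(start_s), "-to", str(second_key_s),
                        "-c:v", "libx264", "head.mp4"], check=True)
        # Second video data: copy without re-encoding from the second key frame.
        subprocess.run(["ffmpeg", "-y", "-ss", str(second_key_s), "-i", video,
                        "-t", str(end_s - second_key_s), "-c", "copy", "body.mp4"],
                       check=True)
        # Merge the header file and the body file into one clipped segment.
        with open("parts.txt", "w") as f:
            f.write("file 'head.mp4'\nfile 'body.mp4'\n")
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", "parts.txt", "-c", "copy", out_path], check=True)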
The obtained first video data can represent the complete picture of the video, and it is ensured that the video frame at the clip start time point is a key frame, so that picture corruption at the playback start point of the clipped video is avoided; in addition, encoding from the clip start time point ensures the accuracy of the clipped video's time period.
Step 103, acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a resource file in a preset format after being edited.
In this embodiment, if the timestamp range of the video to be compressed is 5:28-13:28, the text data corresponding to 5:28-13:28 is acquired, and the acquired text data and the video data to be compressed are compressed to obtain the clipped resource file in the ccr format. The corresponding text data is acquired according to the start time point and the duration of the video to be compressed. In this embodiment, content in the text data that is not within the time interval of the video to be compressed may be filtered out, so that the acquired text data corresponds to the video to be compressed.
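A minimal sketch of filtering the text data to the clip window and repacking it with the clipped video; the "ts" field name and the archive member names are hypothetical.

    import json
    import zipfile

    def repack_clip(text_data, clip_video_path: str, start_s: float, end_s: float,
                    out_ccr: str):
        """Filter the text data to the clip window and repack the ccr file.

        `text_data` is assumed to be a list of dicts with a numeric "ts" field
        in seconds; the field name and the archive member names are hypothetical.
        """
        # Filter out content that is not within the time interval of the clip.
        kept = [item for item in text_data if start_s <= item["ts"] <= end_s]
        with zipfile.ZipFile(out_ccr, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("meta.json", json.dumps(kept, ensure_ascii=False))
            zf.write(clip_video_path, arcname="video.mp4")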
According to the data processing method provided by the embodiment of the application, the obtained clipped resource file in the preset format is composed of text data and video data in a document mode. Compared with the prior art, which directly extracts the video frames of a high-definition video, the video data in the document mode occupies little space, so the occupation of hard-disk storage space is reduced. Moreover, the resource file in the ccr format can be clipped, so a user can remove the blank and unimportant parts of a video, which saves viewing time and solves the problem that, in the prior art, the ccr-format resource file described in this application cannot be clipped. In addition, in the prior art an individual video file obtained after clipping cannot be encrypted, whereas the resource file generated by clipping in this application is a compressed file that the user can encrypt, thereby protecting the privacy of the user.
An embodiment of the present application provides another data processing method, and in particular a method for merging resource files. As shown in fig. 2, the method includes the following steps:
step 201, decompressing a plurality of resource files in a preset format, and obtaining at least text data corresponding to the resource files in each preset format and video data in a document mode.
In this embodiment, the resource files in the preset format may be resource files in the ccr format as described above. For example, if there are 3 resource files in the preset format, namely the resource files of video A, video B, and video C, the 3 resource files are decompressed respectively to obtain the text data and document-mode video data corresponding to video A, the text data and document-mode video data corresponding to video B, and the text data and document-mode video data corresponding to video C.
Step 202, merging the video data corresponding to each preset resource file according to a specified sequence.
In this embodiment, the video data corresponding to the 3 resource files may be merged first, and the merging order may be adjusted according to actual requirements; for example, if the merging order is B-C-A, the video data corresponding to video B, video C, and video A may be merged in that order by a multimedia processing tool, for example through an ffmpeg command line.
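One possible realization of the ffmpeg command-line merge mentioned above; stream copy assumes all inputs share compatible codec parameters, which is an assumption of this sketch.

    import subprocess

    def merge_videos(video_paths, out_path: str):
        """Merge the video data of several resource files in a specified order.

        Uses the ffmpeg concat demuxer as one possible realization of the
        ffmpeg command line mentioned above; stream copy assumes all inputs
        share compatible codec parameters.
        """
        with open("concat_list.txt", "w") as f:
            for p in video_paths:            # the list order defines B-C-A, etc.
                f.write(f"file '{p}'\n")
        subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                        "-i", "concat_list.txt", "-c", "copy", out_path],
                       check=True)

    # e.g. merge_videos(["video_b.mp4", "video_c.mp4", "video_a.mp4"], "merged.mp4")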
Step 203, updating the time stamp of the text data according to the time length of the merged video and the specified sequence.
In this embodiment, the timestamps of the text data may be updated according to the time lengths of the merged videos within the merged video and the specified order. For example, if the specified order is B-C-A, the timestamps of the text data corresponding to video B, video C, and video A may be updated according to the order B-C-A and the time lengths of video B, video C, and video A within the merged video. After the video data corresponding to video B, video C, and video A are merged, the merged video duration can be obtained, and further the timestamps or durations corresponding to video B, video C, and video A can be obtained. Illustratively, if the total duration of the merged video is 28 min, the timestamp range of video B is 0-10 min, that of video C is 10-18 min, and that of video A is 18-28 min, then the timestamps of the text data corresponding to video B may be updated to 0-10 min, those corresponding to video C to 10-18 min, and those corresponding to video A to 18-28 min.
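The timestamp update can be sketched as an offset by the cumulative durations of the preceding videos; the "ts" field name is hypothetical.

    def update_timestamps(text_data_per_video, durations_s):
        """Update the timestamps of each video's text data after merging.

        Both arguments are given in the merge order (e.g. B, C, A); each
        text-data entry is assumed to carry a numeric "ts" field in seconds.
        Every entry is shifted by the total duration of the videos that precede
        it in the merged video.
        """
        merged_text = []
        offset = 0.0
        for entries, duration in zip(text_data_per_video, durations_s):
            for item in entries:
                merged_text.append({**item, "ts": item["ts"] + offset})
            offset += duration               # e.g. 0-10 min, 10-18 min, 18-28 min
        return merged_text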
Step 204, compressing the text data after updating the timestamp and the merged video data to obtain a merged resource file in the preset format.
In this embodiment, the text data corresponding to video B, video C, and video A after the timestamps are updated and the merged video data are compressed, for example by the zip algorithm, so as to obtain a resource file in the ccr format after the videos are merged.
The data processing method provided by the embodiment can realize the combination of resource files in ccr format.
An embodiment of the present application provides another data processing method, and in particular a method for converting a resource file into a video file. As shown in fig. 3, the method includes the following steps:
Step 311, reading the video data corresponding to the resource file in the preset format.
In this embodiment, the resource file in the preset format may be the obtained clipped resource file in the ccr format, the obtained merged resource file in the ccr format, or another resource file in the ccr format, which is not limited in this embodiment.
In this embodiment, a video frame of the video data in the ccr resource file may be read, and the read video frame is used as the current video frame, that is, the current video frame being processed.
Step 312, acquiring text data corresponding to the current video frame according to the timestamp of the current video frame in the video data.
After reading a video frame, the text data corresponding to its timestamp may be obtained; for example, if the timestamp of the current video frame is 5 min 23 s 28 ms, the text data corresponding to 5 min 23 s 28 ms is obtained.
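A sketch of this per-frame lookup, assuming the text data is sorted by a numeric "ts" field (hypothetical name) and that entries falling within one frame period belong to the current frame; the exact matching rule depends on the data semantics.

    import bisect

    def text_for_frame(text_data, frame_ts: float, fps: float = 25.0):
        """Return the text-data entries that correspond to the current frame.

        Assumes `text_data` is sorted by a numeric "ts" field (seconds) and
        that entries whose timestamp falls within one frame period belong to
        the current frame.
        """
        keys = [item["ts"] for item in text_data]
        lo = bisect.bisect_left(keys, frame_ts)
        hi = bisect.bisect_right(keys, frame_ts + 1.0 / fps)
        return text_data[lo:hi]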
Step 313, splicing and rendering the current video frame and the text data corresponding to the current video frame to obtain video picture data.
In this embodiment, any appropriate manner may be used to splice and render the current video frame with its corresponding text data, so as to obtain video picture data capable of representing a complete picture of the video. The splicing and rendering can be performed by OpenGL, for example.
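The application performs this splicing and rendering with OpenGL; as a simplified stand-in, the following sketch overlays the text onto the frame with Pillow (an assumed dependency), using hypothetical field names for the text entries.

    from PIL import Image, ImageDraw

    def render_picture(frame_image: Image.Image, texts):
        """Splice and render the current video frame with its text data.

        The application performs this step with OpenGL; this sketch overlays
        the text onto the frame with Pillow instead. `texts` is assumed to be a
        list of dicts with "content", "x" and "y" fields (hypothetical names).
        """
        picture = frame_image.copy()
        draw = ImageDraw.Draw(picture)
        for t in texts:
            draw.text((t["x"], t["y"]), t["content"], fill="white")
        return picture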
Step 314, encoding the video picture data through a multimedia processing tool to generate a video file.
In this embodiment, the rendered video picture data may be transmitted to an ffmpeg encoder for encoding, so as to obtain a video file in MP4 format. The MP4 video file can then be exported according to the output path and the output width, height and code rate set by the user. It should be noted that the MP4 format is only an example; other common video formats may also be used.
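A sketch of handing rendered picture data to ffmpeg for MP4 encoding; the raw-RGB pipe and the encoder settings are assumptions of this sketch.

    import subprocess

    def encode_mp4(pictures, width: int, height: int, fps: int, out_path: str):
        """Encode rendered video picture data into an MP4 file with ffmpeg.

        `pictures` is assumed to be an iterable of raw RGB frames (bytes of
        size width * height * 3); width, height and frame rate mirror the
        user-set export parameters mentioned above.
        """
        proc = subprocess.Popen(
            ["ffmpeg", "-y",
             "-f", "rawvideo", "-pix_fmt", "rgb24",
             "-s", f"{width}x{height}", "-r", str(fps),
             "-i", "-",                      # read raw frames from stdin
             "-c:v", "libx264", "-pix_fmt", "yuv420p",
             out_path],
            stdin=subprocess.PIPE,
        )
        for frame in pictures:
            proc.stdin.write(frame)
        proc.stdin.close()
        proc.wait()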
According to the embodiment, the resource file in the ccr format can be converted into the video file in the common format, and can be exported for being watched by the user.
Based on the same inventive concept, an embodiment of the present application further provides a data processing apparatus, as shown in fig. 4, including:
the decompression module 40 is configured to decompress a data resource file to be processed in a preset format to obtain a decompressed resource file, where the decompressed resource file includes text data and video data in a document mode, and the text data corresponds to content of the video data;
a video data obtaining module 41, configured to obtain the video data according to the input clip start time point and clip end time point, so as to obtain video data to be compressed;
and the compression module 42 is configured to obtain the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compress the obtained text data and the video data to be compressed to obtain a resource file in a preset format after being clipped.
In one embodiment, the apparatus further comprises:
and the detection module is used for detecting whether the video frame corresponding to the input editing starting time point is a key frame or not if the single-frame playing time length of the video frame in the video data does not exceed the preset time length.
In an embodiment, the video data obtaining module 41 is specifically configured to, if a video frame corresponding to a clip start time point is not a key frame, detect from the clip start time point forward, obtain a key frame before the clip start time point as a first key frame, and detect from the clip start time point backward, obtain a key frame after the clip start time point as a second key frame; decoding and encoding video frame data between the first key frame and the second key frame to obtain first video data; copying video frame data between the second key frame and the video frame corresponding to the clipping ending time point to obtain second video data; and combining the first video data and the second video data to obtain video data to be compressed.
In an embodiment, the video data obtaining module 41 is specifically configured to copy video frame data between the clip start time point and the clip end time point to obtain video data to be compressed if a video frame corresponding to the clip start time point is a key frame.
In one embodiment, there are a plurality of resource files in the preset format, and the apparatus further includes: a merging module and an updating module;
the decompression module 40 is further configured to decompress the resource files in the multiple preset formats, so as to obtain at least text data corresponding to the resource files in the multiple preset formats and video data in a document mode;
the merging module is used for merging the video data corresponding to each preset resource file according to a specified sequence;
the updating module is used for updating the timestamp of the text data according to the time length of the merged video and the designated sequence;
the compressing module 42 is further configured to compress the text data with the updated timestamp and the merged video to obtain a merged resource file in a preset format.
In one embodiment, the apparatus further comprises:
the reading module is used for reading the video data corresponding to the resource file with a preset format;
the text data acquisition module is further used for acquiring text data corresponding to the current video frame according to the timestamp of the current video frame in the video data;
the rendering module is used for splicing and rendering the current video frame and the text data corresponding to the current video frame to obtain video picture data;
and the generating module is used for coding the video picture data through a multimedia processing tool to generate a video file.
The data processing device provided by the above embodiment can realize the clipping and merging of ccr resource files, and can convert a ccr resource file into a video in a common format, thereby overcoming the problem that existing video clipping tools cannot clip resource files of this type.
Based on the data processing method described in the foregoing embodiments, an embodiment of the present application provides an electronic device, configured to execute the data processing method described in any of the foregoing embodiments. As shown in fig. 5, the electronic device provided in the embodiment of the present application includes: a processor (processor) 402; and a memory (memory) 404 configured to store computer-executable instructions that, when executed, cause the processor 402 to implement the methods described in any of the embodiments of the present application.
Optionally, the electronic device may further include a bus 406 and a communication interface (Communications Interface) 408, wherein the processor 402, the communication interface 408, and the memory 404 are configured to communicate with each other via the communication bus 406.
A communication interface 408 for communicating with other devices.
The processor 402 may be a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which can be the same type of processor, such as one or more CPUs; or may be different types of processors, such as one or more CPUs and one or more ASICs.
The memory 404 may comprise a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Based on the data processing method described in the above embodiments, an embodiment of the present application provides a storage medium storing computer-executable instructions that, when executed, implement the method described in any embodiment of the present application.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) a mobile communication device: such devices are characterized by mobile communications capabilities and are primarily targeted at providing voice, data communications. Such terminals include: smart phones (e.g., iphones), multimedia phones, functional phones, and low-end phones, among others.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as ipads.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio, video players (e.g., ipods), handheld game consoles, electronic books, and smart toys and portable car navigation devices.
(4) And other electronic equipment with data interaction function.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
The method illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A method of data processing, the method comprising:
decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
acquiring the video data according to the input clip starting time point and the clip ending time point to obtain video data to be compressed;
and acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain a resource file in a preset format after being edited.
2. The method according to claim 1, further comprising, after decompressing the data resource file to be processed in the preset format to obtain a decompressed resource file:
and if the single-frame playing time length of the video frames in the video data does not exceed the preset time length, detecting whether the video frames corresponding to the input editing starting time points are key frames or not.
3. The method according to claim 2, wherein the obtaining the video data according to the input clip start time point and the clip end time point to obtain the video data to be compressed comprises:
if the video frame corresponding to the clipping starting time point is not a key frame, detecting from the clipping starting time point forward to obtain a previous key frame of the clipping starting time point as a first key frame, and detecting from the clipping starting time point backward to obtain a next key frame of the clipping starting time point as a second key frame;
after the video frame data between the first key frame and the second key frame are decoded, encoding is carried out from the starting time point of the cutting, and first video data are obtained; copying video frame data between the second key frame and the video frame corresponding to the clipping ending time point to obtain second video data;
and combining the first video data and the second video data to obtain video data to be compressed.
4. The method according to claim 2, wherein the obtaining the video data according to the input clip start time point and the clip end time point to obtain the video data to be compressed comprises:
and if the video frame corresponding to the clipping starting time point is a key frame, copying video frame data between the clipping starting time point and the clipping ending time point to obtain video data to be compressed.
5. The method according to claim 1, wherein there are a plurality of resource files in the preset format, the method further comprising:
decompressing the resource files in the preset formats to obtain at least text data corresponding to the resource files in the preset formats and video data in a document mode;
merging the video data corresponding to each preset resource file according to a specified sequence;
updating the time stamp of the text data according to the time length of the merged video and the designated sequence;
and compressing the text data after updating the timestamp and the merged video data to obtain a merged resource file with a preset format.
6. The method according to claim 1 or 5, characterized in that the method further comprises:
reading the video data corresponding to the resource file with a preset format;
acquiring text data corresponding to a current video frame according to a timestamp of the current video frame in the video data;
splicing and rendering the current video frame and the text data corresponding to the current video frame to obtain video picture data;
and coding the video picture data through a multimedia processing tool to generate a video file.
7. A data processing apparatus, characterized in that the apparatus comprises:
the decompression module is used for decompressing a data resource file to be processed in a preset format to obtain a decompressed resource file, wherein the decompressed resource file comprises text data and video data in a document mode, and the text data corresponds to the content of the video data;
the video data acquisition module is used for acquiring the video data according to the input clip start time point and the clip end time point to obtain video data to be compressed;
and the compression module is used for acquiring the text data corresponding to the timestamp according to the timestamp of the video data to be compressed, and compressing the acquired text data and the video data to be compressed to obtain the resource file in the preset format after being edited.
8. The apparatus of claim 7, further comprising:
and the detection module is used for detecting whether the video frame corresponding to the input editing starting time point is a key frame or not if the single-frame playing time length of the video frame in the video data does not exceed the preset time length.
9. The apparatus according to claim 8, wherein the video data obtaining module is specifically configured to, if the video frame corresponding to the clip start time point is not a key frame, detect from the clip start time point forward to obtain a key frame before the clip start time point as a first key frame, and detect from the clip start time point backward to obtain a key frame after the clip start time point as a second key frame; decoding and encoding video frame data between the first key frame and the second key frame to obtain first video data; copying video frame data between the second key frame and the video frame corresponding to the clipping ending time point to obtain second video data; and combining the first video data and the second video data to obtain video data to be compressed.
10. The apparatus of claim 8, wherein the video data obtaining module is specifically configured to copy video frame data between the clip start time point and the clip end time point to obtain the video data to be compressed if the video frame corresponding to the clip start time point is a key frame.
CN202011585269.2A 2020-12-28 2020-12-28 Data processing method and device, electronic equipment and storage medium Active CN112738564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011585269.2A CN112738564B (en) 2020-12-28 2020-12-28 Data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011585269.2A CN112738564B (en) 2020-12-28 2020-12-28 Data processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112738564A true CN112738564A (en) 2021-04-30
CN112738564B CN112738564B (en) 2023-04-14

Family

ID=75606978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011585269.2A Active CN112738564B (en) 2020-12-28 2020-12-28 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112738564B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5801685A * 1996-04-08 1998-09-01 Tektronix, Inc. Automatic editing of recorded video elements synchronized with a script text read or displayed
CN102262888A (en) * 2010-05-31 2011-11-30 苏州闻道网络科技有限公司 Video file splitting method
CN107295402A (en) * 2017-08-11 2017-10-24 成都品果科技有限公司 Video encoding/decoding method and device
CN111131884A (en) * 2020-01-19 2020-05-08 腾讯科技(深圳)有限公司 Video clipping method, related device, equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114900713A (en) * 2022-07-13 2022-08-12 深圳市必提教育科技有限公司 Video clip processing method and system

Also Published As

Publication number Publication date
CN112738564B (en) 2023-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant