CN109451248B - Video data processing method and device, terminal and storage medium - Google Patents


Publication number
CN109451248B
CN109451248B (application CN201811404559.5A)
Authority
CN
China
Prior art keywords: target, special effect, frame, processing, mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811404559.5A
Other languages
Chinese (zh)
Other versions
CN109451248A (en)
Inventor
刘春宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201811404559.5A
Publication of CN109451248A
Application granted
Publication of CN109451248B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a video data processing method, apparatus, terminal, and storage medium, and belongs to the technical field of information processing. In the embodiments of the invention, a target video frame sequence and a corresponding target map frame sequence are generated based on a target special effect type, so that the number of each video frame in the target video frame sequence corresponds to the number of each map frame in the target map frame sequence. Each video frame and its corresponding map frame can therefore be played synchronously, the map frames corresponding to the target special effect type also exhibit the speed-change effect, and the overall speed-change effect of the target video data is improved.

Description

Video data processing method and device, terminal and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for processing video data.
Background
With the continuous development of information processing technology, users' expectations for video effects keep rising. To meet demands for special video effects, or to enhance the cinematic quality of films, video data needs special effect processing. For example, when recording a video, fast or slow special effect processing may be applied to the video data, and a map (overlay image) may further be added to the speed-changed video data to produce the desired video effect.
At present, video data is commonly processed as follows: according to a target special effect processing instruction, that is, fast special effect processing or slow special effect processing, frame-dropping or frame-adding is performed on a target video frame sequence, so that compared with normal processing the duration of the resulting video changes while the frame rate stays unchanged. For example, if the video frames generated in the first second of normal processing are numbered 0, 1, and 2, then after fast special effect processing the frames generated in the first second are numbered 0, 2, and 4, and after slow special effect processing they are numbered 0, 0C, and 1, where 0C is a copy of the frame numbered 0. Furthermore, when other special effects are added to the video data, a map must be added to each video frame. In current practice, the map frame numbers at the same video frame position are identical for different variable-speed special effect types; that is, when special effect processing is performed on a target video frame sequence, no corresponding processing is performed on the map frame sequence associated with it.
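The numbering behaviour described above can be reproduced in a few lines of Python. This is an illustrative sketch only; the function names are not from the patent:

```python
# Sketch of the numbering example: 2x fast keeps every other frame,
# 1/2x slow follows each frame n with a copy labelled nC.

def fast_2x(frames):
    """Double-speed fast effect: keep every other frame (0, 2, 4, ...)."""
    return frames[::2]

def slow_half(frames):
    """Half-speed slow effect: follow each frame n with a copy labelled nC."""
    out = []
    for n in frames:
        out.extend([str(n), f"{n}C"])
    return out

normal = [0, 1, 2, 3, 4, 5]      # frames produced per second at 6 fps
print(fast_2x(normal)[:3])       # first second after 2x: [0, 2, 4]
print(slow_half(normal)[:3])     # first second after 1/2x: ['0', '0C', '1']
```

In both cases the frame rate is untouched; only which frames occupy each one-second window changes.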
With this processing method, during the frame-dropping or frame-adding of the target video frame sequence, the resulting map frame sequence is the same for all variable-speed special effect types. After fast or slow special effect processing, the map therefore shows no speed-change effect, which degrades the overall special effect of the processed video data.
Disclosure of Invention
The embodiments of the invention provide a video data processing method, apparatus, terminal, and storage medium, which can solve the problem that the map exhibits no speed-change effect when variable-speed special effect processing is applied to video data. The technical scheme is as follows:
in one aspect, a method for processing video data is provided, and the method includes:
receiving a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for carrying out variable speed processing on video data;
acquiring a target video frame sequence based on the target special effect type;
based on the target special effect type, performing processing of the target special effect type on a plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, wherein the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence;
generating target video data based on the sequence of target video frames and the sequence of target map frames.
In one possible implementation, the performing, based on the target special effect type, the processing of the target special effect type on the plurality of map frames corresponding to the target special effect type to generate the target map frame sequence includes:
when the target special effect type is a fast special effect, performing frame-dropping processing on a plurality of map frames corresponding to the fast special effect to generate a first target map frame sequence, wherein the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In one possible implementation, when the target special effect type is a fast special effect, the performing frame-dropping processing on the plurality of map frames corresponding to the fast special effect to generate the first target map frame sequence includes:
when the target special effect type is a fast special effect, for each retained map frame among the plurality of map frames corresponding to the fast special effect, discarding a set number of map frames following it, to obtain the remaining map frames;
generating the first target map frame sequence based on the remaining map frames.
In one possible implementation, the performing, based on the target special effect type, the processing of the target special effect type on the plurality of map frames corresponding to the target special effect type to generate the target map frame sequence includes:
when the target special effect type is a slow special effect, performing frame-adding processing on a plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In one possible implementation, when the target special effect type is a slow special effect, the performing frame-adding processing on the plurality of map frames corresponding to the slow special effect to generate the second target map frame sequence includes:
when the target special effect type is a slow special effect, copying each map frame among the plurality of map frames corresponding to the slow special effect to generate copies of those map frames;
numbering the copies of the plurality of map frames corresponding to the slow special effect;
generating the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and their numbered copies.
In one possible implementation, the generating target video data based on the sequence of target video frames and the sequence of target map frames comprises:
and synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
In one aspect, an apparatus for processing video data is provided, the apparatus comprising:
a receiving module, configured to receive a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for performing variable speed processing on video data;
an acquisition module, configured to acquire a target video frame sequence based on the target special effect type;
a processing module, configured to perform, based on the target special effect type, processing of the target special effect type on a plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, wherein the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence;
a generating module for generating target video data based on the target video frame sequence and the target map frame sequence.
In one possible implementation, the processing module includes:
a first processing unit, configured to, when the target special effect type is a fast special effect, perform frame-dropping processing on a plurality of map frames corresponding to the fast special effect to generate a first target map frame sequence, wherein the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In one possible implementation, the first processing unit is configured to:
when the target special effect type is a fast special effect, for each retained map frame among the plurality of map frames corresponding to the fast special effect, discard a set number of map frames following it, to obtain the remaining map frames;
generate the first target map frame sequence based on the remaining map frames.
In one possible implementation, the processing module includes:
a second processing unit, configured to, when the target special effect type is a slow special effect, perform frame-adding processing on a plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In one possible implementation, the second processing unit is configured to:
when the target special effect type is a slow special effect, copy each map frame among the plurality of map frames corresponding to the slow special effect to generate copies of those map frames;
number the copies of the plurality of map frames corresponding to the slow special effect;
generate the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and their numbered copies.
In one possible implementation, the generating module is configured to:
and synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
In one aspect, a terminal is provided. The terminal includes a processor and a memory, the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed by the above video data processing method.
In one aspect, a server is provided. The server includes a processor and a memory, the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations performed by the above video data processing method.
In one aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the above video data processing method.
In the embodiments of the invention, a target video frame sequence and a corresponding target map frame sequence are generated based on the target special effect type, so that the number of each video frame in the target video frame sequence corresponds to the number of each map frame in the target map frame sequence. Each video frame and its corresponding map frame can therefore be played synchronously, the map frames corresponding to the target special effect type also exhibit the speed-change effect, and the overall speed-change effect of the target video data is improved.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of a terminal according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for processing video data according to an embodiment of the present invention, where the method for processing video data can be applied to any electronic device. Referring to fig. 1, the embodiment includes:
101. Receive a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for performing variable speed processing on the video data.
102. Acquire a target video frame sequence based on the target special effect type.
103. Based on the target special effect type, perform processing of the target special effect type on a plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, wherein the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
104. Generate target video data based on the target video frame sequence and the target map frame sequence.
In some embodiments, the performing, based on the target special effect type, the processing of the target special effect type on the plurality of map frames corresponding to the target special effect type to generate the target map frame sequence includes:
when the target special effect type is a fast special effect, performing frame-dropping processing on a plurality of map frames corresponding to the fast special effect to generate a first target map frame sequence, wherein the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In some embodiments, when the target special effect type is a fast special effect, the performing frame-dropping processing on the plurality of map frames corresponding to the fast special effect to generate the first target map frame sequence includes:
when the target special effect type is a fast special effect, for each retained map frame among the plurality of map frames corresponding to the fast special effect, discarding a set number of map frames following it, to obtain the remaining map frames;
generating the first target map frame sequence based on the remaining map frames.
In some embodiments, the performing, based on the target special effect type, the processing of the target special effect type on the plurality of map frames corresponding to the target special effect type to generate the target map frame sequence includes:
when the target special effect type is a slow special effect, performing frame-adding processing on a plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In some embodiments, when the target special effect type is a slow special effect, the performing frame-adding processing on the plurality of map frames corresponding to the slow special effect to generate the second target map frame sequence includes:
when the target special effect type is a slow special effect, copying each map frame among the plurality of map frames corresponding to the slow special effect to generate copies of those map frames;
numbering the copies of the plurality of map frames corresponding to the slow special effect;
generating the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and their numbered copies.
In some embodiments, the generating target video data based on the sequence of target video frames and the sequence of target map frames comprises:
and synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
In the embodiments of the invention, a target video frame sequence and a corresponding target map frame sequence are generated based on the target special effect type, so that the number of each video frame in the target video frame sequence corresponds to the number of each map frame in the target map frame sequence. Each video frame and its corresponding map frame can therefore be played synchronously, the map frames corresponding to the target special effect type also exhibit the speed-change effect, and the overall speed-change effect of the target video data is improved.
Any combination of the above optional technical solutions may form an optional embodiment of the present invention, and details are not repeated here.
Fig. 2 is a flowchart of a method for processing video data according to an embodiment of the present invention, where the method for processing video data can be applied to any electronic device. Referring to fig. 2, the embodiment includes:
201. The electronic device receives a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for performing variable speed processing on video data.
In the embodiment of the present invention, the electronic device has a storage function and a video data processing function. The target special effect processing instruction is an instruction for applying the corresponding special effect type to video data; it may be triggered when the user clicks a corresponding control on the electronic device, the control being an operable control associated with the target special effect type. The electronic device can provide at least one operable control for the different target special effect types. The target special effect type determines which speed change the electronic device applies to the video data according to the user's needs, and includes a fast special effect and a slow special effect. The fast special effect causes the video data to be played back at an accelerated speed; correspondingly, the slow special effect causes it to be played back at a decelerated speed.
The electronic device may be a terminal or a server. That is, the video data may be processed with the corresponding target special effect type by an application program with a video data processing function on the terminal. Of course, the terminal may also upload the acquired video data to a server, which then performs the corresponding target special effect type processing on it.
Before the target special effect type processing, the video data may include a plurality of video frames, which may be original images to which no other special effect has been added. Alternatively, the video data may include both the plurality of unprocessed video frames and a map frame corresponding to each video frame, where the map frame is used to add another special effect, such as a filter, to the video frame. The embodiment of the present invention does not limit the specific content of the video data.
It should be noted that, before recording the video data, the electronic device may receive a target special effect processing instruction triggered by the user, and then, based on the target special effect type carried by the instruction, record a plurality of video frames and a plurality of map frames that meet the requirements of that type.
Of course, the electronic device may also receive a user-triggered target special effect processing instruction during the recording of the video data; that is, while recording normally, the electronic device may receive the instruction and then record the corresponding plurality of video frames and plurality of map frames according to it.
In addition, the electronic device may first record normally to obtain a plurality of normal video frames and a plurality of normal map frames, then receive a user-triggered target special effect processing instruction, and perform the corresponding variable speed processing on the recorded video data based on the target special effect type carried by the instruction. The embodiment of the present invention does not limit the time at which the electronic device receives the target special effect processing instruction.
202. The electronic device obtains a target video frame sequence based on the target special effect type.
In the embodiment of the present invention, the target video frame sequence is a sequence composed of target video frames obtained by performing target special effect type processing on a plurality of video frames in the video data.
Specifically, take the case where the electronic device acquires the target video frame sequence while recording the video. When the target special effect type is a fast special effect, the electronic device may perform frame-dropping processing on the recorded video frames. That is, each time a video frame is recorded, the electronic device discards a set number of subsequent video frames and then records the frame after the discarded ones, so that compared with normal recording, a set number of normal video frames are skipped between every two frames recorded with the fast special effect. For example, suppose normally recorded video data contains twelve video frames numbered 0 through 11 at a frame rate of 6 frames per second, so the frames recorded in the first second are numbered 0, 1, 2, 3, 4, and 5. Taking double-speed processing of these twelve frames as an example, the electronic device discards the frames numbered 1, 3, 5, 7, 9, and 11; the sequence of the remaining frames, numbered 0, 2, 4, 6, 8, and 10, is the target video frame sequence.
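The frame-dropping step in this example can be sketched as follows. This is a minimal illustration, assuming the drop interval is expressed as a "set number" of frames discarded after each retained frame; the names are illustrative:

```python
def drop_frames(frames, set_number=1):
    """Keep one frame, then discard `set_number` frames, and repeat.
    With set_number=1 this is the double-speed example above."""
    return frames[::set_number + 1]

twelve = list(range(12))
print(drop_frames(twelve))       # [0, 2, 4, 6, 8, 10]
print(drop_frames(twelve, 2))    # triple speed: [0, 3, 6, 9]
```

The same rule works for any integer speed multiple by choosing `set_number` as the multiple minus one.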
After the double-speed processing, the frame rate remains unchanged at 6 frames per second, while the timestamps of all video frames except the first change. That is, after double-speed processing, the timestamp of the frame numbered 2n (excluding the frame numbered 0) becomes the timestamp that the frame numbered n had before the processing, where n is a natural number. The change of timestamps in turn changes the playing duration of the video: the twelve frames normally play for two seconds, whereas after double-speed processing the six frames obtained by the electronic device play for half the normal duration.
Of course, when the electronic device performs fast processing at other speed multiples on the recorded video frames, corresponding frame-dropping processing may be performed in other manners.
When the target special effect type is a slow special effect, the electronic device may perform frame-adding processing on the recorded video frames. That is, each time a video frame is recorded, the electronic device copies it to obtain a set number of copies and numbers each copy.
Based on this process, the target video frame sequence corresponding to the slow special effect contains the video frames recorded by the electronic device together with the numbered copies of each frame; compared with the sequence obtained by normal recording, a set number of copies of the preceding normal frame are inserted between every two normal frames. Taking the six frames numbered 0, 1, 2, 3, 4, and 5 recorded normally in one second as an example, half-speed processing copies each frame once and numbers each copy, so twelve frames are obtained from the six. The twelve frames may be numbered 0, 0C, 1, 1C, 2, 2C, 3, 3C, 4, 4C, 5, 5C, where each frame labelled nC is a copy of the preceding frame numbered n. This sequence is the target video frame sequence.
Similar to the double-speed processing described above, the frame rate after half-speed processing is also kept constant at 6 frames per second; that is, after half-speed processing, the frames acquired by the electronic device in the first second are numbered 0, 0C, 1, 1C, 2, 2C. The timestamps of the frames obtained after half-speed processing change: the new timestamp of the frame numbered n becomes the timestamp that the frame numbered 2n had under normal recording, where n is a natural number (the timestamp of the frame numbered 0 is unchanged). The overall playing duration therefore changes as well: the six frames normally play for one second, whereas after half-speed processing the resulting twelve frames play for two seconds, twice the normal duration. Of course, when the electronic device performs slow processing at other speed multiples on the recorded video frames, corresponding frame-adding processing may be performed in other manners; the embodiment of the present invention does not limit the specific manner of frame-adding.
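The half-speed case, combining the copy-and-number step with the resulting timestamps, can be sketched as follows (assumptions as in the example: 6 frames per second, labels like '0C' marking copies; names are illustrative):

```python
def slow_half(frames, fps=6):
    """Follow each frame n with a copy nC; the k-th output frame is stamped
    k/fps, so original frame n now appears at time 2n/fps."""
    labels = []
    for n in frames:
        labels.extend([str(n), f"{n}C"])
    timestamps = [k / fps for k in range(len(labels))]
    return labels, timestamps

labels, ts = slow_half(range(6))
print(labels[:6])   # first second: ['0', '0C', '1', '1C', '2', '2C']
print(ts[-1])       # last of the 12 frames sits just under the 2 s mark
```

At an unchanged 6 frames per second, twelve output frames occupy two seconds, doubling the playing duration exactly as described.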
In addition, when the electronic device performs normal recording first and then performs target special effect type processing on the video frame obtained by normal recording, the processing mode of the video frame obtained by normal recording is the same as the processing mode of the video frame in the recording process, and the details of the embodiment of the present invention are not repeated here.
203. When the target special effect type is a fast special effect, for the plurality of map frames corresponding to the fast special effect, each time the electronic device retains a map frame it discards a set number of subsequent map frames, thereby obtaining the remaining map frames.
In the embodiment of the present invention, based on the plurality of video frames, that is, the plurality of original images, the electronic device may add another special effect to the plurality of original images while performing the target special effect type processing on the plurality of video frames. For example, the electronic device may add a special effect such as a filter to the plurality of original images, where the filter special effect is obtained by overlaying pictures on the original images so that each video frame and its corresponding map frame form a composite image. Of course, the electronic device may also process the map frames corresponding to the plurality of video frames while performing the target special effect type processing on the plurality of video frames, and the processing order of the plurality of video frames and the corresponding map frames is not limited in this embodiment of the present invention.
Similar to the above process of acquiring the target video frame sequence, during video recording the electronic device may record a map frame associated with each video frame that meets the target special effect type requirement. Of course, the electronic device may also first record a plurality of normal video frames and the plurality of map frames associated with them, and then perform the target special effect type processing on the recorded normal video frames and map frames, which is not limited here.
Specifically, when the target special effect type is the fast special effect, the electronic device processes the plurality of map frames in the same way as it processes the plurality of video frames in step 202. That is, for the plurality of map frames corresponding to the fast special effect, each time the electronic device obtains one map frame, it discards a set number of subsequent map frames and then obtains the next map frame after the discarded ones, so that the electronic device finally obtains the remaining map frames. The set number may be set based on the speed of the fast special effect processing applied to the map frames; for example, when the processing speed is double speed, the set number may be one.
Taking the twelve map frames corresponding to the twelve video frames numbered 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11 as an example, after the twelve map frames are subjected to double-speed processing in the same manner as in step 202, the electronic device obtains the six map frames numbered 0, 2, 4, 6, 8, and 10; these six map frames are the remaining map frames obtained after the fast special effect processing of the twelve map frames.
The frame rate of the six map frames remains unchanged, while the total playing duration changes: the playback frame rate both before and after the double-speed processing of the twelve map frames is 6 frames/second, the playing duration of the twelve map frames before processing is two seconds, and the playing duration after processing is one second.
Of course, the same process is applied to the fast special effect processing of other numbers of mapping frames and other multiple speeds, and the details of the embodiment of the present invention are not repeated here.
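The frame-dropping rule for the fast special effect at double speed amounts to keeping every second frame. A minimal sketch under that assumption (Python slicing is this illustration's mechanism, not the patent's):

```python
def fast_effect_sequence(map_frame_numbers, speed=2):
    """Drop frames for a fast special effect: after each kept map
    frame, discard speed - 1 frames, i.e. keep every speed-th frame."""
    return map_frame_numbers[::speed]

print(fast_effect_sequence(list(range(12))))
# -> [0, 2, 4, 6, 8, 10]
```

Other multiples follow the same pattern, e.g. `speed=3` keeps every third frame, matching the note that the set number of discarded frames depends on the processing speed.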
205. The electronic device generates a first target map frame sequence based on the remaining map frames, where the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In this embodiment of the present invention, based on the remaining map frames obtained in step 203, the electronic device may merge the remaining map frames to generate a first target map frame sequence, where the first target map frame sequence corresponds to the target video frame sequence obtained after performing the fast special effect processing on the plurality of video frames in step 202.
Specifically, the number of each map frame in the first target map frame sequence may be the same as the number of each video frame in the target video frame sequence obtained after the fast special effect processing. Taking the video frame numbers 0, 2, 4, 6, 8, and 10 obtained after the fast special effect processing in step 202 as an example, the numbers of the map frames in the generated first target map frame sequence may also be 0, 2, 4, 6, 8, and 10, where frames at the same position in the first target map frame sequence and the target video frame sequence have the same number. Of course, in other embodiments, each map frame of the first target map frame sequence and each video frame of the target video frame sequence may be identified by other numbers, as long as the numbers at the same positions of the two sequences are the same; the specific form of the numbers is not limited in this embodiment of the present invention.
The above steps 203 to 204 are the process in which, when the target special effect type is the fast special effect, the electronic device performs frame-dropping processing on the plurality of map frames corresponding to the fast special effect to generate the first target map frame sequence; the process is described taking, as an example, discarding a set number of map frames after each retained map frame. Of course, in other embodiments, other forms of frame-dropping processing may be performed on the plurality of map frames, which is not limited here in the embodiment of the present invention.
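The alignment requirement of steps 204/205 can be stated as a simple invariant: the two sequences have the same length and the frame numbers at each position match. A hypothetical check (function name assumed for illustration):

```python
def sequences_aligned(video_frame_numbers, map_frame_numbers):
    # Positions must pair one-to-one and carry matching numbers, so
    # the renderer can later pair each video frame with its map frame.
    return (len(video_frame_numbers) == len(map_frame_numbers)
            and all(v == m for v, m in zip(video_frame_numbers,
                                           map_frame_numbers)))
```

For the double-speed example, `sequences_aligned([0, 2, 4, 6, 8, 10], [0, 2, 4, 6, 8, 10])` holds, while any mismatch in length or numbering would be flagged before rendering.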
205. The electronic device synchronously renders the target video frame sequence and the first target map frame sequence to generate first target video data.
In the embodiment of the present invention, the first target video data is the target video data obtained by the electronic device performing fast special effect processing on video data such as the plurality of video frames and the plurality of map frames, and based on the first target video data the electronic device can then meet a user's need to play the video data quickly through a corresponding playing function.
Specifically, the electronic device may have a rendering function that renders each video frame in the target video frame sequence corresponding to the fast special effect together with the map frame at the same position in the first target map frame sequence, thereby achieving synchronous rendering and finally generating the first target video data. The rendering function fits each video frame more closely to its corresponding map frame, so that the video frames with map frames added look more natural and discontinuity between video frames and map frames is avoided.
The above steps 203 to 205 are the process in which, when the target special effect type is the fast special effect, the electronic device generates the first target video data based on the plurality of map frames corresponding to the fast special effect; the process is described taking double-speed fast special effect processing as an example. Of course, in other embodiments, the electronic device may also perform fast special effect processing at other speeds on the plurality of map frames, which is not limited here in the embodiments of the present invention.
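Synchronous rendering pairs frames by position. The sketch below assumes a caller-supplied `composite` callback (for example, alpha-blending a map frame over a video frame), since the patent specifies the pairing but not the blending itself:

```python
def render_synchronized(video_frames, map_frames, composite):
    # Pair each video frame with the map frame at the same position
    # and composite the pair into one output frame of the target video.
    if len(video_frames) != len(map_frames):
        raise ValueError("video and map frame sequences must align")
    return [composite(v, m) for v, m in zip(video_frames, map_frames)]

# With a placeholder compositor that merely records the pairing:
frames = render_synchronized([0, 2, 4], ["0m", "2m", "4m"],
                             lambda v, m: (v, m))
# -> [(0, '0m'), (2, '2m'), (4, '4m')]
```

In a real renderer the callback would be a GPU blend of the two images; the length check enforces the number-alignment property established in steps 203 to 204.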
206. When the target special effect type is a slow special effect, the electronic device copies each of the plurality of map frames corresponding to the slow special effect to generate copies of the plurality of map frames corresponding to the slow special effect.
In the embodiment of the present invention, similarly to the process of performing slow special effect processing on the plurality of video frames to obtain the target video frame sequence in step 202, the slow processing of the plurality of map frames associated with those video frames can also be implemented by copying each map frame.
Specifically, taking as an example the electronic device performing slow special effect processing on the corresponding map frames while recording a video, each time the electronic device obtains one map frame, it may copy that map frame to obtain a set number of copies, where the set number may be set based on the speed of the slow special effect processing applied to the map frames. For example, when the slow special effect processing is performed on the map frames at half speed, the set number may be one. Of course, when the map frames are subjected to slow special effect processing at other speeds, the set number may take other values, and the specific value of the set number is not limited in the embodiment of the present invention.
It should be noted that, the above is a process of processing a plurality of map frames based on a target special effect type, i.e. a slow special effect, in a process of recording a video by an electronic device, and of course, the electronic device may also perform normal recording on the plurality of map frames first, and then perform a processing process similar to the above process on the plurality of recorded map frames, which is not limited in the embodiment of the present invention.
207. The electronic device numbers based on the copies of the plurality of map frames corresponding to the slow special effect.
In the embodiment of the present invention, based on the copies of the plurality of map frames obtained in step 206, and similarly to the copies of the plurality of video frames obtained in step 202, the electronic device may number the copies of the plurality of map frames so that the number of each map frame's copy corresponds to the number of each video frame's copy.
Specifically, taking the numbers 0, 1, 2, 3, 4, and 5 of the map frames normally recorded by the electronic device as an example, when slow special effect processing at half speed is performed on these six map frames, each map frame may be copied to obtain one copy per map frame. Further, the electronic device may number the copy corresponding to each map frame; for example, the copy of map frame 0 may be numbered 0C, the copy of map frame 1 numbered 1C, the copy of map frame 2 numbered 2C, the copy of map frame 3 numbered 3C, the copy of map frame 4 numbered 4C, and the copy of map frame 5 numbered 5C, so that the twelve map frames numbered 0, 0C, 1, 1C, 2, 2C, 3, 3C, 4, 4C, 5, and 5C are obtained based on the six map frames numbered 0, 1, 2, 3, 4, and 5.
Of course, in other embodiments, the electronic device may also number the copies of each map frame in other forms; that is, the number corresponding to each map frame's copy may be other numbers, letters, or the like.
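The copy-numbering step of this example is a direct mapping from each map frame's number to a "C"-suffixed label. A sketch of that convention (the suffix is the example's, the function name an assumption):

```python
def number_map_frame_copies(map_frame_numbers):
    # The copy of map frame 0 is numbered "0C", the copy of 1 is "1C",
    # and so on, matching the example's numbering convention.
    return {n: f"{n}C" for n in map_frame_numbers}

print(number_map_frame_copies([0, 1, 2, 3, 4, 5]))
# -> {0: '0C', 1: '1C', 2: '2C', 3: '3C', 4: '4C', 5: '5C'}
```

As the surrounding text notes, any other labeling scheme works as long as the copies' numbers line up with the numbers of the video frame copies.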
208. The electronic device generates the second target map frame sequence based on the multiple map frames corresponding to the slow special effect and the copies of the multiple map frames corresponding to the slow special effect after numbering, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In an embodiment of the present invention, the second target map frame sequence includes the original plurality of map frames and the numbered copies of the original plurality of map frames. Based on the numbered copies of the plurality of map frames obtained in steps 206 and 207, the electronic device may combine the original map frames and the numbered copies to generate the second target map frame sequence.
Wherein the second target map frame sequence corresponds to the target video frame sequence obtained after the slow special effect processing is performed on the plurality of video frames in step 202. Specifically, the number of each map frame in the second target map frame sequence may be the same as the number of each video frame in the target video frame sequence obtained after the slow special effect processing. Taking the video frame numbers 0, 0C, 1, 1C, 2, 2C, 3, 3C, 4, 4C, 5, and 5C obtained after the slow special effect processing in step 202 as an example, the numbers of the map frames in the second target map frame sequence may also be 0, 0C, 1, 1C, 2, 2C, 3, 3C, 4, 4C, 5, and 5C, where the numbers of the video frames and the map frames played in the first second are both 0, 0C, 1, 1C, 2, and 2C, and frames at the same position in the second target map frame sequence and the target video frame sequence have the same number.
The number of each map frame in the first target map frame sequence or the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence, so that during playback each video frame and its corresponding map frame can be played synchronously; as each video frame produces the speed-change special effect, each map frame produces the same special effect, thereby greatly improving the visual effect for the user.
The above steps 206 to 208 are the process in which, when the target special effect type is a slow special effect, the electronic device performs frame-adding processing on the plurality of map frames corresponding to the slow special effect to generate the second target map frame sequence; the process is described taking, as an example, the electronic device making a set number of copies of each map frame. Of course, in other embodiments, other forms of frame-adding processing may also be performed on the plurality of map frames, which is not limited here in the embodiments of the present invention.
209. The electronic device synchronously renders the target video frame sequence and the second target map frame sequence to generate second target video data.
In the embodiment of the present invention, the second target video data is the target video data obtained by the electronic device performing slow special effect processing on video data such as the plurality of video frames and the plurality of map frames, and based on the second target video data the electronic device can then meet a user's need to play the video data slowly through a corresponding playing function.
Specifically, similar to step 205, the electronic device may have a rendering function that renders each video frame in the target video frame sequence corresponding to the slow special effect together with the map frame at the same position in the second target map frame sequence, so that the electronic device synchronously renders the target video frame sequence and the second target map frame sequence and finally generates the second target video data.
Each video frame in the second target video data generated by this rendering process fits more naturally with its corresponding map frame, thereby improving the overall speed-change effect of the video data.
The above steps 206 to 209 are the process in which, when the target special effect type is the slow special effect, the electronic device generates the second target video data based on the plurality of map frames corresponding to the slow special effect; the process is described taking slow special effect processing at half speed as an example. Of course, in other embodiments, the electronic device may also perform slow special effect processing at other speeds on the plurality of map frames, which is not limited here in the embodiments of the present invention.
According to the embodiment of the invention, the corresponding target video frame sequence and the target map frame sequence are generated based on the target special effect type, so that the number of each video frame in the target video frame sequence corresponds to the number of each map frame in the target map frame sequence, each video frame and each corresponding map frame can be synchronously played, each map frame corresponding to the target special effect type achieves the effect of speed change, and the overall speed change effect of target video data is improved.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 3 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention. Referring to fig. 3, the apparatus includes: the device comprises a receiving module 301, an obtaining module 302, a processing module 303 and a generating module 304.
A receiving module 301, configured to receive a target special effect processing instruction, where the target special effect processing instruction carries a target special effect type, and the target special effect type is used to perform variable speed processing on video data;
an obtaining module 302, configured to obtain a target video frame sequence based on the target special effect type;
a processing module 303, configured to perform, based on the target special effect type, processing of the target special effect type on the plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, where the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence;
a generating module 304 for generating target video data based on the target video frame sequence and the target map frame sequence.
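As a non-authoritative sketch, the four modules of Fig. 3 map naturally onto a small pipeline. The class below illustrates the data flow only; the method names, the double/half speeds, and the "C" copy suffix are assumptions carried over from the method embodiment, not the patent's implementation:

```python
class VideoDataProcessor:
    """Sketch of modules 301-304: receive, obtain, process, generate."""

    def receive(self, instruction):
        # Receiving module 301: the instruction carries the effect type.
        self.effect_type = instruction["effect_type"]

    def _apply_speed(self, frame_numbers):
        if self.effect_type == "fast":   # double speed: drop every other frame
            return frame_numbers[::2]
        # slow (half speed): follow each frame with a "C"-numbered copy
        return [x for n in frame_numbers for x in (n, f"{n}C")]

    def obtain(self, video_frame_numbers):
        # Obtaining module 302: build the target video frame sequence.
        return self._apply_speed(video_frame_numbers)

    def process(self, map_frame_numbers):
        # Processing module 303: apply the same speed change to map frames,
        # so their numbers line up position-by-position with the video frames.
        return self._apply_speed(map_frame_numbers)

    def generate(self, video_seq, map_seq):
        # Generating module 304: synchronously pair frames by position.
        return list(zip(video_seq, map_seq))
```

Running the fast path over twelve frames yields the sequence 0, 2, 4, 6, 8, 10 for both video and map frames, after which `generate` pairs them for rendering.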
In some embodiments, the processing module 303 includes:
and the first processing unit is configured to, when the target special effect type is a quick special effect, perform frame-dropping processing on the plurality of map frames corresponding to the quick special effect to generate a first target map frame sequence, where the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In some embodiments, the first processing unit is to:
when the target special effect type is a quick special effect, discarding, for the plurality of map frames corresponding to the quick special effect, a set number of map frames after every other map frame to obtain the remaining map frames;
based on the remaining map frames, the first target map frame sequence is generated.
In some embodiments, the processing module 303 includes:
and the second processing unit is configured to, when the target special effect type is a slow special effect, perform frame-adding processing on the plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, where the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
In some embodiments, the second processing unit is to:
when the target special effect type is a slow special effect, copying each of the plurality of map frames corresponding to the slow special effect to generate copies of the plurality of map frames corresponding to the slow special effect;
numbering based on the copies of the plurality of map frames corresponding to the slow special effect;
and generating the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and the copies of the plurality of map frames corresponding to the slow special effect after numbering.
In some embodiments, the generation module 304 is configured to:
and synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
According to the embodiment of the invention, the corresponding target video frame sequence and the target map frame sequence are generated based on the target special effect type, so that the number of each video frame in the target video frame sequence corresponds to the number of each map frame in the target map frame sequence, each video frame and each corresponding map frame can be synchronously played, each map frame corresponding to the target special effect type achieves the effect of speed change, and the overall speed change effect of target video data is improved.
It should be noted that: in the processing apparatus for video data provided in the foregoing embodiment, only the division of the functional modules is illustrated in the processing of video data, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video data processing apparatus and the video data processing method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments and are not described herein again.
Fig. 4 is a block diagram of a terminal 400 according to an embodiment of the present invention, where the terminal 400 may be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
Generally, the terminal 400 includes: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement a method of processing video data as provided by a method embodiment of the present invention.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 404 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 404 may further include NFC (Near Field Communication) related circuits, which is not limited in the present invention.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the terminal 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the terminal 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When power source 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the terminal 400 by the user. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 413 may be disposed on a side bezel of the terminal 400 and/or beneath the touch display screen 405. When the pressure sensor 413 is disposed on the side bezel, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left- or right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed beneath the touch display screen 405, the processor 401 controls operable controls on the UI according to the user's pressure operations on the touch display screen 405. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint; either the processor 401 identifies the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 itself identifies the user from the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 401 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 414 may be disposed on the front, back, or side of the terminal 400. When a physical key or vendor logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness is decreased. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
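As an illustration of the brightness-adjustment logic described above, the following sketch maps an ambient light reading to a display brightness. The thresholds, units, and function name are illustrative assumptions, not taken from this disclosure.

```python
def adjust_brightness(ambient_lux, low_thresh=50, high_thresh=500,
                      min_brightness=0.2, max_brightness=1.0):
    """Map an ambient light intensity (lux, hypothetical units) to a
    display brightness in [min_brightness, max_brightness]."""
    if ambient_lux >= high_thresh:
        return max_brightness      # bright surroundings: raise brightness
    if ambient_lux <= low_thresh:
        return min_brightness      # dark surroundings: lower brightness
    # Linear interpolation between the two thresholds.
    span = (ambient_lux - low_thresh) / (high_thresh - low_thresh)
    return min_brightness + span * (max_brightness - min_brightness)
```

In a real terminal this mapping would run in a sensor callback; the linear ramp is only one possible policy.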
A proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front surface of the terminal 400. In one embodiment, when the proximity sensor 416 detects that this distance gradually decreases, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 416 detects that the distance gradually increases, the processor 401 controls the touch display screen 405 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 4 does not limit the terminal 400, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
Fig. 5 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 500 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 501 and one or more memories 502, where the memory 502 stores at least one instruction that is loaded and executed by the processor 501 to implement the video data processing method provided by the above method embodiments. The server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, as well as other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the method of processing video data in the above-described embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A method for processing video data, the method comprising:
receiving a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for performing variable-speed processing on video data;
acquiring a target video frame sequence based on the target special effect type;
performing, based on the target special effect type, processing corresponding to the target special effect type on a plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, wherein the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence;
generating target video data based on the target video frame sequence and the target map frame sequence,
wherein, when the target special effect type is a quick special effect, both the target video frame sequence and the map frame sequence are obtained through frame-dropping processing; and when the target special effect type is a slow special effect, the target video frame sequence and the map frame sequence are obtained through copy-and-add-frame processing.
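The two branches of claim 1 can be sketched as follows. This is a minimal illustration assuming in-memory frame lists and a fixed factor of two for both dropping and copying; the function and parameter names are hypothetical, not part of the claimed method.

```python
def apply_speed_effect(video_frames, map_frames, effect):
    """Return (target_video_frames, target_map_frames) for a speed effect.

    Both sequences are processed identically so that map frame numbers
    stay in one-to-one correspondence with video frame numbers.
    """
    if effect == "quick":
        # Frame-dropping: keep every other frame in both sequences.
        return video_frames[::2], map_frames[::2]
    if effect == "slow":
        # Copy-and-add-frame: duplicate each frame in both sequences.
        double = lambda seq: [f for frame in seq for f in (frame, frame)]
        return double(video_frames), double(map_frames)
    raise ValueError(f"unknown special effect type: {effect!r}")
```

A 2x factor halves or doubles the playback duration; other factors follow the same pattern with a different slice step or copy count.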
2. The method of claim 1, wherein performing, based on the target special effect type, processing corresponding to the target special effect type on a plurality of map frames corresponding to the target special effect type to generate the target map frame sequence comprises:
when the target special effect type is the quick special effect, performing frame-dropping processing on a plurality of map frames corresponding to the quick special effect to generate a first target map frame sequence, wherein the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
3. The method of claim 2, wherein, when the target special effect type is the quick special effect, performing frame-dropping processing on the plurality of map frames corresponding to the quick special effect to generate the first target map frame sequence comprises:
when the target special effect type is the quick special effect, for every other map frame of the plurality of map frames corresponding to the quick special effect, discarding a set number of map frames after that map frame to obtain the remaining map frames;
generating the first target map frame sequence based on the remaining map frames.
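The retention pattern in claim 3 — keep one map frame, discard a set number after it, repeat — reduces to a strided selection. The function and parameter names below are hypothetical.

```python
def drop_frames(map_frames, set_number=1):
    """Keep one map frame, then discard `set_number` frames, repeatedly.

    The remaining frames form the first target map frame sequence.
    """
    # Each retained frame is followed by `set_number` discarded frames,
    # so the stride between retained frames is set_number + 1.
    return map_frames[::set_number + 1]
```

With `set_number=1` this halves the frame count (2x speed-up); larger values give a stronger speed-up.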
4. The method of claim 1, wherein performing, based on the target special effect type, processing corresponding to the target special effect type on a plurality of map frames corresponding to the target special effect type to generate the target map frame sequence comprises:
when the target special effect type is the slow special effect, performing frame-adding processing on a plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
5. The method of claim 4, wherein, when the target special effect type is the slow special effect, performing frame-adding processing on the plurality of map frames corresponding to the slow special effect to generate the second target map frame sequence comprises:
when the target special effect type is the slow special effect, copying each map frame of the plurality of map frames corresponding to the slow special effect to generate copies of the plurality of map frames corresponding to the slow special effect;
numbering the copies of the plurality of map frames corresponding to the slow special effect;
generating the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and the numbered copies of the plurality of map frames corresponding to the slow special effect.
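The copy-and-renumber steps of claim 5 can be sketched as follows: each map frame is duplicated in place and the combined sequence is renumbered so its frame numbers again match the video frame numbers. Names are hypothetical; the frame representation is left abstract.

```python
def add_frames(map_frames):
    """Duplicate each map frame, then renumber the combined sequence.

    Returns (number, frame) pairs forming the second target map
    frame sequence, with every original frame immediately followed
    by its copy.
    """
    combined = []
    for frame in map_frames:
        combined.append(frame)   # the original map frame
        combined.append(frame)   # its copy, inserted right after
    # Renumber so each map frame number matches a video frame number.
    return list(enumerate(combined))
```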
6. The method of claim 1, wherein generating the target video data based on the target video frame sequence and the target map frame sequence comprises:
synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
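The synchronous rendering of claim 6 amounts to compositing each video frame with the map frame that carries the same number. The sketch below assumes a caller-supplied `composite` overlay function; all names are hypothetical.

```python
def render_target_video(video_frames, map_frames, composite):
    """Synchronously render paired video and map frames.

    `composite` overlays one map frame onto one video frame; frames
    are paired by position, mirroring the matching numbering of the
    two target sequences.
    """
    if len(video_frames) != len(map_frames):
        raise ValueError("frame sequences must have matching lengths")
    # Render in lockstep: video frame i with map frame i.
    return [composite(v, m) for v, m in zip(video_frames, map_frames)]
```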
7. An apparatus for processing video data, the apparatus comprising:
a receiving module, configured to receive a target special effect processing instruction, wherein the target special effect processing instruction carries a target special effect type, and the target special effect type is used for performing variable-speed processing on video data;
the acquisition module is used for acquiring a target video frame sequence based on the target special effect type;
a processing module, configured to perform, based on the target special effect type, processing corresponding to the target special effect type on a plurality of map frames corresponding to the target special effect type to generate a target map frame sequence, wherein the number of each map frame in the target map frame sequence corresponds to the number of each video frame in the target video frame sequence;
a generating module, configured to generate target video data based on the target video frame sequence and the target map frame sequence, wherein, when the target special effect type is a quick special effect, both the target video frame sequence and the map frame sequence are obtained through frame-dropping processing; and when the target special effect type is a slow special effect, the target video frame sequence and the map frame sequence are obtained through copy-and-add-frame processing.
8. The apparatus of claim 7, wherein the processing module comprises:
a first processing unit, configured to, when the target special effect type is the quick special effect, perform frame-dropping processing on a plurality of map frames corresponding to the quick special effect to generate a first target map frame sequence, wherein the number of each map frame in the first target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
9. The apparatus of claim 8, wherein the first processing unit is configured to:
when the target special effect type is the quick special effect, for every other map frame of the plurality of map frames corresponding to the quick special effect, discarding a set number of map frames after that map frame to obtain the remaining map frames;
generating the first target map frame sequence based on the remaining map frames.
10. The apparatus of claim 7, wherein the processing module comprises:
a second processing unit, configured to, when the target special effect type is the slow special effect, perform frame-adding processing on a plurality of map frames corresponding to the slow special effect to generate a second target map frame sequence, wherein the number of each map frame in the second target map frame sequence corresponds to the number of each video frame in the target video frame sequence.
11. The apparatus of claim 10, wherein the second processing unit is configured to:
when the target special effect type is the slow special effect, copying each map frame of the plurality of map frames corresponding to the slow special effect to generate copies of the plurality of map frames corresponding to the slow special effect;
numbering the copies of the plurality of map frames corresponding to the slow special effect;
generating the second target map frame sequence based on the plurality of map frames corresponding to the slow special effect and the numbered copies of the plurality of map frames corresponding to the slow special effect.
12. The apparatus of claim 7, wherein the generating module is configured to:
and synchronously rendering the target video frame sequence and the target map frame sequence to generate the target video data.
13. A terminal, characterized in that the terminal comprises a processor and a memory, in which at least one instruction is stored, the instruction being loaded and executed by the processor to implement the operations performed by the method for processing video data according to any one of claims 1 to 6.
14. A server, comprising a processor and a memory, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the operations executed by the method for processing video data according to any one of claims 1 to 6.
15. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by a method of processing video data according to any one of claims 1 to 6.
CN201811404559.5A 2018-11-23 2018-11-23 Video data processing method and device, terminal and storage medium Active CN109451248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811404559.5A CN109451248B (en) 2018-11-23 2018-11-23 Video data processing method and device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN109451248A CN109451248A (en) 2019-03-08
CN109451248B true CN109451248B (en) 2020-12-22

Family

ID=65553546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811404559.5A Active CN109451248B (en) 2018-11-23 2018-11-23 Video data processing method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109451248B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111597984B (en) * 2020-05-15 2023-09-26 北京百度网讯科技有限公司 Label paper testing method, device, electronic equipment and computer readable storage medium
CN114125528B (en) * 2020-08-28 2022-11-11 北京达佳互联信息技术有限公司 Video special effect processing method and device, electronic equipment and storage medium
CN114827695B (en) 2021-01-21 2023-05-30 北京字节跳动网络技术有限公司 Video recording method, device, electronic device and storage medium
CN118138836A (en) * 2022-07-21 2024-06-04 荣耀终端有限公司 Image frame processing method and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009048723A (en) * 2007-08-21 2009-03-05 Funai Electric Co Ltd Video reproduction device
JP2010141632A (en) * 2008-12-12 2010-06-24 Hitachi Ltd Video reproduction device, video system, and method of converting reproduction speed of video
JP5454604B2 (en) * 2012-03-21 2014-03-26 カシオ計算機株式会社 Video playback method, video playback device, and program
EP2725489A1 (en) * 2012-10-24 2014-04-30 PIXarithmic GmbH Method of operating a video processing apparatus
EP2753069A1 (en) * 2013-01-08 2014-07-09 PIXarithmic GmbH Method of dynamic real-time video processing and apparatus to execute the method
CN103702040B (en) * 2013-12-31 2018-03-23 广州华多网络科技有限公司 Real-time video figure ornament superposition processing method and system
CN106385591B (en) * 2016-10-17 2020-05-15 腾讯科技(上海)有限公司 Video processing method and video processing device


Similar Documents

Publication Publication Date Title
CN109246466B (en) Video playing method and device and electronic equipment
CN109348247B (en) Method and device for determining audio and video playing time stamp and storage medium
CN108401124B (en) Video recording method and device
CN108965922B (en) Video cover generation method and device and storage medium
CN108449641B (en) Method, device, computer equipment and storage medium for playing media stream
CN109451343A (en) Video sharing method, apparatus, terminal and storage medium
CN109451248B (en) Video data processing method and device, terminal and storage medium
CN111464830B (en) Method, device, system, equipment and storage medium for image display
CN108965757B (en) Video recording method, device, terminal and storage medium
CN110324689B (en) Audio and video synchronous playing method, device, terminal and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN111586431B (en) Method, device and equipment for live broadcast processing and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN109635133B (en) Visual audio playing method and device, electronic equipment and storage medium
CN108922506A (en) Song audio generation method, device and computer readable storage medium
CN110868636B (en) Video material intercepting method and device, storage medium and terminal
CN112929654B (en) Method, device and equipment for detecting sound and picture synchronization and storage medium
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN110769313A (en) Video processing method and device and storage medium
CN111586444B (en) Video processing method and device, electronic equipment and storage medium
CN111092991B (en) Lyric display method and device and computer storage medium
CN110868642B (en) Video playing method, device and storage medium
CN109819314B (en) Audio and video processing method and device, terminal and storage medium
CN108401194B (en) Time stamp determination method, apparatus and computer-readable storage medium
CN112770177B (en) Multimedia file generation method, multimedia file release method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant