CN114125528A - Video special effect processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114125528A
Authority
CN
China
Prior art keywords
special effect
video
shooting
video segment
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010889092.9A
Other languages
Chinese (zh)
Other versions
CN114125528B (en)
Inventor
王博 (Wang Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010889092.9A priority Critical patent/CN114125528B/en
Priority to PCT/CN2021/104095 priority patent/WO2022042029A1/en
Publication of CN114125528A publication Critical patent/CN114125528A/en
Application granted granted Critical
Publication of CN114125528B publication Critical patent/CN114125528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64746Control signals issued by the network directed to the server or the client
    • H04N21/64761Control signals issued by the network directed to the server or the client directed to the server
    • H04N21/64769Control signals issued by the network directed to the server or the client directed to the server for rate control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Studio Devices (AREA)
  • Studio Circuits (AREA)

Abstract

The disclosure relates to a video special effect processing method and apparatus, an electronic device, and a storage medium. The method includes: when it is detected that shooting of a video with an added special effect has started, acquiring a special effect rate corresponding to each video segment in the shot video; acquiring a shooting time node for each video segment, the shooting time node including a shooting pause time and a shooting increment time; determining the special effect rendering time of each video segment according to the special effect rate and the shooting increment time of that segment; rendering the special effect into the corresponding video segment according to its special effect rendering time to obtain the corresponding special effect video segment; and sequentially combining all the obtained special effect video segments according to their shooting pause times to obtain the shot special effect video. By setting a different special effect rate for each segment of the special effect, the method makes it possible to change the special effect rate at will while the video is being shot, improves special-effect rendering efficiency, and increases the flexibility of adjusting the special effect rate during shooting.

Description

Video special effect processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to video processing technologies, and in particular, to a video special effect processing method and apparatus, an electronic device, and a storage medium.
Background
At present, with the popularization of intelligent terminals, recording and sharing videos has become an indispensable part of people's lives. In short video applications in particular, such as Kuaishou or Douyin (TikTok), special effects applied to a short video, such as a shake ("dithering") or out-of-body ("soul") effect, are usually looped at a fixed rate. That is, in the related art, after the video and the special effect are composited, the special effect is processed at a constant speed over the entire video, or the speed of the special effect is changed for the whole video through complex post-processing. In other words, the related art cannot apply speed changes to the special effect segment by segment, which not only degrades the special-effect rendering of the short video but also reduces the flexibility of video special-effect rendering.
How to flexibly fuse a special effect into an original video as it is shot therefore remains a technical problem to be solved.
Disclosure of Invention
The disclosure provides a video special effect processing method and apparatus, an electronic device, and a storage medium, to at least solve the technical problem in the related art that special effects cannot be flexibly fused into an original video as it is shot, which makes special-effect rendering poor and inflexible. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video special effect processing method, including:
when it is detected that shooting of a video with an added special effect has started, acquiring a special effect rate corresponding to each video segment in the video, wherein each video segment is delimited by pausing shooting;
during shooting of the video, acquiring a shooting time node for each video segment, wherein the shooting time node includes: a shooting pause time and a shooting increment time;
determining the special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
rendering the special effect into a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
and sequentially combining all the obtained special effect video segments according to their shooting pause times to obtain the shot special effect video.
Optionally, the acquiring a special effect rate corresponding to each video segment in the shot video includes:
acquiring the special effect rate corresponding to each video segment according to a user instruction or a preset strategy, wherein the preset strategy includes: presetting a corresponding special effect rate for the shooting time period of each video segment in the shot video.
Optionally, the determining a special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment includes:
multiplying the special effect rate of each video segment by the shooting increment time of the corresponding video segment to obtain the special effect rendering increment time of each video segment;
adding the special effect rendering increment time of each video segment to the special effect pause time of the previous video segment to obtain the special effect pause time of the corresponding video segment;
and adding the special effect pause time of each video segment to the special effect rendering increment time of the corresponding video segment to obtain the special effect rendering time of each video segment.
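The three optional steps above can be sketched as simple arithmetic. The following is an illustrative reading only — the function names and the convention that the initial special effect pause time is 0 are assumptions of this sketch, not part of the claims:

```python
def rendering_increment_time(effect_rate, shoot_increment_time):
    # step 1: special effect rate x shooting increment time
    return effect_rate * shoot_increment_time

def effect_pause_time(prev_segment_pause_time, increment_time):
    # step 2: previous segment's special effect pause time
    #         + this segment's special effect rendering increment time
    return prev_segment_pause_time + increment_time

# step 3: the segment's special effect rendering time then runs up to its
# special effect pause time; e.g. a 2 s segment at rate 0.5, starting from
# an initial pause time of 0:
incr = rendering_increment_time(0.5, 2.0)   # 1.0
print(effect_pause_time(0.0, incr))         # prints 1.0
```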
Optionally, the method further includes:
and when shooting of the special effect video is finished, establishing a correspondence between the shooting pause time node and shooting increment time of each video segment of the special effect video and the corresponding special effect rate, special effect rendering time, and speed-changed special effect rendering increment time.
According to a second aspect of the embodiments of the present disclosure, there is provided a video special effects processing apparatus including:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to acquire a special effect rate corresponding to the shooting of each video segment in a video when the video for starting the shooting of the special effect is detected, and each video segment is controlled by shooting pause;
a second obtaining module configured to perform obtaining a shooting time node for shooting each video segment in the process of shooting the video, the shooting time node including: a shooting pause time and a shooting increment time;
a determining module configured to perform determining a special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
the rendering module is configured to render the special effect into a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
and the combination module is configured to perform sequential combination of all the obtained special effect video segments according to the shooting pause time to obtain the shot special effect video.
Optionally, the first acquisition module includes:
an instruction acquisition module configured to acquire the special effect rate corresponding to each video segment according to a user instruction; and/or
a strategy acquisition module configured to acquire the special effect rate corresponding to each video segment according to a preset strategy, wherein the preset strategy includes: presetting a corresponding special effect rate for the shooting time period of each video segment in the shot video.
Optionally, the determining module includes:
the first calculation module is configured to multiply the special effect rate of each video segment by the shooting increment time of the corresponding video segment to obtain a special effect rendering increment time of each video segment;
the second calculation module is configured to add the special effect rendering increment time of each video segment to the special effect pause time of the previous video segment to obtain the special effect pause time of the corresponding video segment;
and the third calculation module is configured to add the special effect pause time of each video segment to the special effect rendering increment time of the corresponding video segment to obtain the special effect rendering time of each video segment.
Optionally, the apparatus further comprises:
and an establishing module configured to, after the combining module obtains the special effect video, establish a correspondence between the shooting pause time node and shooting increment time of each video segment and the corresponding special effect rate, special effect rendering time, and speed-changed special effect rendering increment time.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform any of the video effects processing methods described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, cause the electronic device to execute any one of the video special effects processing methods described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, wherein instructions of the computer program product, when executed by a processor of an electronic device, cause the electronic device to perform any one of the video special effects processing methods described above.
The technical scheme provided by the embodiment of the disclosure at least has the following beneficial effects:
in the embodiment of the disclosure, when it is detected that shooting of a video with an added special effect has started, a special effect rate corresponding to each video segment in the shot video is acquired; during shooting of the video, a shooting time node is acquired for each video segment; the special effect rendering time of each segment is then determined according to its special effect rate and shooting time node; finally, the special effect is rendered into the corresponding video segment according to its special effect rendering time to obtain the corresponding special effect video segment, and all the obtained special effect video segments are sequentially combined according to their shooting pause times to obtain the shot special effect video. That is, in the embodiment of the disclosure the special effect is segmented, and a different special effect rate is set for each segment of the special effect, so that the special effect rate can be changed at will while the video is shot, and the flexibility of shooting special effect videos is increased. By setting different special effect rates and managing the special effect rendering time independently, the special effect rendering rate can be adjusted in real time as needed while the special effect video is shot, which improves special-effect rendering efficiency and the user's satisfaction with shooting special effect videos.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram illustrating a video effects processing method according to an example embodiment.
Fig. 2 is a flowchart illustrating an application example of a video effect processing method according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating a video special effects processing apparatus according to an example embodiment.
Fig. 4 is a block diagram illustrating a structure of an electronic device according to an example embodiment.
Fig. 5 is a block diagram illustrating a structure of a video special effects processing apparatus according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a video special effect processing method according to an exemplary embodiment. The method is used in a terminal and, as shown in Fig. 1, includes the following steps.
In step 101, when it is detected that shooting of a video with an added special effect has started, a special effect rate corresponding to each video segment in the video is acquired, wherein each video segment is delimited by pausing shooting;
In step 102, during shooting of the video, a shooting time node is acquired for each video segment, the shooting time node including: a shooting pause time and a shooting increment time;
in step 103, determining a special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
in step 104, rendering the special effect to a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
in step 105, all the obtained special effect video segments are sequentially combined according to the shooting pause time to obtain a shot special effect video.
The video special effect processing method of the present disclosure can be applied to a terminal, a server, and the like, which is not limited herein; the terminal may be an electronic device such as a smartphone, a laptop computer, or a tablet computer.
The following describes in detail specific implementation steps of a video special effect processing method provided in an embodiment of the present disclosure with reference to fig. 1.
First, step 101 is executed: when it is detected that shooting of a video with an added special effect has started, a special effect rate corresponding to each video segment in the video is acquired, wherein each video segment is delimited by pausing shooting.
In this step, when a user wants to shoot a video with an added special effect, a special effect is selected first. The selected special effect may be a virtual video effect or a material for decorating the video, such as a picture animation, a transparent flash animation, or an AR special effect; this embodiment is not limited in this respect. After the special effect is selected, shooting of the video with the added special effect begins; that is, the client detects that shooting has started, or the server detects it through a video-image special effect Software Development Kit (SDK). A special effect rate corresponding to each video segment in the video is then acquired; the rate acquired when the start of shooting is detected is the initial special effect rate. Each video segment is delimited by pausing shooting, normally controlled by the shooting user.
In this embodiment, the special effect rate corresponding to each video segment in the shot video may be acquired in either of the following two ways:
one way is that, when a selection instruction input by a user is received for each video segment, the special effect rate of the special effect is selected according to the selection instruction, for example, according to the received user selection instruction, the special effect rate of shooting the special effect of the first video segment is selected to be 0.5, the special effect rate selected by the second video segment is selected to be 2, the special effect rate selected by the third video segment is selected to be 1, and the like. It should be noted that, for each video segment, the selected special effect rates may be the same or different, if the selected special effect rates are the same, it is indicated that the video and the special effect are recorded at a constant speed, and if the selected special effect rates are different, it is indicated that the video is normally shot, and the special effect rate is changed, that is, the special effect video is recorded according to the changed special effect rate.
In the other way, the special effect rate corresponding to each video segment is acquired according to a preset strategy, where the preset strategy includes: presetting a corresponding special effect rate for the shooting time period of each video segment in the shot video. For example, the shooting time of the first video segment is set to 2 seconds with a special effect rate (also called the special effect rendering rate, likewise below) of 0.5; the second segment is shot for 2 seconds with a special effect rate of 2; the third segment is shot for 4 seconds with a special effect rate of 3; the fourth segment is shot for 7 seconds with a special effect rate of 5; and so on.
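A preset strategy of this kind can be represented as a simple per-segment table. The structure below is a hypothetical illustration using the example durations and rates just given; neither the field names nor the fallback behavior is defined by this disclosure:

```python
# Hypothetical preset strategy: shooting duration (seconds) and special
# effect rate for each video segment, matching the example above.
PRESET_STRATEGY = [
    {"segment": 1, "shoot_seconds": 2, "effect_rate": 0.5},
    {"segment": 2, "shoot_seconds": 2, "effect_rate": 2},
    {"segment": 3, "shoot_seconds": 4, "effect_rate": 3},
    {"segment": 4, "shoot_seconds": 7, "effect_rate": 5},
]

def rate_for_segment(index):
    """Look up the preset rate for a segment; fall back to normal speed."""
    for entry in PRESET_STRATEGY:
        if entry["segment"] == index:
            return entry["effect_rate"]
    return 1.0  # assumed fallback: play the effect at normal speed
```

The fallback rate of 1.0 is an assumption of this sketch; a real client could equally prompt the user for a rate when no preset entry applies.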
Next, step 102 is executed, in the process of shooting the video, a shooting time node for shooting each video segment is obtained, where the shooting time node includes: a shooting pause time and a shooting increment time.
In this step, shooting may be paused as required during shooting of the video. When shooting is paused, the shooting pause time at which shooting was suspended and the shooting increment time of the video shot so far need to be recorded; the shooting pause time and the shooting increment time are collectively referred to as a shooting time node.
That is, during shooting the user may pause shooting, resume shooting, and so on. Whenever a pause is detected, the shooting time node of the current video segment must be recorded. The shooting time node includes the shooting pause time of the segment, from which the shooting increment time of the current segment is calculated; for example, if shooting is paused after the video has been shot for 2 seconds, the shooting pause time is 2 seconds and the shooting increment time is also 2 seconds, and so on.
Subsequent pauses during shooting are handled in the same way and are not described again here.
And step 103 is executed, and the special effect rendering time of each video segment is determined according to the special effect rate and the shooting increment time of each video segment.
In this step, one determination method is:
multiplying the special effect rate of each video segment by the shooting increment time of the corresponding segment to obtain the special effect rendering increment time of each segment; adding the special effect rendering increment time of each segment to the special effect pause time of the previous segment to obtain the special effect pause time of the current segment; and then adding the special effect pause time of the previous segment to the product of the shooting increment time and the special effect rate of the current segment (namely, its special effect rendering increment time) to obtain the special effect rendering time of each segment.
For example, when it is detected that the video starts to be shot, the video and the special effect are captured simultaneously, the shooting time node at the start is 0, and the special effect rate (i.e., the special effect rendering rate) is set to 0.5. Suppose shooting is paused after 2 seconds: the shooting pause time of this first video segment is recorded as 2 seconds, its shooting increment time is 2 seconds, and correspondingly its special effect rendering increment time is 2 × 0.5 = 1, i.e., the shooting increment time multiplied by the special effect rate. The special effect rate of the second video segment is then set, say to 2, and shooting continues; if shooting is paused again 2 seconds into the second segment, the total shooting time of the video is 4 seconds and the shooting increment time of the second segment is 2 seconds. Correspondingly, the special effect rendering time of the second segment is calculated from the special effect rendering increment time (here equal to the special effect pause time) of the first segment, the special effect rate of the second segment, and the shooting increment time of the second segment: 1 + 2 × 2 = 5.
By analogy, the shooting pause time and the shooting increment time for shooting each video segment of the video, and the corresponding special effect rendering increment time, the special effect pause time and the special effect rendering time can be obtained.
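The accumulation just described can be tabulated with a short loop. This is an illustrative sketch (hypothetical names, one reading of the computation) using the example numbers from this description — rates 0.5 and 2, each segment shot for 2 seconds:

```python
def effect_schedule(rates, shoot_increments):
    """Per segment, return (rendering increment time, effect pause time).
    The effect pause time doubles as the segment's rendering end time."""
    pause, rows = 0.0, []
    for rate, shot in zip(rates, shoot_increments):
        incr = rate * shot      # shooting increment time x special effect rate
        pause += incr           # cumulative special effect pause time
        rows.append((incr, pause))
    return rows

# First segment: 2 x 0.5 = 1; second segment: rendering time 1 + 2 x 2 = 5.
print(effect_schedule([0.5, 2.0], [2.0, 2.0]))
# → [(1.0, 1.0), (4.0, 5.0)]
```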
Then, step 104 is executed: the special effect is rendered into the corresponding video segment according to the special effect rendering time of each video segment to obtain the corresponding special effect video segment.
Specifically, in this step, each frame of the special effect image is inserted into, fused with or superimposed on each frame of the video image of the corresponding video segment according to the special effect rendering time of that video segment, so as to obtain the corresponding special effect video segment. The insertion, fusion or superposition is performed with existing video frame insertion, fusion or superposition techniques; for example, through a video image special effect software development kit (SDK), the special effect is added to the corresponding video segment according to the special effect rendering time of each video segment to obtain the corresponding special effect video segment. The specific processing procedures are well known to those skilled in the art and are not described here.
Finally, step 105 is executed: all the obtained special effect video segments are combined in sequence according to the shooting pause times to obtain the shot special effect video.
In this step, the special effect video segments are spliced in the chronological order of their shooting pause times to obtain the complete shot video. The specific video splicing is well known to those skilled in the art and is not described here again.
In the embodiment of the disclosure, when it is detected that shooting of a video with an added special effect starts, the special effect rate corresponding to each video segment of the shot video is obtained; in the process of shooting the video, the shooting time node of each video segment is obtained; the special effect rendering time of each video segment is then determined according to its special effect rate and shooting time node; finally, the special effect is rendered into the corresponding video segment according to the special effect rendering time of each video segment to obtain the corresponding special effect video segment, and all the obtained special effect video segments are combined in sequence according to the shooting pause times to obtain the shot special effect video. That is to say, in the embodiment of the present disclosure, the special effect is segmented separately and a different special effect rate is set for each segment, so that the special effect rate can be changed at will while the video is being shot, the special effect can be adjusted freely during shooting, and the flexibility of shooting special effect videos is increased. By setting different special effect rates and managing the special effect rendering time independently, the special effect rendering rate can be adjusted in real time as needed while the special effect video is shot, the special effect rendering efficiency is improved, and user satisfaction with shooting special effect videos is improved.
Optionally, in another embodiment, on the basis of the above embodiment, the method may further include: when shooting of the special effect video is finished, establishing a correspondence between the shooting pause time node and the shooting increment time of each video segment of the special effect video and the corresponding special effect rate, special effect rendering time and variable-speed special effect rendering increment time.
That is, when shooting of the special effect video is completed, a one-to-one correspondence is established between the shooting pause time node and shooting increment time of each video segment and the corresponding special effect rate, special effect rendering time and variable-speed special effect rendering increment time. Specifically, the correspondence may be stored in a special effect time queue, so that the time queue can be applied to the overall special effect rate and conveniently reused when the special effect video is shot again.
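One possible shape for such a queue entry is sketched below; the class and field names are hypothetical, chosen only to mirror the quantities named in the correspondence above.

```python
from dataclasses import dataclass

@dataclass
class EffectTimeEntry:
    shoot_pause: float       # shooting pause time node (s, cumulative)
    shoot_increment: float   # shooting increment time of the segment (s)
    rate: float              # special effect rate of the segment
    render_increment: float  # rate * shoot_increment
    render_time: float       # cumulative special effect rendering time (s)

def build_effect_time_queue(segments):
    """segments: list of (rate, shoot_increment) pairs in shooting order."""
    queue, shoot_t, render_t = [], 0.0, 0.0
    for rate, inc in segments:
        shoot_t += inc                # shooting pause time node
        render_inc = rate * inc       # variable-speed rendering increment
        render_t += render_inc        # cumulative rendering time
        queue.append(EffectTimeEntry(shoot_t, inc, rate, render_inc, render_t))
    return queue

queue = build_effect_time_queue([(0.5, 2), (2, 2), (3, 4)])
# shooting pause nodes: 2, 4, 8 s; rendering times: 1, 5, 17 s
```

Storing the entries in shooting order keeps the one-to-one correspondence intact, so a later re-shoot can replay the same segmented speed changes directly from the queue.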
In this embodiment, because the special effect is added while the video is shot, the rate of the special effect can be changed in real time, and a vivid slow-motion or fast-forward post effect can be achieved with a simple operation. With the embodiment of the disclosure, special effect videos can be shot more flexibly, and the rate of the special effect can be changed at will during shooting as needed.
To facilitate understanding by those skilled in the art, please refer to fig. 2, which is a flowchart illustrating an application example of a video special effect processing method according to an embodiment of the present disclosure. In this example, the rendering of all special effects is managed centrally by the video image special effect software development kit sdk, and a rendering rate is set for each special effect, so that each special effect can be played at its own rate. To achieve the effect of pausing and adjusting the rate in real time before continuing to shoot, the embodiment of the disclosure records time through the current rendering time queue: each pause time equals the previous pause time plus the special effect rate multiplied by the shooting increment time. The time nodes are recorded and stored in a queue for the next rate change or pause. By managing the special effect rendering time independently, the problem of adjusting the special effect rendering rate in real time during shooting is solved. It should be noted that special effect rendering is updated according to time, so a time-based speed-change characteristic is added to the special effect sdk; if a speed-change effect is to be shot, the special effect implements this characteristic and can be added to the speed-change queue. The method includes the following steps:
Step 201: when it is detected that shooting of the video with the added special effect starts, the shooting time nodes at the start of shooting are all 0, and the set first-segment special effect rate is obtained, which is assumed to be 0.5 in this embodiment.
Step 202: assuming that shooting is paused after 2 seconds, the recorded pause time is 2 seconds and the shooting increment time is 2 seconds; correspondingly, the special effect rendering time is 2 × 0.5 = 1 second.
Step 203: the set second-segment special effect rate is obtained, for example 2.
Step 204: shooting of the second video segment continues and is paused after 2 seconds; the shooting time of the video is now 4 seconds and the shooting increment time is 2 seconds; correspondingly, the special effect rendering time is 2 × 0.5 + 2 × 2 = 5 seconds.
Step 205: the set third-segment special effect rate is obtained, for example 3.
Step 206: shooting of the third video segment continues and is paused after 4 seconds; the shooting time of the video is now 8 seconds and the shooting increment time is 4 seconds; correspondingly, the special effect rendering time is 2 × 0.5 + 2 × 2 + 4 × 3 = 17 seconds.
Step 207: when an instruction to end shooting of the video is detected, shooting ends, and the complete special effect video is generated by combining the segments in sequence according to the shooting pause times.
It should be noted that, continuing in the same way after step 206, a group of special effect time queues in one-to-one correspondence with the actual shooting increment times and special effect rates is finally obtained; applying this queue to the overall special effect rate completes the segmented speed-change effect of the special effect. That is to say, because the special effect is added while the video is shot, the rate of the special effect can be changed in real time, and a vivid slow-motion or fast-forward post effect can be achieved with a simple operation. With the present disclosure, special effect videos can be shot more flexibly, and the rate of the special effect can be changed at will during shooting as needed.
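Applying the special effect time queue described above amounts to mapping a shooting timestamp to a special effect rendering timestamp, with the effect advancing at each segment's own rate. The sketch below (hypothetical names, not from the patent) shows one way such a mapping could work:

```python
def shoot_to_render_time(segments, t):
    """Map shooting time t (s) to the special effect rendering time,
    given (rate, shoot_increment) segments in shooting order; within
    a segment the special effect advances at that segment's rate."""
    shoot_start, render_start = 0.0, 0.0
    for rate, inc in segments:
        if t <= shoot_start + inc:
            # t falls inside this segment
            return render_start + (t - shoot_start) * rate
        shoot_start += inc          # shooting pause node of this segment
        render_start += rate * inc  # special effect pause time so far
    return render_start  # past the last segment: clamp to total render time

segments = [(0.5, 2), (2, 2), (3, 4)]
print(shoot_to_render_time(segments, 2))  # 1.0 (end of first segment)
print(shoot_to_render_time(segments, 4))  # 5.0
print(shoot_to_render_time(segments, 8))  # 17.0
```

The piecewise-linear mapping is exactly the multi-segment speed-change effect: a rate below 1 stretches the effect (slow motion), a rate above 1 compresses it (fast forward).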
In the embodiment of the disclosure, the shooting time nodes of the original video and the special effect rendering times of the special effect rendering are recorded in separate time queues, the time of each segment is stored incrementally, and, combined with the special effect rate of each special effect segment, a complete multi-segment speed-change effect of the special effect is obtained.
It is noted that, for simplicity of explanation, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will appreciate that the present disclosure is not limited by the order of acts described, as some steps may, in accordance with the present disclosure, occur in other orders and/or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the present disclosure.
Fig. 3 is a block diagram illustrating a video effects processing apparatus according to an example embodiment. Referring to fig. 3, the apparatus includes: a first acquisition module 301, a second acquisition module 302, a determination module 303, a rendering module 304 and a combination module 305, wherein,
the first obtaining module 301 is configured to, when it is detected that shooting of a video with an added special effect starts, obtain a special effect rate corresponding to the shooting of each video segment in the video, wherein each video segment is controlled by a shooting pause;
the second obtaining module 302 is configured to perform obtaining a shooting time node for shooting each video segment during shooting of the video, where the shooting time node includes: a shooting pause time and a shooting increment time;
the determining module 303 is configured to determine a special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
the rendering module 304 is configured to perform rendering of the special effect into a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
the combining module 305 is configured to perform sequential combination of all the obtained special effect video segments according to the shooting pause time to obtain a shot special effect video.
Optionally, in another embodiment, on the basis of the above embodiment, the second obtaining module includes: an instruction acquisition module and/or a strategy acquisition module, wherein,
the instruction acquisition module is configured to execute the acquisition of the special effect rate corresponding to each video segment according to the instruction of the user; and/or
The strategy acquisition module is configured to execute the acquisition of the special effect rate corresponding to each video segment according to a preset strategy; wherein the preset strategy comprises: and presetting a corresponding special effect rate according to the shooting time period of each video segment in the shot video.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the determining module includes: a first calculation module, a second calculation module, and a third calculation module, wherein,
the first calculation module is configured to multiply the special effect rate of each video segment by the shooting increment time of the corresponding video segment to obtain a special effect rendering increment time of each video segment;
the second calculation module is configured to add the special effect rendering increment time of each video segment to the special effect pause time of the previous video segment to obtain the special effect pause time of the corresponding video segment;
the third calculation module is configured to add the special effect pause time of each video segment to the special effect rendering increment time of the corresponding video segment to obtain the special effect rendering time of each video segment.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the rendering module is configured to render the special effect into the corresponding video segment according to the special effect rendering time of each video segment through the video image special effect software development kit sdk, so as to obtain the corresponding special effect video segment.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus further includes: a module is established in which, among other things,
the establishing module is configured to, when the combining module obtains the special effect video, establish a correspondence between the shooting pause time node and shooting increment time of each video segment of the special effect video and the corresponding special effect rate, special effect rendering time and variable-speed special effect rendering increment time.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment related to the method, and reference may be made to part of the description of the embodiment of the method for the relevant points, and the detailed description will not be made here.
In an exemplary embodiment, the present disclosure also provides an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video effects processing method as described above.
In an exemplary embodiment, the present disclosure also provides a storage medium having instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the video effects processing method of any one of claims 1 to 4.
In an exemplary embodiment, the present disclosure also provides a storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an electronic device to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 4 is a block diagram illustrating an electronic device 400 according to an example embodiment. For example, the electronic device 400 may be a mobile terminal or a server, and in the embodiment of the present disclosure, the electronic device is taken as a mobile terminal for example. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, electronic device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an interface for input/output (I/O) 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls overall operation of the electronic device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on the electronic device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 406 provides power to the various components of the electronic device 400. Power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 400.
The multimedia component 408 comprises a screen providing an output interface between the electronic device 400 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the electronic device 400. For example, the sensor component 414 can detect an open/closed state of the device 400 and the relative positioning of components, such as the display and keypad of the electronic device 400; the sensor component 414 can also detect a change in the position of the electronic device 400 or a component of the electronic device 400, the presence or absence of user contact with the electronic device 400, the orientation or acceleration/deceleration of the electronic device 400, and a change in the temperature of the electronic device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the electronic device 400 and other devices. The electronic device 400 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the video effect processing methods illustrated above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the electronic device 400 to perform the video effects processing method illustrated above, is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, in which instructions, when executed by a processor of an electronic device, cause the electronic device to perform the video special effects processing method shown above.
Fig. 5 is a block diagram illustrating an apparatus 500 for video special effects processing according to an example embodiment. For example, the apparatus 500 may be provided as a server. Referring to fig. 5, the apparatus 500 includes a processing component 522 that further includes one or more processors, and memory resources, represented by memory 532, for storing instructions, such as applications, that are executable by the processing component 522. The application programs stored in memory 532 may include one or more modules that each correspond to a set of instructions. Further, the processing component 522 is configured to execute instructions to perform the above-described video special effects processing method.
The apparatus 500 may also include a power component 526 configured to perform power management of the apparatus 500, a wired or wireless network interface 550 configured to connect the apparatus 500 to a network, and an input/output (I/O) interface 558. The apparatus 500 may operate based on an operating system stored in the memory 532, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), FreeBSD(TM), or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for processing a video special effect, comprising:
when it is detected that shooting of a video with an added special effect starts, obtaining a special effect rate corresponding to the shooting of each video segment in the video, wherein each video segment is controlled by a shooting pause;
in the process of shooting the video, obtaining a shooting time node for shooting each video segment, wherein the shooting time node comprises: a shooting pause time and a shooting increment time;
determining the special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
rendering the special effect into a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
and sequentially combining all the obtained special effect video segments according to the shooting pause time to obtain the shot special effect video.
2. The video special effect processing method according to claim 1, wherein the obtaining a special effect rate corresponding to capturing each video segment in the video comprises:
acquiring a special effect rate corresponding to each video segment according to a user instruction or a preset strategy; wherein the preset strategy comprises: and presetting a corresponding special effect rate according to the shooting time period of each video segment in the shot video.
3. The video effect processing method according to claim 1, wherein determining the effect rendering time of each video segment based on the effect rate and the capture delta time of each video segment comprises:
multiplying the special effect rate of each video segment by the shooting increment time of the corresponding video segment to obtain the special effect rendering increment time of each video segment;
adding the special effect rendering increment time of each video segment to the special effect pause time of the previous video segment to obtain the special effect pause time of the corresponding video segment;
and adding the special effect pause time of each video segment to the special effect rendering increment time of the corresponding video segment to obtain the special effect rendering time of each video segment.
4. The video special effects processing method according to any one of claims 1 to 3, wherein the method further comprises:
and when the special effect video is finished, establishing a corresponding relation between a shooting pause time node and shooting increment time of each video segment for shooting the special effect video and corresponding special effect speed, special effect rendering time and variable-speed special effect rendering increment time.
5. A video special effects processing apparatus, comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is configured to acquire a special effect rate corresponding to the shooting of each video segment in a video when the video for starting the shooting of the special effect is detected, and each video segment is controlled by shooting pause;
a second obtaining module configured to perform obtaining a shooting time node for shooting each video segment in the process of shooting the video, the shooting time node including: a shooting pause time and a shooting increment time;
a determining module configured to perform determining a special effect rendering time of each video segment according to the special effect rate and the shooting increment time of each video segment;
the rendering module is configured to render the special effect into a corresponding video segment according to the special effect rendering time of each video segment to obtain a corresponding special effect video segment;
and the combination module is configured to perform sequential combination of all the obtained special effect video segments according to the shooting pause time to obtain the shot special effect video.
6. The video special effects processing apparatus according to claim 5, wherein the second obtaining module includes:
the instruction acquisition module is configured to execute the acquisition of the special effect rate corresponding to each video segment according to the instruction of the user; and/or
The strategy acquisition module is configured to execute the acquisition of the special effect rate corresponding to each video segment according to a preset strategy; wherein the preset strategy comprises: and presetting a corresponding special effect rate according to the shooting time period of each video segment in the shot video.
7. The video special effects processing apparatus according to claim 5, wherein the determining module comprises:
the first calculation module is configured to multiply the special effect rate of each video segment by the shooting increment time of the corresponding video segment to obtain a special effect rendering increment time of each video segment;
the second calculation module is configured to add the special effect rendering increment time of each video segment to the special effect pause time of the previous video segment to obtain the special effect pause time of the corresponding video segment;
and the third calculation module is configured to add the special effect pause time of each video segment to the special effect rendering increment time of the corresponding video segment to obtain the special effect rendering time of each video segment.
8. The video special effects processing apparatus according to any one of claims 5 to 7, wherein the apparatus further comprises:
and the establishing module is used for establishing a corresponding relation between a shooting pause time node and shooting increment time for shooting the special effect video and corresponding special effect speed, special effect rendering time and special effect rendering increment time after speed change when the special effect video is obtained by the combining module.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video effects processing method of any of claims 1 to 4.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video effects processing method of any of claims 1 to 4.
CN202010889092.9A 2020-08-28 2020-08-28 Video special effect processing method and device, electronic equipment and storage medium Active CN114125528B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010889092.9A CN114125528B (en) 2020-08-28 2020-08-28 Video special effect processing method and device, electronic equipment and storage medium
PCT/CN2021/104095 WO2022042029A1 (en) 2020-08-28 2021-07-01 Video special effect processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010889092.9A CN114125528B (en) 2020-08-28 2020-08-28 Video special effect processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114125528A true CN114125528A (en) 2022-03-01
CN114125528B CN114125528B (en) 2022-11-11

Family

ID=80352585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889092.9A Active CN114125528B (en) 2020-08-28 2020-08-28 Video special effect processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114125528B (en)
WO (1) WO2022042029A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114664331B (en) * 2022-03-29 2023-08-11 深圳万兴软件有限公司 Period-adjustable variable speed special effect rendering method, system and related components thereof

Citations (5)

Publication number Priority date Publication date Assignee Title
US20150007032A1 (en) * 2012-07-03 2015-01-01 Hunt K. Brand System and method for visual editing
CN108616696A (en) * 2018-07-19 2018-10-02 北京微播视界科技有限公司 A kind of video capture method, apparatus, terminal device and storage medium
CN108965705A (en) * 2018-07-19 2018-12-07 北京微播视界科技有限公司 A kind of method for processing video frequency, device, terminal device and storage medium
CN111541914A (en) * 2020-05-14 2020-08-14 腾讯科技(深圳)有限公司 Video processing method and storage medium
CN111556363A (en) * 2020-05-21 2020-08-18 腾讯科技(深圳)有限公司 Video special effect processing method, device and equipment and computer readable storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US6424789B1 (en) * 1999-08-17 2002-07-23 Koninklijke Philips Electronics N.V. System and method for performing fast forward and slow motion speed changes in a video stream based on video content
CN109451248B (en) * 2018-11-23 2020-12-22 广州酷狗计算机科技有限公司 Video data processing method and device, terminal and storage medium
CN111385639B (en) * 2018-12-28 2021-02-12 广州市百果园信息技术有限公司 Video special effect adding method, device, equipment and storage medium

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US20150007032A1 (en) * 2012-07-03 2015-01-01 Hunt K. Brand System and method for visual editing
CN108616696A (en) * 2018-07-19 2018-10-02 北京微播视界科技有限公司 A kind of video capture method, apparatus, terminal device and storage medium
CN108965705A (en) * 2018-07-19 2018-12-07 北京微播视界科技有限公司 A kind of method for processing video frequency, device, terminal device and storage medium
WO2020015333A1 (en) * 2018-07-19 2020-01-23 北京微播视界科技有限公司 Video shooting method and apparatus, terminal device, and storage medium
CN111541914A (en) * 2020-05-14 2020-08-14 腾讯科技(深圳)有限公司 Video processing method and storage medium
CN111556363A (en) * 2020-05-21 2020-08-18 腾讯科技(深圳)有限公司 Video special effect processing method, device and equipment and computer readable storage medium

Non-Patent Citations (2)

Title
J. WU ET AL.: "Streaming High-Quality Mobile Video with Multipath TCP in Heterogeneous Wireless Networks", IEEE Transactions on Mobile Computing *
YU XUEJIAO: "Research on the Content Operation of the 'Douyin' APP", China Master's Theses Full-text Database (Monthly), Information Science and Technology, No. 08, 2019 *

Also Published As

Publication number Publication date
CN114125528B (en) 2022-11-11
WO2022042029A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
CN110677734B (en) Video synthesis method and device, electronic equipment and storage medium
CN111314617B (en) Video data processing method and device, electronic equipment and storage medium
CN109451341B (en) Video playing method, video playing device, electronic equipment and storage medium
CN112217990B (en) Task scheduling method, task scheduling device and storage medium
CN112929561B (en) Multimedia data processing method and device, electronic equipment and storage medium
CN104850643B (en) Picture comparison method and device
CN112217987B (en) Shooting control method and device and storage medium
CN114125528B (en) Video special effect processing method and device, electronic equipment and storage medium
CN110502993B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111861942A (en) Noise reduction method and device, electronic equipment and storage medium
CN113157178B (en) Information processing method and device
CN111586296B (en) Image capturing method, image capturing apparatus, and storage medium
CN114827721A (en) Video special effect processing method and device, storage medium and electronic equipment
CN108769513B (en) Camera photographing method and device
CN107656769B (en) Application starting method and device, computer equipment and storage medium
CN113315903A (en) Image acquisition method and device, electronic equipment and storage medium
CN110784721A (en) Picture data compression method and device, electronic equipment and storage medium
CN112019680A (en) Screen brightness adjusting method and device
CN114125545B (en) Video information processing method, device, electronic equipment and storage medium
CN111462284B (en) Animation generation method, animation generation device and electronic equipment
CN112506393B (en) Icon display method and device and storage medium
CN110955328B (en) Control method and device of electronic equipment and storage medium
CN110895793B (en) Image processing method, device and storage medium
CN114282022A (en) Multimedia editing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant