CN114466232A - Video processing method, video processing device, electronic equipment and medium - Google Patents

Video processing method, video processing device, electronic equipment and medium

Info

Publication number
CN114466232A
Authority
CN
China
Prior art keywords
video
frame rate
input
camera
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210111752.XA
Other languages
Chinese (zh)
Inventor
程鹏
刘文强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210111752.XA priority Critical patent/CN114466232A/en
Publication of CN114466232A publication Critical patent/CN114466232A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/42653 Internal components of the client for processing graphics
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/440281 Reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Abstract

The application discloses a video processing method, a video processing apparatus, an electronic device, and a readable storage medium, belonging to the technical field of image processing. The method includes: acquiring a first video captured by a first camera at a first frame rate and a second video captured by a second camera at a second frame rate, where the first frame rate differs from the second frame rate; receiving a first input from a user; and, in response to the first input, outputting a third video, where the third video is obtained by fusing the video of a first object in the first video with the video of the objects other than the first object in the second video.

Description

Video processing method, video processing device, electronic equipment and medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a video processing method and device, an electronic device and a readable storage medium.
Background
At present, users' requirements on video editing functions are growing ever higher; users want to edit a video to achieve a desired playback effect.
During video editing, a user can apply a multiplier to the playback speed of a video to achieve a fast-play or slow-play effect. For example, while a video is playing, the user taps the screen in a preset input manner; options such as 1.5x and 2x fast-forward controls (e.g., "x1.5" and "x2") pop up, and when the user taps the "x1.5" control, the video plays at 1.5x speed from the current moment.
In the prior art, the video playback speed can only be adjusted along the dimension of time periods, so the adjustment mode is one-dimensional.
Disclosure of Invention
An object of the embodiments of the present application is to provide a video processing method that solves the prior-art problem that the video playback speed can only be adjusted along the dimension of time periods, leaving only a single adjustment mode.
In a first aspect, an embodiment of the present application provides a video processing method, the method including: acquiring a first video captured by a first camera at a first frame rate and a second video captured by a second camera at a second frame rate, where the first frame rate differs from the second frame rate; receiving a first input from a user; and, in response to the first input, outputting a third video, where the third video is obtained by fusing the video of a first object in the first video with the video of the objects other than the first object in the second video.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: a first acquisition module, configured to acquire a first video captured by a first camera at a first frame rate and a second video captured by a second camera at a second frame rate, where the first frame rate differs from the second frame rate; a first receiving module, configured to receive a first input from a user; and an output module, configured to output, in response to the first input, a third video obtained by fusing the video of a first object in the first video with the video of the objects other than the first object in the second video.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, two cameras capture images at different frame rates, so a first video captured by the first camera and a second video captured by the second camera are obtained separately. Further, a first object is determined in the first video based on a first input from the user, and the same input determines whether the first object should be given a speed-up or a slow-down special effect. Then, according to the frame-rate relationship between the two videos, the first object in one video (serving as the first video) is fused with the other objects in the other video (serving as the second video), and the third video is output. In the output third video, objects captured at different frame rates are fused together, producing relative acceleration and deceleration between objects. Thus, in the embodiments of the present application, shooting objects such as the shooting background and the shooting subject serve as a new dimension for adjusting video playback speed, so that the video presents per-object acceleration and deceleration effects, enriching the adjustment modes.
Drawings
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an operation of a video processing method according to an embodiment of the present application;
FIG. 3 is a second schematic diagram illustrating the operation of the video processing method according to the embodiment of the present application;
fig. 4 is one of explanatory diagrams of a video processing method according to an embodiment of the present application;
FIG. 5 is a second schematic diagram illustrating a video processing method according to an embodiment of the present application;
fig. 6 is a third schematic operational diagram of a video processing method according to an embodiment of the present application;
fig. 7 is a block diagram of a video processing apparatus of an embodiment of the present application;
fig. 8 is one of the hardware configuration diagrams of the electronic device according to the embodiment of the present application;
fig. 9 is a second schematic diagram of a hardware structure of the electronic device according to the embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described below clearly with reference to the drawings of the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments that can be derived from the embodiments of the present application by one of ordinary skill in the art are intended to be within the scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, objects distinguished by "first", "second", and the like are usually of one class, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The video processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present application, which is applied to an electronic device, and includes:
step 110: the method comprises the steps of acquiring a first video acquired by a first camera at a first frame rate, and acquiring a second video acquired by a second camera at a second frame rate, wherein the first frame rate is different from the second frame rate.
In the application, the electronic equipment comprises a first camera and a second camera, and the first camera and the second camera are used for acquiring images based on the same shooting scene.
For example, the first camera and the second camera are both front cameras of the electronic device; for another example, the first camera and the second camera are both rear cameras of the electronic device.
The first camera and the second camera can be arranged adjacently to achieve image acquisition based on the same shooting scene and at similar or even the same shooting angle.
In the step, in the shooting mode, the first camera and the second camera simultaneously enter a working state to respectively acquire images based on the same shooting scene, so that a first video acquired by the first camera and a second video acquired by the second camera are acquired.
The first camera and the second camera respectively acquire images at different frame rates, so that personalized adjustment of playing speed of the shot video is realized in the application.
Alternatively, the first frame rate and the second frame rate may be determined by a user through an input.
Alternatively, the first frame rate and the second frame rate may be determined according to a preset frame rate parameter of the electronic device.
In the application scenario of this embodiment, for example, a user clicks the "video recording" control to start video recording, and the two groups of cameras respectively acquire images at a preset frame rate.
Optionally, the images acquired by the two groups of cameras are respectively displayed in an image preview interface.
Further, after the photographing is completed, at least one of the first video and the second video is displayed on the target interface.
In an application scenario, for example, the user clicks the "video recording" control to stop recording; a prompt box pops up and the special-effect editing mode is entered; the first video or the second video is displayed on the interface shown in fig. 2; and the user can then click the "switch" control 201 to switch to the other of the two videos.
Step 120: a first input is received from a user.
The first input includes, but is not limited to, touch input performed by the user on the screen, such as clicking, sliding, or dragging; the first input may also be a contactless (hover) input, a gesture action, or a facial action; and the first input further includes, but is not limited to, input such as pressing a physical key on the device. Furthermore, the first input includes one or more inputs, where multiple inputs may be continuous or intermittent.
In this step, a first input is used for a user to select a first object in a first video.
In an application scenario, for example in fig. 3, with the first video displayed, the user clicks a first object, such as at least one of the teacher and the student boxed in the figure.
Step 130: in response to the first input, outputting a third video, where the third video is obtained by fusing the video of the first object in the first video with the video of the objects other than the first object in the second video.
In this step, the third video obtained based on the fusion is output.
In an application scenario, two people, A and B, are recorded dancing, using the two frame rates 30fps and 60fps, with a recording duration of 5s. If both videos are played back at a uniform 30fps, the motion speed of the person in the video recorded at 30fps is unchanged, while the video recorded at 60fps lengthens to 10s and the corresponding person's motion slows by a factor of 2. On this basis, person A is segmented out of the 30fps video and fused with the portion of the 60fps video other than person A; the resulting video has the special effect that person A's motion speed is twice person B's.
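The retiming arithmetic in this scene can be sketched as follows; this is a minimal illustration, and the function and names are ours, not the patent's:

    # Minimal sketch of the retiming arithmetic above; names are illustrative.

    def retimed_duration(recorded_fps: float, playback_fps: float,
                         duration_s: float) -> float:
        """Duration of a clip after playing it back at a different frame rate."""
        total_frames = recorded_fps * duration_s
        return total_frames / playback_fps

    # Person B recorded at 60 fps for 5 s, played back at 30 fps:
    print(retimed_duration(60, 30, 5))  # 10.0 s -> motion appears 2x slower
    # Person A recorded at 30 fps is unchanged at 30 fps playback:
    print(retimed_duration(30, 30, 5))  # 5.0 s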
It can be seen that, in the application scenario, based on two videos with different frame rates, an object in one video may be fused with another object in another video, so as to present special effects with different motion speeds of different objects in a finally obtained video.
Optionally, the special effects that can be presented in videos of the present application include, but are not limited to: for the same object, the motion speed relative to the video background becoming faster and then slower as playback progresses; and for different objects, the motion speeds relative to the video background respectively becoming faster and slower as playback progresses.
In this step, the first input is also used to set the speed-change information for the first object.
In an application scenario, for example in fig. 3, the user clicks the "main body-slow" control 301 in the settings popup.
Optionally, the video used in this embodiment contains a moving object (for example, a person in motion) as the first object whose motion speed is adjusted.
Thus, in the embodiments of the present application, two cameras capture images at different frame rates, so a first video captured by the first camera and a second video captured by the second camera are obtained separately. Further, a first object is determined in the first video based on a first input from the user, and the same input determines whether the first object should be given a speed-up or a slow-down special effect. Then, according to the frame-rate relationship between the two videos, the first object in one video (serving as the first video) is fused with the other objects in the other video (serving as the second video), and the finally processed third video is output. In the output third video, objects captured at different frame rates are fused together, producing relative acceleration and deceleration between objects. Thus, shooting objects such as the shooting background and the shooting subject serve as a new dimension for adjusting video playback speed, so that the video presents per-object acceleration and deceleration effects, enriching the adjustment modes.
In another embodiment of the present application, the first frame rate is greater than the second frame rate.
The method further comprises the following steps:
step A1: and acquiring a first sub-video with a first target frame number in the first video, wherein the first target frame number is the same as the frame number of the second video, and the video duration of the first sub-video is less than the video duration of the first video.
This embodiment covers the case where the speed-change information indicates that the first object decelerates, that is, the first object's motion is slowed relative to the other objects in the video. The fusion method adopted is: segment the first object out of the high-frame-rate video and fuse it with the objects other than the first object in the low-frame-rate video.
Correspondingly, in this embodiment, the video with the higher frame rate is taken as the first video, and the video with the lower frame rate is taken as the second video.
In the fusion process, the nth frame of the first video is fused with the corresponding frame of the second video. Since the high-frame-rate video has more frames than the low-frame-rate video, some frames of the high-frame-rate video can be discarded during fusion, based on the frame count of the low-frame-rate video. The first sub-video used for fusion in the first video therefore has the same first target frame number as the second video used for fusion.
The discarded partial frames may be continuous or discontinuous, and therefore, the retained first target frame number may be a continuous frame number or a discontinuous frame number.
Correspondingly, the first sub-video is the first video with the partial frames discarded, and the video duration of the first sub-video is smaller than that of the first video.
In further embodiments, the fusion may be performed by synthesizing new frames without discarding frames. For example, a new frame may be synthesized from adjacent frames.
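A minimal sketch of such uniform frame selection, assuming evenly spaced retention (the helper below is illustrative, not from the patent):

    import numpy as np

    def select_matching_frames(high_count: int, low_count: int) -> np.ndarray:
        """Indices of the low_count frames retained from a high_count-frame
        video; evenly spaced, so the retained frames need not be consecutive."""
        return np.linspace(0, high_count - 1, num=low_count).round().astype(int)

    # A 5 s, 60 fps video (300 frames) matched to a 5 s, 30 fps video (150 frames):
    idx = select_matching_frames(300, 150)
    print(len(idx), idx[:5])  # 150 [0 2 4 6 8]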
Step 130 comprises:
substep A2: and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first sub-video and the video of the objects except the first object in the second video.
Alternatively, referring to fig. 4, in the fusion process, a target region where a first object (a person in the figure) is located in a first video (a left video in the figure) may be identified, so as to segment the target region and place the target region in a corresponding region of the first object in a second video (a right video in the figure).
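Per frame, the segmentation-and-placement step can be sketched as a mask-based paste. This is a simplification: in practice the mask would come from a person-segmentation model, and the names here are illustrative:

    import numpy as np

    def fuse_frame(fg_frame: np.ndarray, bg_frame: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
        """Paste the target region of fg_frame (where mask is True, i.e. the
        first object) onto bg_frame and return the fused frame."""
        out = bg_frame.copy()
        out[mask] = fg_frame[mask]
        return out

    # Dummy 4x4 RGB frames with a 2x2 object region:
    fg = np.full((4, 4, 3), 255, dtype=np.uint8)
    bg = np.zeros((4, 4, 3), dtype=np.uint8)
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    print(fuse_frame(fg, bg, mask)[1, 1])  # [255 255 255]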
In this embodiment, a process for presenting a first object deceleration effect is provided.
In another embodiment of the present application, the first frame rate is less than the second frame rate.
The method further comprises the following steps:
step B1: and acquiring a second sub-video with a second target frame number in the second video, wherein the second target frame number is the same as the frame number of the first video, and the video duration of the second sub-video is less than the video duration of the second video.
This embodiment covers the case where the speed-change information indicates that the first object accelerates, that is, the first object's motion is sped up relative to the other objects in the video. The fusion method adopted is: segment the first object out of the low-frame-rate video and fuse it with the objects other than the first object in the high-frame-rate video.
Correspondingly, in the present embodiment, a video with a lower frame rate is used as the first video, and a video with a higher frame rate is used as the second video.
In the fusion process, the nth frame of the first video is fused with the corresponding frame of the second video. The high frame rate video has a larger number of frames than the low frame rate video, and the low frame rate video has a smaller number of frames, so that a part of the frames of the high frame rate video is discarded based on the number of frames of the low frame rate video in the fusion process. And the second sub video used for fusion in the second video has the same second target frame number as the first video used for fusion.
The discarded partial frames may be continuous or discontinuous, and therefore, the retained second target frame number may be a continuous frame number or a discontinuous frame number.
Correspondingly, the second sub-video is the second video with the partial frames discarded, and the video duration of the second sub-video is smaller than that of the second video.
In further embodiments, the fusion may be performed by synthesizing new frames without discarding frames. For example, a new frame may be synthesized from adjacent frames.
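The frame-synthesis alternative can be sketched by blending adjacent frames; this is a crude stand-in, and production systems would typically use motion-compensated interpolation:

    import numpy as np

    def synthesize_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
        """Synthesize a new frame between two adjacent frames by averaging them."""
        mid = (frame_a.astype(np.uint16) + frame_b.astype(np.uint16)) // 2
        return mid.astype(np.uint8)

    a = np.zeros((2, 2, 3), dtype=np.uint8)
    b = np.full((2, 2, 3), 100, dtype=np.uint8)
    print(synthesize_midframe(a, b)[0, 0])  # [50 50 50]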
Step 130 comprises:
substep B2: and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first video and the video of the objects except the first object in the second sub-video.
Alternatively, referring to fig. 4, in the fusion process, a target region where a first object (a person in the figure) is located in a first video (a left video in the figure) may be identified, so as to segment the target region and place the target region in a corresponding region of the first object in a second video (a right video in the figure).
In this embodiment, a process for presenting a first object accelerated special effect is provided.
In another embodiment of the present application, a video can be divided into a plurality of playing time periods, so that the variable speed adjustment can be performed for different time periods.
In an application scene, for example, in a video, a special effect of multi-stage speed change can be realized for the same object, taking two-stage speed change (such as fast first and slow second) as an example, referring to fig. 3, a user clicks a "main body-fast" control 302, then selects a certain person in the video as a first object, and drags a video progress bar to a certain position, that is, the acceleration adjustment of the front part of the video is completed; then, clicking the 'main body-slow speed' control 301, selecting the same person in the video as a first object, and dragging the video progress bar to the end position, namely finishing the speed reduction adjustment of the rear part of the video. In the output video, the special effect that the action speed of a certain character in the first half video is higher than that of the background and the action speed of the character in the second half video is lower than that of the background can be achieved.
For another example, in an application scene, a multi-stage speed-change special effect can be realized for different objects in a video. Taking a two-stage speed change with a different object per stage as an example, referring to fig. 3, the user clicks the "main body-fast" control 302, selects the teacher in the video as the first object, and drags the video progress bar to a certain position, completing the acceleration adjustment of the front part of the video; then the user clicks the "main body-slow" control 301, selects a student in the video as the first object, and drags the video progress bar to the end position, completing the speed-change adjustment of the rear part of the video. In the output video, this achieves the special effect that the teacher's motion is faster than the background in the first half and the student's motion is slower than the background in the second half.
Wherein, in the above-mentioned plurality of application scenes, the background may be composed of all other objects except the first object.
In this embodiment, after the first object in a certain playing time period is subjected to variable speed adjustment, the video duration corresponding to the playing time period is increased or decreased accordingly.
For example, acceleration adjustment is performed on the first object within a time period whose duration is 10s, so that the first object's motion in that period is sped up by a factor of two; the corresponding duration is therefore shortened from 10s to 5s. Optionally, the start time of the period is kept and the end time is moved up. If the period was 00:00-00:10 of the video, after shortening it corresponds to 00:00-00:05.
For another example, deceleration adjustment is performed on the first object within a time period whose duration is 10s, so that the first object's motion in that period is slowed by a factor of two; the corresponding duration is therefore lengthened from 10s to 20s. Alternatively, the start time and end time of the period are kept, and the extended portion may be deleted.
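The duration bookkeeping in these examples reduces to simple arithmetic; the sketch below follows the keep-the-start convention of the first example, and its names are illustrative:

    def adjusted_segment(start_s: float, end_s: float, speed: float):
        """Time span of a playing period after its first object is sped up
        (speed > 1) or slowed down (speed < 1), keeping the start time."""
        new_length = (end_s - start_s) / speed
        return start_s, start_s + new_length

    print(adjusted_segment(0, 10, 2.0))  # (0, 5.0): 2x faster, 10 s -> 5 s
    print(adjusted_segment(0, 10, 0.5))  # (0, 20.0): 2x slower, 10 s -> 20 s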
In further arrangements, the reserved portion may also be customized by the user for splicing with other portions of the video.
In this embodiment, on the basis of performing variable speed adjustment separately for different objects, the dimension of time is also increased to perform variable speed adjustment of the objects in different time periods, so that the video special effect is better and richer.
In another embodiment of the present application, for the same time period of a certain video, different variable speed adjustments can be made for different objects.
For example, referring to fig. 3, in an application scenario the user clicks the "main body-fast" control 302 and then selects the teacher in the video as the first object, completing the teacher's acceleration adjustment; then the user clicks the "main body-slow" control 301 and selects a student in the video as the first object, completing the student's speed-change adjustment; finally, the user drags the video progress bar to a certain position (such as the end position). In the output video, this achieves the special effect that, within the same time period, the teacher is faster than the background while the student is slower than the background.
For example, suppose a low-frame-rate video and a high-frame-rate video have been recorded. First, the teacher in the low-frame-rate video is fused with the background (everything except the teacher) in the high-frame-rate video, and the fused high-frame-rate video is output, completing the teacher's acceleration adjustment. Then the frame rate of the low-frame-rate video is raised above that of the high-frame-rate video, and the student in the frame-rate-adjusted video is fused with the original high-frame-rate video, completing the student's deceleration adjustment.
In the flow of the video processing method according to another embodiment of the present application, before step 130, the method further includes:
step C1: relative position information of the first object and the second object in the second video is obtained.
In this embodiment, first, there may be a difference between the shooting angles of the two cameras; second, after fusion, adjusting the first object's motion speed shifts its position relative to the other objects. For both reasons, the first object's position may drift after the two videos are fused; therefore, to increase fusion accuracy between objects, the first object's position can be adjusted with reference to the relative positions of the objects before fusion.
In an application scenario, for example referring to fig. 3, the user clicks the "background" control 303, so that any object other than the first object can be selected in the displayed video as the second object of this step.
Further, before the fusion, in the second video, relative position information between the first object and the second object is acquired.
Correspondingly, the video fusion of the first object in the first video and the object except the first object in the second video comprises the following steps:
and fusing, according to the relative position information, the video of the first object in the first video with the video of the second object in the second video.
And the second object is at least one of the objects in the second video except the first object.
In the fusion process of this embodiment, the first object in the first video and the second object in the second video may be fused according to the relative position information obtained in the foregoing steps.
Referring to fig. 5, in the second video, any object other than the first object may be the second object.
For example, referring to fig. 5, the person in the video is the first object; the objects are then fused based on the relative positions of the person, the tree, and the sun in the second video.
Fig. 5 is used to represent a frame of image in a video, and in the fusion process, fusion may be performed for each frame in the video.
In this embodiment, to improve fusion accuracy and keep the edited video realistic, fusion is performed with reference to the positional relationships between the objects before fusion, restoring the original picture.
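A sketch of re-anchoring the first object from stored relative position information; the coordinates and names below are illustrative assumptions, not values from the patent:

    import numpy as np

    # Relative position captured in the second video before fusion:
    person = np.array([120, 340])   # first object (x, y)
    tree = np.array([100, 100])     # second (reference) object (x, y)
    offset = person - tree

    # During fusion, place the first object at the reference object's
    # position in the current frame plus the stored offset:
    tree_now = np.array([100, 100])
    print(tree_now + offset)  # [120 340]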
In another embodiment of the present application, the first object includes a third object and a fourth object;
the motion speed of the third object in the third video is a first multiple of the motion speed of the third object in the second video;
the motion speed of the fourth object in the third video is a second multiple of the motion speed of the fourth object in the second video;
wherein the first multiple is different from the second multiple.
In this embodiment, based on two videos with different frame rates, a special effect can be flexibly set in combination with user input.
For example, referring to fig. 3, a user clicks a "main body 3 times speed" control 304 in the interface, and if the frame rates corresponding to the two videos do not have a 3-time relationship, for example, 60fps and 30fps respectively, the frame rate of one video may be adjusted, so that the frame rates between the two videos satisfy the 3-time relationship. Further, based on the two adjusted videos, the fusion between the objects is entered.
Based on this, in the present embodiment, the third object and the fourth object may respectively represent one object. Therefore, based on the user input, an object (i.e., the third object) may be presented in the output third video within the same time period, and the moving speed is increased or decreased by a first multiple compared to the video background; another object (i.e., the fourth object) moves faster or slower by a second multiple compared to the video background. Namely, the running speeds of the two objects can present different times of changing effects so as to enrich the special effect of the video.
For example, referring to fig. 3, in an application scenario the user clicks the "main body 3x speed" control 304 and the "main body-fast" control 302, then selects the teacher in the video as the first object, completing the teacher's acceleration adjustment; then the user clicks the "main body 2x speed" control 305 and the "main body-fast" control 302, and selects a student in the video as the first object, completing the student's speed-change adjustment. In the output video, this achieves the special effect that, within the same time period, the teacher is faster than the background by a factor of 3 while the student is faster than the background by a factor of 2.
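The frame-rate adjustment needed for a chosen multiple is straightforward arithmetic; the sketch below assumes one stream is resampled to satisfy the ratio, and its names are illustrative:

    def required_fps(reference_fps: float, target_multiple: float) -> float:
        """Frame rate one stream must be resampled to so that the two streams
        stand in the user-selected speed ratio (e.g. the "subject 3x" control)."""
        return reference_fps * target_multiple

    # Streams recorded at 60 fps and 30 fps have only a 2x relation; for a
    # 3x effect, resample the faster stream to 3 * 30 = 90 fps:
    print(required_fps(30, 3))  # 90.0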
In the flow of the video processing method according to another embodiment of the present application, before step 110, the method further includes:
step D1: a frame rate selection control is displayed.
Application scenario for example, referring to FIG. 6, a user opens the "Camera" application, enters "variable video" mode, and displays a first camera (Camera 1) control 601 and a second camera (Camera 2) control 602.
Step D2: a second input to the frame rate selection control from the user is received.
The second input includes, but is not limited to, touch input performed by the user on the screen, such as clicking, sliding, or dragging; the second input may also be a contactless (hover) input, a gesture action, or a facial action; and the second input further includes, but is not limited to, input such as pressing a physical key on the device. The second input includes one or more inputs, where multiple inputs may be continuous or intermittent.
In this step, the second input is used to set the first frame rate and the second frame rate.
For example, referring to fig. 6, in an application scenario, a user clicks a camera 1 control 601, a frame rate setting window is displayed, and the user clicks a "60 fps" sub-control 603; the user clicks the camera 2 control 602, the frame rate setting window is displayed, and the user clicks the "30 fps" sub-control 604.
Optionally, the screen of the electronic device displays the currently set frame rates.
Step D3: acquiring a first video at a first frame rate in response to a second input; and acquiring a second video at a second frame rate.
For example, referring to fig. 6, in an application scenario, a user clicks a "video recording" control 605 to start video recording, and a first camera collects a first video at a first frame rate; and the second camera collects a second video at a second frame rate.
Optionally, after the shooting is started, the displayed prompt text may be hidden to avoid obscuring the preview.
The first frame rate is a target multiple of the second frame rate, so that the acceleration adjustment or the deceleration adjustment can be realized for the action speed of the first object.
In this embodiment, the first frame rate and the second frame rate may be preset by a user, so that on the basis of the videos with the corresponding frame rates being respectively collected, based on a relationship between the first frame rate and the second frame rate, a speed adjustment is performed for a certain object, thereby enriching a video special effect.
In the flow of the video processing method according to another embodiment of the present application, before step 130, the method further includes:
step E1: and adjusting the video light source information of the first object in the third video according to the video light source information of the objects except the first object in the second video.
In this step, the video light-source information reflects the light-source distribution, brightness changes, and the like of the shooting scene as presented in the video background. The second video therefore serves as the video background, and the video light-source information of the fused first object is adjusted according to the video light-source information of the second video, so that the first object blends naturally with the video background instead of looking pasted on.
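A crude stand-in for this light-source adjustment is to match the fused object's mean intensity to the background's; this is a sketch under that assumption, not the patent's method:

    import numpy as np

    def match_brightness(obj_pixels: np.ndarray, bg_pixels: np.ndarray) -> np.ndarray:
        """Scale the fused first object's pixels so their mean intensity
        matches the background region's mean intensity."""
        gain = bg_pixels.mean() / max(obj_pixels.mean(), 1e-6)
        scaled = obj_pixels.astype(np.float32) * gain
        return np.clip(scaled, 0, 255).astype(np.uint8)

    obj = np.full((2, 2, 3), 200, dtype=np.uint8)
    bg = np.full((8, 8, 3), 100, dtype=np.uint8)
    print(match_brightness(obj, bg)[0, 0])  # [100 100 100]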
Step E2: and adjusting the position information of the layer where the first object is located in the third video according to the position relation between the layer where the first object is located in the second video and the layer where the object except the first object is located.
In this step, the occlusion relationships between objects, which follow from the positional distribution of the objects in the shooting scene, are presented in the video. These occlusion relationships can be embodied by the positional relationships between the layers on which the objects lie. Therefore, taking the layer positions of the objects in the second video as reference, the position information of the layer on which the fused first object lies can be adjusted so that the first object combines naturally with the other objects.
Optionally, the position information of a layer is, for example, its layer number.
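Layer ordering then determines occlusion: drawing layers in ascending layer-number order keeps higher-numbered layers in front. An illustrative sketch, with the layer representation assumed by us:

    import numpy as np

    def composite_layers(canvas: np.ndarray, layers) -> np.ndarray:
        """Paste object layers onto canvas in ascending layer-number order so
        that a higher layer number occludes lower ones, mirroring the
        occlusion relationships observed in the second video. Each layer is
        a dict with a "number", an RGB "frame", and a boolean "mask"."""
        out = canvas.copy()
        for layer in sorted(layers, key=lambda l: l["number"]):
            out[layer["mask"]] = layer["frame"][layer["mask"]]
        return out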
In the embodiment, after the objects are fused and before the video is output, in combination with the actual shooting scene, some environment-related factors related to the first object are subjected to fine adjustment, so that the output video is more real, and the high quality of the video is ensured on the basis of meeting the diversified video special effects.
In another embodiment of the present application, on the premise that the parallax distance between the multiple groups of cameras is small, it may be considered to use a larger number of cameras (for example, three groups of cameras), and the multiple groups of cameras respectively acquire videos at different frame rates, so that the application of the special speed change effect is more flexible.
For example, suppose a low-frame-rate video, a medium-frame-rate video, and a high-frame-rate video are recorded. First, the teacher in the low-frame-rate video is fused with the medium-frame-rate video, and the fused medium-frame-rate video is output, completing the teacher's acceleration adjustment; then the student in the high-frame-rate video is fused with the medium-frame-rate video, and the fused medium-frame-rate video is output again, completing the student's deceleration adjustment.
In summary, in the embodiment of the present application, two videos recorded at different frame rates by two cameras are used, so that object fusion between the two videos can be achieved, thereby achieving a special speed-changing effect of the same main body/different main bodies and the same time period/different time periods, and increasing playability of video post-processing.
In the video processing method provided by the embodiments of the present application, the execution subject may be a video processing apparatus. In the embodiments of the present application, a video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided herein.
Fig. 7 shows a block diagram of a video processing apparatus according to another embodiment of the present application, the apparatus including:
a first obtaining module 10, configured to obtain a first video acquired by a first camera at a first frame rate, and a second video acquired by a second camera at a second frame rate, where the first frame rate is different from the second frame rate;
a first receiving module 20, configured to receive a first input of a user;
and the output module 30 is configured to output, in response to the first input, a third video, where the third video is obtained by fusing a video of the first object in the first video and a video of an object other than the first object in the second video.
In the embodiments of the present application, two cameras capture images at different frame rates, so a first video captured by the first camera and a second video captured by the second camera are obtained separately. Further, a first object is determined in the first video based on a first input from the user, and the same input determines whether the first object should be given a speed-up or a slow-down special effect. Then, according to the frame-rate relationship between the two videos, the first object in one video (serving as the first video) is fused with the other objects in the other video (serving as the second video), and the finally processed third video is output. In the output third video, objects captured at different frame rates are fused together, producing relative acceleration and deceleration between objects. Thus, shooting objects such as the shooting background and the shooting subject serve as a new dimension for adjusting video playback speed, so that the video presents per-object acceleration and deceleration effects, enriching the adjustment modes.
Optionally, the first frame rate is greater than the second frame rate, and the apparatus further includes:
the second acquisition module is used for acquiring a first sub-video with a first target frame number in the first video, wherein the first target frame number is the same as the frame number of the second video, and the video time length of the first sub-video is less than the video time length of the first video;
the output module 30 includes:
and the first output subunit is used for outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first sub-video and the video of the object except the first object in the second video.
Optionally, the first frame rate is less than the second frame rate, and the apparatus further includes:
the third acquisition module is used for acquiring a second sub-video with a second target frame number in the second video, wherein the second target frame number is the same as the frame number of the first video, and the video duration of the second sub-video is less than the video duration of the second video;
the output module 30 includes:
and the second output subunit is used for outputting a third video, where the third video is obtained by fusing the video of the first object in the first video with the video of the objects other than the first object in the second sub-video.
Optionally, the apparatus further comprises:
the fourth acquisition module is used for acquiring the relative position information of the first object and the second object in the second video;
the fusion module is used for fusing, according to the relative position information, the video of the first object in the first video with the video of the second object in the second video;
and the second object is at least one of the objects in the second video except the first object.
Optionally, the first object comprises a third object and a fourth object;
the motion speed of the third object in the third video is a first multiple of the motion speed of the third object in the second video;
the motion speed of the fourth object in the third video is a second multiple of the motion speed of the fourth object in the second video;
wherein the first multiple is different from the second multiple.
Optionally, the apparatus further comprises:
the display module is used for displaying the frame rate selection control;
the second receiving module is used for receiving a second input of the frame rate selection control by the user;
the acquisition module is used for responding to a second input and acquiring a first video at a first frame rate; and acquiring a second video at a second frame rate.
Optionally, the apparatus further comprises:
the first adjusting module is used for adjusting the video light source information of the first object in the third video according to the video light source information of the objects except the first object in the second video;
and the second adjusting module is used for adjusting the position information of the layer where the first object is located in the third video according to the position relation between the layer where the first object is located in the second video and the layer where the object except the first object is located.
The video processing apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this respect.
The video processing apparatus according to the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The video processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 100 is further provided in this embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and executable on the processor 101, where the program or the instruction is executed by the processor 101 to implement each step of any one of the above embodiments of the video processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device according to the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1010 is configured to acquire a first video acquired by a first camera at a first frame rate, and a second video acquired by a second camera at a second frame rate, where the first frame rate is different from the second frame rate; controlling the user input unit 1007 to receive a first input of a user; and responding to the first input, and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first video and the video of the object except the first object in the second video.
In the embodiments of the present application, two cameras capture images at different frame rates, so a first video captured by the first camera and a second video captured by the second camera are obtained separately. Further, a first object is determined in the first video based on a first input from the user, and the same input determines whether the first object should be given a speed-up or a slow-down special effect. Then, according to the frame-rate relationship between the two videos, the first object in one video (serving as the first video) is fused with the other objects in the other video (serving as the second video), and the finally processed third video is output. In the output third video, objects captured at different frame rates are fused together, producing relative acceleration and deceleration between objects. Thus, shooting objects such as the shooting background and the shooting subject serve as a new dimension for adjusting video playback speed, so that the video presents per-object acceleration and deceleration effects, enriching the adjustment modes.
Optionally, the first frame rate is greater than the second frame rate, and the processor 1010 is further configured to obtain a first sub-video with a first target frame number in the first video, where the first target frame number is the same as a frame number of the second video, and a video duration of the first sub-video is less than a video duration of the first video; and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first sub-video and the video of the object except the first object in the second video.
Optionally, the first frame rate is less than the second frame rate, and the processor 1010 is further configured to obtain a second sub-video with a second target frame number in the second video, where the second target frame number is the same as the frame number of the first video, and a video duration of the second sub-video is less than a video duration of the second video; and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first video and the video of the object except the first object in the second sub-video.
Optionally, the processor 1010 is further configured to obtain relative position information of the first object and the second object in the second video; and fuse, according to the relative position information, the video of the first object in the first video with the video of the second object in the second video; where the second object is at least one of the objects other than the first object in the second video.
Optionally, the first object comprises a third object and a fourth object; the motion speed of the third object in the third video is a first multiple of the motion speed of the third object in the second video; the motion speed of the fourth object in the third video is a second multiple of the motion speed of the fourth object in the second video; wherein the first multiple is different from the second multiple.
Optionally, the processor 1010 is further configured to control the display unit 1006 to display a frame rate selection control; control the user input unit 1007 to receive a second input on the frame rate selection control from the user; and, in response to the second input, acquire the first video at the first frame rate and the second video at the second frame rate.
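A sketch of this selection flow, with an entirely hypothetical `start_capture` callback and illustrative frame rate options (the patent does not enumerate supported rates):

```python
FRAME_RATE_OPTIONS = (24, 30, 60, 120)  # illustrative values, not from the patent

def on_frame_rate_selected(first_rate, second_rate, start_capture):
    """Handle the second input on the frame rate selection control by
    starting both cameras at the chosen rates (start_capture is hypothetical)."""
    if first_rate not in FRAME_RATE_OPTIONS or second_rate not in FRAME_RATE_OPTIONS:
        raise ValueError("unsupported frame rate")
    if first_rate == second_rate:
        raise ValueError("the two frame rates must differ for a relative speed effect")
    start_capture(camera_id=0, fps=first_rate)   # first camera  -> first video
    start_capture(camera_id=1, fps=second_rate)  # second camera -> second video
```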
Optionally, the processor 1010 is further configured to adjust the video light source information of the first object in the third video according to the video light source information of the objects other than the first object in the second video; and to adjust the position information of the layer where the first object is located in the third video according to the positional relationship between the layer where the first object is located and the layers where the objects other than the first object are located in the second video.
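The patent does not specify the adjustment algorithms; as a deliberately crude stand-in, the sketch below matches the object layer's mean brightness to the background and composes layers in a given z-order.

```python
import numpy as np

def match_brightness(object_rgb, reference_rgb):
    """Scale the object layer so its mean luminance matches the reference
    (background) frame's -- a crude stand-in for light source adjustment."""
    gain = float(reference_rgb.mean()) / max(float(object_rgb.mean()), 1e-6)
    return np.clip(object_rgb.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def stack_layers(layers_bottom_to_top, compose):
    """Compose layers in the z-order taken from the second video, so the fused
    first object occludes, or is occluded by, the correct scene layers."""
    frame = layers_bottom_to_top[0]
    for layer in layers_bottom_to_top[1:]:
        frame = compose(frame, layer)
    return frame
```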
In summary, in the embodiment of the present application, by using two videos recorded at different frame rates, objects from the two videos can be fused, achieving variable-speed effects for the same or different subjects over the same or different time periods and increasing the flexibility of video post-processing.
It should be understood that, in the embodiment of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042; the graphics processing unit 10041 processes image data of still pictures or video images obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the application programs or instructions (such as a sound playing function and an image playing function) required for at least one function. Further, the memory 1009 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiments of the present application further provide a readable storage medium, where a program or an instruction is stored on the readable storage medium; when executed by a processor, the program or instruction implements each process of the above video processing method embodiment and achieves the same technical effects, which are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video processing method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium and is executed by at least one processor to implement the processes of the foregoing video processing method embodiments and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; functions may also be performed substantially simultaneously or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, or by hardware alone, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a computer software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of video processing, the method comprising:
acquiring a first video captured by a first camera at a first frame rate, and acquiring a second video captured by a second camera at a second frame rate, wherein the first frame rate is different from the second frame rate;
receiving a first input of a user;
and outputting a third video in response to the first input, wherein the third video is obtained by fusing the video of a first object in the first video with the video of the objects other than the first object in the second video.
2. The method of claim 1, wherein the first frame rate is greater than the second frame rate, the method further comprising:
acquiring, from the first video, a first sub-video having a first target number of frames, wherein the first target number of frames is the same as the number of frames of the second video, and the video duration of the first sub-video is shorter than the video duration of the first video;
the outputting a third video, the third video being obtained by video fusion of a video of a first object in the first video and video of objects other than the first object in the second video, includes:
and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first sub-video and the video of the object except the first object in the second video.
3. The method of claim 1, wherein the first frame rate is less than the second frame rate, the method further comprising:
acquiring, from the second video, a second sub-video having a second target number of frames, wherein the second target number of frames is the same as the number of frames of the first video, and the video duration of the second sub-video is shorter than the video duration of the second video;
the outputting a third video, where the third video obtained by video fusion of a first object in the first video and video of an object other than the first object in the second video includes:
and outputting a third video, wherein the third video is obtained by fusing the video of the first object in the first video and the video of the object except the first object in the second sub-video.
4. The method of claim 1, wherein prior to outputting the third video, the method further comprises:
acquiring relative position information of the first object and a second object in the second video;
the video fusion of the video of the first object in the first video and the video of the objects other than the first object in the second video comprises:
fusing, according to the relative position information, the video of the first object in the first video with the video of the second object in the second video;
wherein the second object is at least one of the objects in the second video other than the first object.
5. The method of claim 1, wherein the first object comprises a third object and a fourth object;
the motion speed of the third object in the third video is a first multiple of the motion speed of the third object in the second video;
the motion speed of the fourth object in the third video is a second multiple of the motion speed of the fourth object in the second video;
wherein the first multiple is different from the second multiple.
6. The method of claim 1, wherein prior to the obtaining the first video acquired by the first camera at the first frame rate, the method further comprises:
displaying a frame rate selection control;
receiving a second input on the frame rate selection control from a user;
in response to the second input, acquiring the first video at the first frame rate and acquiring the second video at the second frame rate.
7. The method of claim 1, wherein before outputting the third video, the method further comprises at least one of:
adjusting the video light source information of the first object in the third video according to the video light source information of the objects other than the first object in the second video;
and adjusting the position information of the layer where the first object is located in the third video according to the positional relationship between the layer where the first object is located and the layers where the objects other than the first object are located in the second video.
8. A video processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a first video captured by a first camera at a first frame rate and a second video captured by a second camera at a second frame rate, wherein the first frame rate is different from the second frame rate;
the first receiving module is used for receiving a first input of a user;
and the output module is used for outputting a third video in response to the first input, wherein the third video is obtained by fusing the video of a first object in the first video with the video of the objects other than the first object in the second video.
9. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the video processing method according to any of claims 1-7.
10. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video processing method according to any one of claims 1 to 7.
CN202210111752.XA 2022-01-29 2022-01-29 Video processing method, video processing device, electronic equipment and medium Pending CN114466232A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210111752.XA CN114466232A (en) 2022-01-29 2022-01-29 Video processing method, video processing device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210111752.XA CN114466232A (en) 2022-01-29 2022-01-29 Video processing method, video processing device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN114466232A true CN114466232A (en) 2022-05-10

Family

ID=81412364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210111752.XA Pending CN114466232A (en) 2022-01-29 2022-01-29 Video processing method, video processing device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114466232A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847636A (en) * 2016-06-08 2016-08-10 维沃移动通信有限公司 Video recording method and mobile terminal
CN111526314A (en) * 2020-04-24 2020-08-11 华为技术有限公司 Video shooting method and electronic equipment
CN113873319A (en) * 2021-09-27 2021-12-31 维沃移动通信有限公司 Video processing method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114827477A (en) * 2022-05-26 2022-07-29 维沃移动通信有限公司 Time-lapse shooting method and device, electronic equipment and medium
CN114827477B (en) * 2022-05-26 2024-03-29 维沃移动通信有限公司 Method, device, electronic equipment and medium for time-lapse photography
WO2024041006A1 (en) * 2022-08-26 2024-02-29 荣耀终端有限公司 Method for controlling frame rate of camera, and electronic device

Similar Documents

Publication Publication Date Title
KR20200128132A (en) Video production method and apparatus, computer device and storage medium
CN109275028B (en) Video acquisition method, device, terminal and medium
CN112565611B (en) Video recording method, video recording device, electronic equipment and medium
CN114466232A (en) Video processing method, video processing device, electronic equipment and medium
WO2023151611A1 (en) Video recording method and apparatus, and electronic device
CN112565868B (en) Video playing method and device and electronic equipment
WO2023151609A1 (en) Time-lapse photography video recording method and apparatus, and electronic device
CN113794829B (en) Shooting method and device and electronic equipment
CN112839190B (en) Method for synchronously recording or live broadcasting virtual image and real scene
CN113259743A (en) Video playing method and device and electronic equipment
WO2023134583A1 (en) Video recording method and apparatus, and electronic device
CN112954209B (en) Photographing method and device, electronic equipment and medium
CN111866379A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN114025237A (en) Video generation method and device and electronic equipment
CN111757177B (en) Video clipping method and device
CN115049574A (en) Video processing method and device, electronic equipment and readable storage medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN113438412A (en) Image processing method and electronic device
CN114390205B (en) Shooting method and device and electronic equipment
CN114827477B (en) Method, device, electronic equipment and medium for time-lapse photography
CN114520874B (en) Video processing method and device and electronic equipment
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114173178A (en) Video playing method, video playing device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination