CN113810624A - Video generation method and device and electronic equipment - Google Patents

Video generation method and device and electronic equipment

Info

Publication number
CN113810624A
CN113810624A
Authority
CN
China
Prior art keywords
video
video frame
target
video frames
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111101606.0A
Other languages
Chinese (zh)
Inventor
郭越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111101606.0A
Publication of CN113810624A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 - Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The application discloses a video generation method and device and electronic equipment, and belongs to the technical field of computers. The method comprises the following steps: responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video; responding to a second input of the user to the second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video; under the condition that the video synthesis mode is a fusion mode, determining a first video frame and a second video frame which have time corresponding relation, and fusing the first video frame and the second video frame to generate a target video frame; combining the target video frame and the first video frame and/or the second video frame which do not have the time corresponding relation to generate a target video; or, when the video synthesis mode is the insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames to generate the target video.

Description

Video generation method and device and electronic equipment
Technical Field
The application belongs to the technical field of computers, and particularly relates to a video generation method and device and electronic equipment.
Background
With the continuous improvement of the performance of intelligent devices, users have more and more requirements for processing videos, such as subsequent rendering of a video, clipping of a captured video, and so on.
In the prior art, video processing is mostly performed on a single video, and fusion of two videos is rarely involved. For a usage scenario in which a user needs to fuse two videos, the prior art does not provide an effective fusion processing method.
Disclosure of Invention
The embodiments of the present application aim to provide a video generation method, a video generation apparatus and an electronic device, which can solve the problem that the prior art does not provide an effective video fusion processing method.
In a first aspect, an embodiment of the present application provides a method for video generation, where the method includes:
responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video, wherein N is a positive integer;
responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer;
under the condition that the video synthesis mode is a fusion mode, determining the first video frame and the second video frame with time correspondence in the N first video frames and the M second video frames, and fusing the first video frame and the second video frame with time correspondence to generate a target video frame; combining the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
under the condition that the video synthesis mode is an insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames, and combining the N first video frames and the M second video frames based on the arrangement sequence to generate a target video.
In a second aspect, an embodiment of the present application provides an apparatus for video generation, where the apparatus includes:
the first extraction module is used for responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video, wherein N is a positive integer;
the second extraction module is used for responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer;
a first processing module, configured to determine, in a case that the video synthesis mode is a fusion mode, a first video frame and a second video frame having a time correspondence relationship among the N first video frames and the M second video frames, and fuse the first video frame and the second video frame having the time correspondence relationship to generate a target video frame; and combine the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
a second processing module, configured to determine the arrangement sequence of the N first video frames and the M second video frames under the condition that the video synthesis mode is the insertion mode, and combine the N first video frames and the M second video frames based on the arrangement sequence to generate the target video.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the user extracts a first video frame and a second video frame from a first video and a second video respectively; then, under the condition that the video synthesis mode is a fusion mode, the first video frame and the second video frame having a time correspondence are fused to generate a target video frame, and the target video frame is combined with the first video frame and/or the second video frame having no time correspondence to generate a target video; or, under the condition that the video synthesis mode is the insertion mode, the arrangement sequence of the N first video frames and the M second video frames is determined and the frames are combined to obtain the target video, thereby realizing the technical scheme of generating the target video in two video synthesis modes.
Drawings
Fig. 1 is one of the flowcharts of a video generation method according to an embodiment of the present application;
Fig. 2 is a second flowchart of a video generation method according to an embodiment of the present application;
Fig. 3 is a schematic interface diagram of selecting a first target video frame in the method according to an embodiment of the present application;
Fig. 4 is a schematic interface diagram of generating time bars corresponding to first video frames in the method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an operation of selecting a second target video frame in the method according to an embodiment of the present application;
Fig. 6 is a schematic interface diagram of generating time bars corresponding to second video frames in the method according to an embodiment of the present application;
Fig. 7 is one of the schematic interface diagrams of determining the correspondence between first video frames and second video frames in the method according to an embodiment of the present application;
Fig. 8 is a second schematic interface diagram of determining the correspondence between first video frames and second video frames in the method according to an embodiment of the present application;
Fig. 9 is a third schematic interface diagram of determining the correspondence between first video frames and second video frames in the method according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a video generation apparatus according to an embodiment of the present application;
Fig. 11 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application;
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the terms so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein; moreover, the terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects, for example, the first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The method, the apparatus and the electronic device for video generation provided by the embodiments of the present application are described in detail with reference to the accompanying drawings.
The embodiment of the application discloses a method for generating a video, which is shown in fig. 1 and comprises the following steps:
step 101, responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video, wherein N is a positive integer.
The first screening target may be of various kinds, such as a person, a building, and the like.
The first input may also be of various kinds, such as a click operation, a slide operation, a drag operation, and the like, performed on the first video.
In a specific usage scenario, the method for determining the first screening target in step 101 includes: responding to the clicking operation of a user on the first video in the playing state, determining a first target video frame corresponding to the clicking operation, and then identifying a first screening target in the first target video frame.
In another usage scenario, the method for determining the first screening target in step 101 includes: in response to a drag operation performed by the user on the progress bar of the first video, displaying the corresponding video frames; when a video frame including the first screening target is displayed, determining, in response to a double-click operation of the user, the first target video frame corresponding to the double-click operation, and then identifying the first screening target in the first target video frame.
Further, in a case where the first target video frame includes a plurality of first targets, the first filtering target in the first target video frame is determined in response to a click operation performed by the user on a first target, so that the first filtering target can be determined more accurately.
Specifically, taking a person as the first screening target as an example, the device can identify the feature points in the first target video frame on its own and then identify the first screening target. In the case that the video frame includes a plurality of people, the final first screening target may be further determined by receiving a click operation of the user.
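As a purely illustrative sketch of this target-identification step, the following Python snippet assumes an OpenCV HOG pedestrian detector and a tap coordinate reported by the touch screen; the function and parameter names are hypothetical, and the recognition algorithm actually used by the device may differ.

```python
import cv2

def pick_screening_target(frame, tap_xy):
    """Detect candidate persons in the tapped video frame and, when several
    candidates are found, keep the one whose bounding box contains the tap."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return None                       # no candidate target in this frame
    if len(boxes) == 1:
        return tuple(boxes[0])            # a single candidate needs no further input
    x, y = tap_xy                         # several candidates: use the user's tap
    for (bx, by, bw, bh) in boxes:
        if bx <= x <= bx + bw and by <= y <= by + bh:
            return (bx, by, bw, bh)
    return tuple(boxes[0])                # fall back to the first detection
```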
After the first screening target is determined, N first video frames containing the first screening target in the first video are extracted, and the corresponding first extracted video can be further generated according to the N first video frames.
For example, video segments A, B and C and a single first video frame D that contain the first screening target are extracted from the first video, wherein each of the video segments A, B and C includes a plurality of first video frames; the video segments A, B and C and the first video frame D are spliced in temporal order to generate the first extracted video.
Through step 101, the extraction of a first video frame containing a first screening target in a first video is realized.
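The extraction and splicing described above can be sketched as follows. This is a simplified illustration that assumes the frames are already decoded and that a predicate contains_target() (for example, the detector sketched above) is available; frame-accurate video encoding and decoding are omitted.

```python
def extract_target_frames(frames, contains_target):
    """Keep only the frames that contain the screening target, preserving
    their original temporal order, and remember their original positions."""
    kept = [(i, f) for i, f in enumerate(frames) if contains_target(f)]
    indices = [i for i, _ in kept]    # original positions, useful for later alignment
    extracted = [f for _, f in kept]  # spliced together, this is the "extracted video"
    return indices, extracted
```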
Step 102, responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer.
Wherein, the second screening target can be various, such as buildings, scenery, etc.
The second input includes a variety of kinds, such as a click operation, a slide operation, a drag operation, and the like, on the second video.
In a specific usage scenario, the method for determining a second filtering target in response to a second input of a second video from a user in step 102 includes: and responding to the clicking operation of the user on the second video in the playing state, determining a second target video frame corresponding to the clicking operation, and then identifying a second screening target in the second target video frame.
Further, in a case where the second target video frame includes a plurality of second targets, the second filtering target in the second target video frame is determined in response to a click operation performed by the user on a second target, so that the second filtering target can be determined more accurately.
Specifically, taking a building as the second screening target as an example, the device can identify the feature points in the second target video frame on its own and then identify the second screening target. In the case that the video frame includes a plurality of buildings, the final second screening target may be further determined by receiving a click operation of the user.
After the second screening target is determined, extracting M second video frames containing the second screening target in the second video, and further generating a corresponding second extracted video according to the M second video frames.
For example, when the second video frames E11 to E20 and E45 to E88 containing the second screening target are extracted from the second video, the second video frames E11 to E20 and E45 to E88 are spliced in temporal order to generate the second extracted video.
Through step 102, the extraction of the second video frame containing the second screening target in the second video is realized.
Step 103, under the condition that the video synthesis mode is a fusion mode, determining the first video frame and the second video frame with time correspondence in the N first video frames and the M second video frames, and fusing the first video frame and the second video frame with time correspondence to generate a target video frame; and combining the target video frame and the first video frame and/or the second video frame which do not have the time corresponding relation in the N first video frames and the M second video frames to generate a target video.
Specifically, determining the first video frame and the second video frame having the time correspondence in step 103 covers three cases, in which the number N of first video frames is greater than, equal to, or less than the number M of second video frames.
To enable adjusting the correspondence of the first extracted video and the second extracted video, before determining the first video frame and the second video frame having a temporal correspondence, the method further comprises: generating and displaying time bars corresponding to the N first video frames and time bars corresponding to the M second video frames;
and determining the first video frames and the second video frames with the time corresponding relation according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames.
In order to visually distinguish the time bar corresponding to the first video frame from the time bar corresponding to the second video frame, the time bar corresponding to the first video frame may be displayed as a thick time bar, the time bar corresponding to the second video frame may be displayed as a thin time bar, and the thick time bar and the thin time bar are vertically arranged, so that comparison is facilitated.
Specifically, the main video frame and the background video frame need to be confirmed before video generation is performed. In this embodiment, the first video frame is used as a main video frame, and the second video frame is used as a background video frame.
Correspondingly, determining the first video frame and the second video frame having a time correspondence relationship among the N first video frames and the M second video frames in step 103 includes the following cases:
Case 1: if N is equal to M, the start position of the time bars corresponding to the first video frames is aligned with the start position of the time bars corresponding to the second video frames, and the first video frames and the second video frames having the time correspondence are determined according to the aligned time bars of the N first video frames and the M second video frames.
Case 2: if N is less than M, in response to a drag operation performed by the user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, the initial alignment position of the two sets of time bars is determined, and the first video frames and the second video frames having the time correspondence are determined according to the aligned time bars of the N first video frames and the M second video frames.
It should be noted that, when determining the initial alignment position, it should be ensured that the time bars corresponding to the N first video frames fall within the range of the time bars corresponding to the M second video frames, so as to avoid a situation in which the time bars corresponding to the first video frames exceed the time bars corresponding to the second video frames.
Case 3: if N is greater than M, in response to a drag operation performed by the user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, the initial alignment position of the two sets of time bars is determined, the portion of the time bars corresponding to the N first video frames that exceeds the time bars corresponding to the M second video frames is deleted, and the first video frames and the second video frames having the time correspondence are determined according to the remaining time bars of the first video frames and the time bars of the M second video frames.
It should be noted that, in the process of generating the video, it is necessary to ensure that the number of frames of the main video is less than or equal to the number of frames of the background video, and therefore, in case 3, a portion of the time bar corresponding to the N first video frames exceeding the time bar corresponding to the M second video frames is deleted, so as to ensure that the time bar corresponding to the first video frame is located within the range of the time bar corresponding to the second video frame.
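For illustration only, the alignment of the three cases can be sketched as follows in Python. Here offset stands for the start position chosen by the user's drag operation (0 when N equals M); consistent with this embodiment, the first video frames are treated as main frames and the second video frames as background frames, and the function name is hypothetical.

```python
def align_frames(first_frames, second_frames, offset=0):
    """Pair each main (first) video frame with the background (second) video
    frame it corresponds to in time, starting at the dragged offset.  Main
    frames that would extend past the background time bar are dropped, so the
    number of paired main frames never exceeds the number of background frames."""
    m = len(second_frames)
    offset = max(0, min(offset, m))        # keep the start inside the background time bar
    pairs = []
    for i, main in enumerate(first_frames):
        j = offset + i
        if j >= m:                         # would exceed the background time bar: delete
            break
        pairs.append((main, second_frames[j]))
    return pairs                           # the frames that have a time correspondence
```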
After the time correspondence between the first video frame and the second video frame is determined, the first extracted video and the second extracted video need to be fused. Specifically, fusing the first video frame and the second video frame having a time correspondence in step 103 to generate a target video frame includes the following steps S131 to S133:
Step S131, cutting the first video frame to obtain the first screening target.
Step S132, fusing the first screening target into the second video frame corresponding to the first video frame to obtain a target video frame.
It should be noted that, in the fusion process, the first filtering target is placed at a suitable position in the second video frame; for example, the position of the first filtering target in the second video frame is the same as its position in the first video frame, or the first filtering target may be placed at a position that is staggered from, and does not overlap with, the second filtering target in the second video frame.
Step S133, obtaining the fused video based on the fused video frames.
The fused video comprises fused video frames, and also can comprise unfused first video frames and/or second video frames.
For example, the first extracted video includes first video frames A1 to A100, and the second extracted video includes second video frames B1 to B200; the first video frames A1 to A100 are fused with the second video frames B51 to B150 respectively to obtain fused video frames C1 to C100, and the fused video is then obtained from the second video frames B1 to B50, the fused frames C1 to C100, and the second video frames B151 to B200.
Through steps S131 to S133, a fused video frame can be obtained according to the first video frame and the second video frame arranged in sequence, and then a fused video is obtained.
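Steps S131 to S133 can be illustrated with the following sketch, assuming that the video frames are NumPy arrays (for example decoded with OpenCV), that the screening target is given as a rectangular box (x, y, w, h) lying inside both frames, and that it is pasted at the same position in the background frame; a real implementation would typically cut along a segmentation mask rather than a rectangle.

```python
def fuse_frames(first_frame, second_frame, target_box):
    """Cut the screening target out of the main frame (step S131) and paste it
    into the corresponding background frame at the same position (step S132)."""
    x, y, w, h = target_box
    fused = second_frame.copy()
    fused[y:y + h, x:x + w] = first_frame[y:y + h, x:x + w]
    return fused

def fuse_videos(pairs, target_box):
    """Step S133: build the fused video frames from the aligned frame pairs."""
    return [fuse_frames(a, b, target_box) for a, b in pairs]
```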
According to the video generation method disclosed by the embodiment of the application, the user extracts the first video and the second video respectively to obtain the first extracted video and the second extracted video, and then the corresponding relation between the first extracted video and the second extracted video is determined, so that the first extracted video and the second extracted video are fused to obtain the fused video, and the use scene of video generation is expanded.
Further, after the correspondence between the first extracted video and the second extracted video is determined, the order of the first video frames in the first extracted video and of the second video frames in the second extracted video may also be changed, and the method further includes: in response to a third input of the user, adjusting the arrangement order of the first video frames in the first extracted video or the arrangement order of the second video frames in the second extracted video, so as to obtain fused videos of different styles.
There are various ways to adjust the arrangement order of the first video frame and the second video frame, such as reverse order, random ordering, and so on.
The third input may be a click operation on an adjustment menu displayed on the page, a drag operation on a plurality of first video frames or second video frames displayed on the page, or the like.
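As a small illustration of this adjustment, the following sketch reorders an extracted frame list in reverse or random order; the mode names are hypothetical.

```python
import random

def reorder_frames(frames, mode="reverse"):
    """Adjust the arrangement order of the extracted video frames."""
    if mode == "reverse":
        return list(reversed(frames))
    if mode == "random":
        shuffled = frames[:]          # shuffle a copy, keep the original list intact
        random.shuffle(shuffled)
        return shuffled
    return frames                     # unknown mode: keep the original order
```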
Further, prior to responding to the first input of the first video by the user, the method further comprises: determining a degree of blurring of the first video in response to a fourth input by a user;
after extracting the first video frame, the method further comprises: and performing blurring processing on the first video frame based on the blurring degree of the first video.
Additionally, prior to responding to a second input of a second video by the user, the method further comprises:
determining a degree of blurring of the second video in response to a fifth input by a user;
after extracting the second video frame, the method further comprises: and performing blurring processing on the second video frame based on the blurring degree of the second video.
In this embodiment, blurring processing is performed on the first video frame and/or the second video frame, so that a fused video with richer blurred display effects can be obtained.
In another mode, adjusting the blurring degree of the second video frame includes: determining the blurring degree of the second video frame according to the difference between a first relative distance from the first screening target to the camera and a second relative distance from the background in the second video frame to the camera, and performing blurring processing on the second video frame based on the blurring degree.
For example, a video shot in one scene shows a garden, and a video shot in a second scene shows a dancing person; the user can select the video shot in the second scene as the main video and the video shot in the first scene as the background video. The video frames of the main video keep relatively high definition, brightness and color saturation, while the video frames of the background video can be subjected to blurring processing, and the blurring degree can be selected through an input of the user.
In practice, blurring mainly involves separating the subject from the background by means of laser ranging or binocular ranging, applying a blur algorithm to the background, and calculating in real time the distances from the shooting device to the subject and to the background in the two videos.
For example, for a video with a garden as the shooting background, the distance between the shooting device and the garden generally does not change much and is denoted as a; the person, however, generally moves, so the distance between the person and the shooting device has to be calculated in each video frame and is denoted as b(t). The distance between the background and the character is then b(t) - a, which varies over time, and blurring effects of different degrees are applied according to this distance.
For example, when 0 m < b(t) - a < 1 m, the degree of blurring is 0; when 1 m < b(t) - a < 2 m, the degree of blurring is 1; when 2 m < b(t) - a < 3 m, the degree of blurring is 2; and when 3 m < b(t) - a < 4 m, the degree of blurring is 3. The final effect is that, as the video is played, the blurring appears as a gradual change; for example, when the character runs from far away towards the shooting device, the garden becomes increasingly blurred as the character runs.
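A minimal sketch of this distance-to-blur mapping is given below, assuming b(t) and a are already measured per frame (in metres) and that the background blur is applied with a Gaussian kernel whose size grows with the blur level; the kernel sizes and the clamping of larger distances to level 3 are assumptions made only for illustration.

```python
import cv2

def blur_level(distance_diff):
    """Map the character-to-background distance b(t) - a (in metres) to a
    discrete blurring degree, following the thresholds in the example."""
    if distance_diff < 1:
        return 0
    if distance_diff < 2:
        return 1
    if distance_diff < 3:
        return 2
    return 3                              # 3 m and beyond: strongest level (assumed clamp)

def blur_background(frame, level):
    """Apply a progressively stronger Gaussian blur to a background frame."""
    if level == 0:
        return frame
    k = 8 * level + 1                     # odd kernel sizes 9, 17, 25 (illustrative values)
    return cv2.GaussianBlur(frame, (k, k), 0)
```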
The specific operation comprises the following steps:
Step S1, in response to a click operation of the user on the camera menu bar, dual-video overlap blurring is selected, and the system interface jumps to the photo album.
Step S2, in response to a click operation of the user, one video is selected as the main video, and the user then double-clicks the screen to indicate that selection of the main video is complete.
Step S3, the interface switches back to the photo album; in response to a click operation of the user, one video is selected as the background video, and the blurring degree of the background video is determined by dragging the blurring bar in the interface; the user then double-clicks the screen to indicate that selection of the background video is complete.
Step S4, the main video and the background video are fused by executing the method of this embodiment, and the generated fused video is automatically stored in the album.
Through steps S1 to S4, the effect of video blurring can be achieved, so that different requirements of users are met and the usage scenarios of video generation are expanded.
Step 104, under the condition that the video synthesis mode is an insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames, and combining the N first video frames and the M second video frames based on the arrangement sequence to generate a target video.
In this embodiment, the user may select out-of-order frame insertion or alternate frame insertion; the possible orders are not listed one by one here.
For example, if the arrangement order of the 5 first video frames E1 to E5 and the 5 second video frames F1 to F5 is determined to be alternate, the arrangement order is E1-F1-E2-F2-E3-F3-E4-F4-E5-F5, and the target video is generated by combining the 10 video frames based on this arrangement order.
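The alternate arrangement in this example can be sketched as follows; out-of-order insertion would simply use a different index sequence. The function name is illustrative.

```python
def interleave_frames(first_frames, second_frames):
    """Arrange the frames alternately, e.g. E1-F1-E2-F2-...; any leftover
    frames from the longer list are appended at the end in their original order."""
    ordered = []
    for e, f in zip(first_frames, second_frames):
        ordered.extend([e, f])
    shorter = min(len(first_frames), len(second_frames))
    longer = first_frames if len(first_frames) > len(second_frames) else second_frames
    ordered.extend(longer[shorter:])
    return ordered
```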
According to the method for generating the video, a user extracts a first video frame and a second video frame from a first video and a second video respectively, then the first video frame and the second video frame with time correspondence are fused under the condition that the video synthesis mode is a fusion mode, a target video frame is generated, and the target video frame and the first video frame and/or the second video frame without time correspondence are/is combined to generate the target video; and under the condition that the video synthesis mode is the insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames to be combined to obtain the target video, thereby realizing the technical scheme of generating the target video in the two video synthesis modes.
In order to understand the video generation method of this embodiment more intuitively, the method of the present application is described below through a specific example.
Referring to fig. 2, a method for video generation according to an embodiment of the present application includes:
step 201, responding to a click operation of a user on a first video in a playing state, determining a first target video frame corresponding to the click operation, and identifying a first screening target in the first target video frame.
As shown in fig. 3, through the clicking operation of the user, the corresponding first target video frame is determined, and then the first filtering target in the first target video frame is identified as a girl.
Step 202, extracting N first video frames containing the first screening target in the first video, and generating and displaying time bars corresponding to the N first video frames.
As shown in fig. 4, the thickened time bar in the progress bar of the first video is the time bar corresponding to the N first video frames. As can also be seen from fig. 4, there may be multiple time bars corresponding to the first video frames.
Step 203, responding to the click operation of the user on the second video in the playing state, determining a second target video frame corresponding to the click operation, and identifying a second screening target in the second target video frame.
As shown in fig. 5, through the clicking operation of the user, the corresponding second target video frame is determined, and then the second filtering target in the second target video frame is identified as the house.
Step 204, extracting M second video frames containing the second screening target in the second video, and generating and displaying time bars corresponding to the M second video frames.
As shown in fig. 6, the thickened time bar in the progress bar of the second video is the time bar corresponding to the M second video frames.
Step 205, determining the first video frame and the second video frame having a time correspondence relationship according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames.
If N is equal to M, as shown in fig. 7, corresponding the starting position of the time bar corresponding to the first video frame to the starting position of the time bar corresponding to the second video frame, and determining the first video frame and the second video frame having the time correspondence relationship according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is smaller than M, as shown in fig. 8, in response to a dragging operation performed by a user on time bars corresponding to the N first video frames or time bars corresponding to the M second video frames, determining starting corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames having a time correspondence relationship according to the time bars corresponding to the N corresponding first video frames and the time bars corresponding to the M second video frames;
if N is greater than M, as shown in fig. 9, in response to a dragging operation performed by a user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, determining initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, deleting a portion of the time bars corresponding to the N first video frames, which exceed the time bars corresponding to the M second video frames, and determining the first video frame and the second video frame having a time correspondence relationship according to the partial time bars corresponding to the corresponding partial first video frames and the time bars corresponding to the M second video frames.
And step 206, cutting the first video frame to obtain a first screening target, and fusing the first screening target into a second video frame corresponding to the first video frame to obtain a target video frame.
In this embodiment, the first filtering target is a character picture. In the process of video generation, the position of the first screening target fused into the second video frame is the same as the position of the first screening target in the first video frame.
And step 207, combining the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate the target video.
According to the method, a user extracts the first video frame and the second video frame from the first video and the second video respectively, then the first video frame and the second video frame with time correspondence are fused under the condition that the video synthesis mode is a fusion mode, a target video frame is generated, and the target video frame and the first video frame and/or the second video frame without time correspondence are/is combined, so that the target video with a character picture and a background picture played alternately is generated.
It should be noted that, in the video generation method provided in the embodiments of the present application, the execution subject may be a video generation apparatus, or a control module in the video generation apparatus for executing the video generation method. In the embodiments of the present application, a video generation apparatus executing the video generation method is taken as an example to describe the video generation apparatus provided by the embodiments of the present application.
The embodiment of the application discloses a video generation device, see fig. 10, including:
a first extraction module 1001, configured to determine a first filtering target in response to a first input to a first video by a user, and extract N first video frames including the first filtering target in the first video, where N is a positive integer;
a second extracting module 1002, configured to determine a second filtering target in response to a second input of a user to a second video, and extract M second video frames containing the second filtering target in the second video, where M is a positive integer;
a first processing module 1003, configured to determine, in a case that the video synthesis mode is a fusion mode, a first video frame and a second video frame having a time correspondence relationship in the N first video frames and the M second video frames, and fuse the first video frame and the second video frame having the time correspondence relationship to generate a target video frame; and combine the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
a second processing module 1004, configured to determine an arrangement order of the N first video frames and the M second video frames when the video synthesis mode is the insertion mode, and combine the N first video frames and the M second video frames based on the arrangement order to generate the target video.
Optionally, the first extraction module 1001 is specifically configured to: in response to a click operation of the user on the first video in a playing state, determine a first target video frame corresponding to the click operation, and determine a first screening target in the first target video frame;
the second extraction module 1002 is specifically configured to: in response to a click operation of the user on the second video in a playing state, determine a second target video frame corresponding to the click operation, and determine a second screening target in the second target video frame.
Optionally, the first extraction module 1001 is specifically configured to: in a case that the first target video frame includes a plurality of first targets, determine the first screening target in the first target video frame in response to a click operation of the user on a first target;
the second extraction module 1002 is specifically configured to: in a case that the second target video frame includes a plurality of second targets, determine the second screening target in the second target video frame in response to a click operation of the user on a second target.
Optionally, the apparatus further comprises: a time bar generating module, configured to generate and display time bars corresponding to the N first video frames and time bars corresponding to the M second video frames before determining the first video frame and the second video frame having a time correspondence relationship in the N first video frames and the M second video frames;
the first processing module 1003 is specifically configured to: and determining the first video frames and the second video frames with the time corresponding relation according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames.
Optionally, the first processing module 1003 is specifically configured to:
if N is equal to M, corresponding the initial position of the time bar corresponding to the first video frame to the initial position of the time bar corresponding to the second video frame, and determining the first video frame and the second video frame with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is smaller than M, responding to the dragging operation of a user on time bars corresponding to the N first video frames or time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is larger than M, in response to the dragging operation of a user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, deleting the parts of the time bars corresponding to the N first video frames, which exceed the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding parts of the time bars corresponding to the first video frames and the time bars corresponding to the M second video frames.
Optionally, the first processing module 1003 is specifically configured to:
cutting the first video frame to obtain a first screening target;
and fusing the first screening target into a second video frame corresponding to the first video frame to obtain the target video frame.
Optionally, the apparatus further comprises:
a first blurring degree determination module, configured to determine a blurring degree of the first video in response to a fourth input by a user;
and the first blurring module is used for blurring the first video frame based on the blurring degree of the first video after the first video frame is extracted.
Optionally, the apparatus further comprises:
a second blurring degree determining module, configured to determine a blurring degree of the second video in response to a fifth input from a user;
and the second blurring module is used for blurring the second video frame based on the blurring degree of the second video after the second video frame is extracted.
Optionally, the apparatus further comprises:
a third blurring module, configured to: after the first screening target is fused into the second video frame corresponding to the first video frame, determine the blurring degree of the second video frame according to the difference between a first relative distance from the first screening target to the camera and a second relative distance from the background in the second video frame to the camera, and perform blurring processing on the second video frame based on the blurring degree.
According to the video generation device, a user extracts a first video frame and a second video frame from a first video and a second video respectively, then the first video frame and the second video frame with time correspondence are fused under the condition that the video synthesis mode is a fusion mode, a target video frame is generated, and the target video frame and the first video frame and/or the second video frame without time correspondence are combined to generate a target video; and under the condition that the video synthesis mode is the insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames to be combined to obtain the target video, thereby realizing the technical scheme of generating the target video in the two video synthesis modes.
The video generation device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video generation apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 9, and is not described here again to avoid repetition.
Optionally, as shown in fig. 11, an electronic device 1100 is further provided in an embodiment of the present application, and includes a processor 1101, a memory 1102, and a program or an instruction stored in the memory 1102 and executable on the processor 1101, where the program or the instruction is executed by the processor 1101 to implement each process of the above-mentioned method for generating a video, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensors 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1210 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1210 is configured to determine a first filtering target in response to a first input of a user to a first video, and extract N first video frames including the first filtering target in the first video, where N is a positive integer;
responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer;
under the condition that the video synthesis mode is a fusion mode, determining the first video frame and the second video frame with time correspondence in the N first video frames and the M second video frames, and fusing the first video frame and the second video frame with time correspondence to generate a target video frame; combining the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
under the condition that the video synthesis mode is an insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames, and combining the N first video frames and the M second video frames based on the arrangement sequence to generate a target video.
According to the electronic equipment disclosed by the embodiment of the application, a user extracts a first video frame and a second video frame from a first video and a second video respectively, then the first video frame and the second video frame with time correspondence are fused under the condition that the video synthesis mode is a fusion mode, a target video frame is generated, and the target video frame and the first video frame and/or the second video frame without time correspondence are/is combined to generate a target video; and under the condition that the video synthesis mode is the insertion mode, determining the arrangement sequence of the N first video frames and the M second video frames to be combined to obtain the target video, thereby realizing the technical scheme of generating the target video in the two video synthesis modes.
Optionally, the processor 1210 is further configured to: responding to a clicking operation of a user on a first video in a playing state, determining a first target video frame corresponding to the clicking operation, and determining a first screening target in the first target video frame;
the processor 1210 is further configured to: and responding to the clicking operation of the user on the second video in the playing state, determining a second target video frame corresponding to the clicking operation, and determining a second screening target in the second target video frame.
Optionally, the processor 1210 is further configured to: in the case where the first target video frame includes a plurality of first targets, a first filtering target in the first target video frame is determined in response to a user's clicking operation on the first target, so that the first target can be selected more accurately.
The processor 1210 is further configured to: in the case where the second target video frame includes a plurality of second targets, in response to a user's clicking operation on the second targets, a second filtering target in the second target video frame is determined, so that the second target can be selected more accurately.
Optionally, the processor 1210 is further configured to: generating time bars corresponding to the N first video frames and time bars corresponding to the M second video frames; determining the first video frames and the second video frames with time correspondence according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames;
the display unit 1206 is further configured to: and displaying time bars corresponding to the N first video frames and time bars corresponding to the M second video frames so as to adjust the corresponding relation between the first video frames and the second video frames in the subsequent steps.
Optionally, the processor 1210 is further configured to:
if N is equal to M, corresponding the initial position of the time bar corresponding to the first video frame to the initial position of the time bar corresponding to the second video frame, and determining the first video frame and the second video frame with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is smaller than M, responding to the dragging operation of a user on time bars corresponding to the N first video frames or time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is larger than M, in response to the dragging operation of a user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, deleting the parts of the time bars corresponding to the N first video frames, which exceed the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding parts of the time bars corresponding to the first video frames and the time bars corresponding to the M second video frames.
Optionally, the processor 1210 is further configured to: cutting the first video frame to obtain a first screening target; and fusing the first screening target into a second video frame which has a corresponding relation with the first video frame to obtain the target video frame, so that the fusion effect of the first video frame and the second video frame can be realized.
Optionally, the processor 1210 is further configured to: before the first input of the user to the first video is received, determine a degree of blurring of the first video in response to a fourth input by the user;
after the first video frame is extracted, perform blurring processing on the first video frame based on the blurring degree of the first video, so that a blurred first video can be obtained, more usage scenarios of video generation can be supported, and the user experience is improved.
Optionally, the processor 1210 is further configured to: before the second input of the user to the second video is received, determine a degree of blurring of the second video in response to a fifth input by the user;
after the second video frame is extracted, perform blurring processing on the second video frame based on the blurring degree of the second video, so that a blurred second video can be obtained, more usage scenarios of video generation can be supported, and the user experience is improved.
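For either video, the blurring step could be as simple as the OpenCV-based sketch below, in which the user-selected degree (assumed here to lie in 0-100) is mapped to a Gaussian kernel size; the mapping itself is an assumption made only for illustration.

```python
import cv2

# Hypothetical blurring step: the degree chosen through the fourth/fifth input is
# mapped to an odd Gaussian kernel size; a larger degree gives a stronger blur.

def blur_frame(frame, degree):
    if degree <= 0:
        return frame
    kernel = 2 * int(degree // 10) + 1        # 1, 3, 5, ... up to 21 for degree 100
    return cv2.GaussianBlur(frame, (kernel, kernel), 0)
```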
Optionally, the processor 1210 is further configured to: after the first screening target is fused into the second video frame corresponding to the first video frame, determine the blurring degree of the second video frame according to the difference between a first relative distance between the first screening target and the camera and a second relative distance between the second video frame and the camera, and perform blurring processing on the second video frame based on the blurring degree.
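One way to read this mapping from depth separation to blurring strength is sketched below; the linear scale, the clamp at 100, and the assumption that both relative distances are available in metres are illustrative choices, not taken from the patent.

```python
# Hypothetical mapping: the farther the pasted target is from the background scene
# in depth, the stronger the background of the second video frame is blurred.

def blur_degree_from_distance(target_distance_m, scene_distance_m, scale=20.0):
    diff = abs(scene_distance_m - target_distance_m)
    return min(100.0, scale * diff)

# Example: a subject 1.5 m from the camera pasted over a scene about 6 m away
# yields degree 90.0, which could then be fed to blur_frame() above.
degree = blur_degree_from_distance(1.5, 6.0)
```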
It should be understood that, in the embodiment of the present application, the input unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042. The graphics processing unit 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. The touch panel 12071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1209 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1210 may integrate an application processor and a modem processor; the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It is to be appreciated that the modem processor may alternatively not be integrated into the processor 1210.
The embodiment of the present application further provides a readable storage medium. A program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, each process of the above-mentioned video generation method is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned video generation method and achieve the same technical effect; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of video generation, the method comprising:
responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video, wherein N is a positive integer;
responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer;
under the condition that the video synthesis mode is a fusion mode, determining the first video frame and the second video frame with time correspondence in the N first video frames and the M second video frames, and fusing the first video frame and the second video frame with time correspondence to generate a target video frame; combining the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
and under the condition that the video synthesis mode is an interpenetration mode, determining the arrangement sequence of the N first video frames and the M second video frames, and combining the N first video frames and the M second video frames based on the arrangement sequence to generate a target video.
2. The method of claim 1, wherein determining the first screening target in response to a first input to the first video by the user comprises:
responding to a clicking operation of a user on a first video in a playing state, determining a first target video frame corresponding to the clicking operation, and determining a first screening target in the first target video frame;
in response to a second input by the user to the second video, determining a second screening target, comprising:
and responding to the clicking operation of the user on the second video in the playing state, determining a second target video frame corresponding to the clicking operation, and determining a second screening target in the second target video frame.
3. The method of video generation according to claim 2, wherein determining the first screening target in the first target video frame comprises: in the case that the first target video frame comprises a plurality of first targets, determining a first screening target in the first target video frame in response to a clicking operation of a user on the first targets;
determining a second filtering target in the second target video frame, comprising: and in the case that the second target video frame comprises a plurality of second targets, responding to the clicking operation of the user on the second targets, and determining second screening targets in the second target video frame.
4. The method of video generation according to claim 1, wherein before determining the first video frame and the second video frame having a temporal correspondence among the N first video frames and the M second video frames, the method further comprises:
generating and displaying time bars corresponding to the N first video frames and time bars corresponding to the M second video frames;
determining the first video frame and the second video frame having a temporal correspondence among the N first video frames and the M second video frames, including:
and determining the first video frames and the second video frames with the time corresponding relation according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames.
5. The method of video generation according to claim 4, wherein determining the first video frame and the second video frame having time correspondence according to the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames comprises:
if N is equal to M, corresponding the initial position of the time bar corresponding to the first video frame to the initial position of the time bar corresponding to the second video frame, and determining the first video frame and the second video frame with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is smaller than M, responding to the dragging operation of a user on time bars corresponding to the N first video frames or time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding time bars corresponding to the N first video frames and the corresponding time bars corresponding to the M second video frames;
if N is larger than M, in response to the dragging operation of a user on the time bars corresponding to the N first video frames or the time bars corresponding to the M second video frames, determining the initial corresponding positions of the time bars corresponding to the N first video frames and the time bars corresponding to the M second video frames, deleting the parts of the time bars corresponding to the N first video frames, which exceed the time bars corresponding to the M second video frames, and determining the first video frames and the second video frames with the time corresponding relation according to the corresponding parts of the time bars corresponding to the first video frames and the time bars corresponding to the M second video frames.
6. The method of video generation according to claim 1, wherein fusing the first video frame and the second video frame having a corresponding relationship to generate a target video frame comprises:
cutting the first video frame to obtain a first screening target;
and fusing the first screening target into a second video frame corresponding to the first video frame to obtain the target video frame.
7. The method of video generation of claim 1, wherein prior to responding to a first input to the first video by a user, the method further comprises:
determining a degree of blurring of the first video in response to a fourth input by a user;
after extracting the first video frame, the method further comprises:
and performing blurring processing on the first video frame based on the blurring degree of the first video.
8. The method of video generation of claim 1, wherein prior to responding to a second input to the second video by the user, the method further comprises:
determining a degree of blurring of the second video in response to a fifth input by a user;
after extracting the second video frame, the method further comprises:
and performing blurring processing on the second video frame based on the blurring degree of the second video.
9. The method of video generation according to claim 6, wherein after fusing the first screening target into a second video frame having a corresponding relationship with the first video frame, the method further comprises:
and determining the blurring degree of the second video frame according to the difference between the first relative distance between the first screening target and the camera and the second relative distance between the second video frame and the camera, and blurring the second video frame based on the blurring degree.
10. An apparatus for video generation, the apparatus comprising:
the first extraction module is used for responding to a first input of a user to a first video, determining a first screening target, and extracting N first video frames containing the first screening target in the first video, wherein N is a positive integer;
the second extraction module is used for responding to a second input of a user to a second video, determining a second screening target, and extracting M second video frames containing the second screening target in the second video, wherein M is a positive integer;
a first processing module, configured to determine, in a case that a video synthesis manner is a fusion manner, a first video frame and a second video frame having a time correspondence relationship among the N first video frames and the M second video frames, and fuse the first video frame and the second video frame having the time correspondence relationship to generate a target video frame; combine the target video frame and the first video frame and/or the second video frame which do not have the time correspondence relationship in the N first video frames and the M second video frames to generate a target video; or,
and the second processing module is used for determining the arrangement sequence of the N first video frames and the M second video frames under the condition that the video synthesis mode is the interpenetration mode, and combining the N first video frames and the M second video frames based on the arrangement sequence to generate the target video.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the method of video generation according to any of claims 1-9.
CN202111101606.0A 2021-09-18 2021-09-18 Video generation method and device and electronic equipment Pending CN113810624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111101606.0A CN113810624A (en) 2021-09-18 2021-09-18 Video generation method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111101606.0A CN113810624A (en) 2021-09-18 2021-09-18 Video generation method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113810624A true CN113810624A (en) 2021-12-17

Family

ID=78895983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111101606.0A Pending CN113810624A (en) 2021-09-18 2021-09-18 Video generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113810624A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103442252A (en) * 2013-08-21 2013-12-11 宇龙计算机通信科技(深圳)有限公司 Method and device for processing video
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
US20160210998A1 (en) * 2015-01-21 2016-07-21 Google Inc. Techniques for creating a composite image
CN108769801A (en) * 2018-05-28 2018-11-06 广州虎牙信息科技有限公司 Synthetic method, device, equipment and the storage medium of short-sighted frequency
CN110290425A (en) * 2019-07-29 2019-09-27 腾讯科技(深圳)有限公司 A kind of method for processing video frequency, device and storage medium
CN112991245A (en) * 2021-02-03 2021-06-18 无锡闻泰信息技术有限公司 Double-shot blurring processing method and device, electronic equipment and readable storage medium
CN113207038A (en) * 2021-04-21 2021-08-03 维沃移动通信(杭州)有限公司 Video processing method, video processing device and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520875A (en) * 2022-01-28 2022-05-20 西安维沃软件技术有限公司 Video processing method and device and electronic equipment
CN114520875B (en) * 2022-01-28 2024-04-02 西安维沃软件技术有限公司 Video processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN111757175A (en) Video processing method and device
CN113093968A (en) Shooting interface display method and device, electronic equipment and medium
CN112954210A (en) Photographing method and device, electronic equipment and medium
CN112911147B (en) Display control method, display control device and electronic equipment
CN113794834B (en) Image processing method and device and electronic equipment
CN112532882B (en) Image display method and device
CN113794829A (en) Shooting method and device and electronic equipment
CN113596555B (en) Video playing method and device and electronic equipment
CN113207038B (en) Video processing method, video processing device and electronic equipment
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN113810624A (en) Video generation method and device and electronic equipment
CN114187392A (en) Virtual even image generation method and device and electronic equipment
CN112511743A (en) Video shooting method and device
CN113010738A (en) Video processing method and device, electronic equipment and readable storage medium
CN111641868A (en) Preview video generation method and device and electronic equipment
CN117152660A (en) Image display method and device
CN114466140B (en) Image shooting method and device
CN113271494B (en) Video frame processing method and device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112486650B (en) Operation path switching method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN112261483A (en) Video output method and device
CN113709565A (en) Method and device for recording facial expressions of watching videos

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination