CN113794831B - Video shooting method, device, electronic equipment and medium - Google Patents

Video shooting method, device, electronic equipment and medium

Info

Publication number
CN113794831B
Authority
CN
China
Prior art keywords
video
image
preview
preview window
target
Prior art date
Legal status
Active
Application number
CN202110932865.1A
Other languages
Chinese (zh)
Other versions
CN113794831A (en)
Inventor
李涛
Current Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110932865.1A
Publication of CN113794831A
Application granted
Publication of CN113794831B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 5/76: Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a video shooting method, a video shooting device, electronic equipment and a medium, and belongs to the technical field of shooting. The method comprises the following steps: displaying a first preview window and a second preview window in a shooting preview interface, wherein the first preview window is used for displaying a first video background image and the second preview window is used for displaying a video object image; receiving a first input of a user; and, in response to the first input, performing image synthesis on the images displayed in the first preview window and the second preview window to obtain a first target video image, and displaying the first target video image in a third preview window.

Description

Video shooting method, device, electronic equipment and medium
Technical Field
The application belongs to the technical field of image pickup, and particularly relates to a video shooting method, a video shooting device, electronic equipment and a medium.
Background
In general, when a user shoots a video, the user needs to trigger the electronic device to display a shooting preview interface and adjust the picture composition of the preview image in the shooting preview interface to obtain a background image that the user is satisfied with. The user then asks the person to be shot to move so that the image of the person is located at a suitable position in the background image, and finally triggers the electronic device to perform the shooting operation. However, multiple shots are often required to obtain a satisfactory video.
Disclosure of Invention
The embodiment of the application aims to provide a video shooting method, a video shooting device, electronic equipment and a video shooting medium, which can solve the problem that the user operation is complicated in the video shooting process.
In a first aspect, an embodiment of the present application provides a video shooting method, including: displaying a first preview window and a second preview window in a shooting preview interface, wherein the first preview window is used for displaying a first video background image and the second preview window is used for displaying a video object image; receiving a first input of a user; and, in response to the first input, performing image synthesis on the images displayed in the first preview window and the second preview window to obtain a first target video image, and displaying the first target video image in a third preview window.
In a second aspect, an embodiment of the present application provides a video shooting device, including a display module, a receiving module and a processing module. The display module is used for displaying a first preview window and a second preview window in a shooting preview interface, wherein the first preview window is used for displaying a first video background image and the second preview window is used for displaying a video object image. The receiving module is used for receiving a first input of a user. The processing module is used for, in response to the first input received by the receiving module, performing image synthesis on the images displayed in the first preview window and the second preview window to obtain a first target video image. The display module is further configured to display the first target video image synthesized by the processing module in a third preview window.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiment of the application, in the case where the shooting preview interface of the electronic device includes a first preview window for displaying a first video background image and a second preview window for displaying a video object image, the electronic device can synthesize the images displayed in the first preview window and the second preview window according to a first input of the user to obtain a first target video image, and display the first target video image in a third preview window in the shooting preview interface. In other words, the electronic device can display a video background image that the user is satisfied with (namely the first video background image) and a video object image in the shooting preview interface, directly synthesize the two according to a single input of the user, and display the synthesized first target video image in the third preview window, thereby obtaining a video that the user is satisfied with. Since the user does not need to adjust the picture composition of the preview image multiple times during shooting, the operation steps of the user and the time consumed can be reduced.
Drawings
Fig. 1 is a schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a video capturing apparatus according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 6 is a second schematic diagram of a video shooting method according to an embodiment of the present application;
FIG. 7 is a third schematic diagram of a video shooting method according to an embodiment of the present application;
FIG. 8 is a fourth schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 9 is a fifth schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 10 is a sixth schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 11 is a seventh schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 12 is a fourth schematic diagram of a video shooting method according to an embodiment of the present application;
FIG. 13 is an eighth schematic diagram of an interface of a mobile phone according to an embodiment of the present application;
FIG. 14 is a fifth schematic diagram of a video shooting method according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a video capturing apparatus according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 17 is a schematic hardware diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second" and the like in the description and the claims are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in an order other than the one illustrated or described herein. In addition, the objects distinguished by "first", "second", etc. are usually of one type, and the number of objects is not limited; for example, the first object may be one object or a plurality of objects. Furthermore, "and/or" in the description and the claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video shooting method provided by the embodiment of the application is described in detail through specific embodiments and application scenes thereof with reference to the accompanying drawings.
The video shooting method provided by the embodiment of the application can be applied to video shooting scenes.
Assume that a user wants to shoot a video of a person through an electronic device (or a video shooting device). The user needs to trigger the electronic device to display a shooting preview interface and adjust the picture composition of the preview image in the shooting preview interface to obtain a background image that the user is satisfied with; the user then asks the person to be shot to move so that the image of the person is located at a suitable position in the background image, and finally triggers the electronic device to perform the video shooting operation. However, multiple shots are often required to obtain a satisfactory video, so obtaining a video that the user is satisfied with involves complicated operations and takes a long time.
In the embodiment of the present application, by contrast, the electronic device can display a plurality of preview windows in the shooting preview interface, where preview window a is used to display video background image a and preview window b is used to display video object image a corresponding to the person to be shot. The electronic device can then, according to a single input of the user, perform image synthesis on video background image a and video object image a to obtain a video image and display the video image in preview window c. In this way, the user does not need to adjust the picture composition of the preview image multiple times during shooting, and multiple shooting operations are avoided, thereby reducing the operation steps of the user and the time consumed.
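Purely for illustration, the following Python sketch mocks up this multi-window flow with OpenCV: two capture sources stand in for the cameras, three HighGUI windows stand in for preview windows a, b and c, and a key press stands in for the user's single input. The window names, key bindings and helper function are assumptions for the sketch and are not part of the disclosure.

    import cv2

    def preview_and_compose(bg_source, obj_source, compose):
        # bg_source / obj_source: cv2.VideoCapture objects standing in for the two cameras.
        # compose: any function that synthesizes a background frame and an object frame.
        composing = False
        while True:
            ok_bg, bg_frame = bg_source.read()
            ok_obj, obj_frame = obj_source.read()
            if not (ok_bg and ok_obj):
                break
            cv2.imshow("preview_a", bg_frame)    # video background image a
            cv2.imshow("preview_b", obj_frame)   # video object image a
            if composing:
                cv2.imshow("preview_c", compose(bg_frame, obj_frame))  # synthesized video image
            key = cv2.waitKey(1) & 0xFF
            if key == ord('c'):      # the 'c' key stands in for the user's single input
                composing = True
            elif key == ord('q'):
                break
        cv2.destroyAllWindows()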
Fig. 1 shows a flowchart of a video shooting method according to an embodiment of the present application. As shown in fig. 1, the video capturing method provided by the embodiment of the present application may include the following steps 101 to 103.
Step 101, the video shooting device displays a first preview window and a second preview window in a shooting preview interface.
In the embodiment of the present application, the first preview window is used for displaying a first video background image, and the second preview window is used for displaying a video object image.
Optionally, in an embodiment of the present application, the first video background image may be: an image, or a video frame in a video; the video object image may be: an image, or a video frame in a video.
Optionally, in an embodiment of the present application, the first video background image is an image captured by a camera in real time during a video capturing process, or the first video background image is an image obtained from at least one preset background image.
Further optionally, in an embodiment of the present application, the at least one preset background image may specifically be: the image stored in advance in the video capturing apparatus or may be an image downloaded from the server by the video capturing apparatus.
Specifically, in the case where the first video background image is an image stored in advance in the video capturing device, the first video background image may be an image provided by the system of the video capturing device itself, or may be an image captured by the user through a camera of the video capturing device.
Therefore, the video shooting device can synthesize the image shot by the camera (or the image downloaded from the server) with the video object image, so that the diversity of the obtained video image can be increased, the user does not need to carry out picture composition on the preview image for many times in the shooting process, and the shooting operation for many times is avoided, so that the operation steps of the user can be reduced, and the time consumption is reduced.
Optionally, in an embodiment of the present application, the video object image is an image obtained according to the first object image. The first object image is an image shot by a camera in the video shooting process, or the first object image is an image obtained from at least one preset object image.
Optionally, in the embodiment of the present application, in the case where the first video background image and the first object image are both images captured by a camera, the first video background image and the first object image may be images acquired by the same camera or images acquired by different cameras.
Further alternatively, in the embodiment of the present application, in a case where the first video background image and the first object image are images acquired by different cameras, hardware parameters of the different cameras may be different. Wherein the hardware parameters may include at least one of: exposure, sensitivity, aperture, white balance, focal length, etc.
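As a purely illustrative sketch of such a per-camera parameter set (the field names and example values below are assumptions for illustration, not terms or numbers from the disclosure):

    from dataclasses import dataclass

    @dataclass
    class CameraParams:
        exposure_time_ms: float   # exposure
        iso: int                  # sensitivity
        aperture_f: float         # aperture (f-number)
        white_balance_k: int      # white balance (colour temperature, in Kelvin)
        focal_length_mm: float    # focal length

    # Two rear cameras configured differently so that their picture compositions differ.
    background_cam = CameraParams(8.0, 100, 1.8, 5500, 26.0)
    object_cam = CameraParams(4.0, 200, 2.4, 5000, 52.0)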
It should be noted that, under the condition that the hardware parameters of different cameras are different, the picture composition of the images acquired by different cameras is also different, so that the situation that the first video background image and the first object image are the same image can be avoided.
In particular, when the video shooting device has only one screen, the different cameras may be provided on the back of the video shooting device, where the line connecting the center of one of the different cameras and the centers of the other cameras (i.e., the cameras other than that one) is parallel to two non-adjacent edge lines of the video shooting device. When the video shooting device has two screens, the different cameras may be provided on the backs of the different screens.
For example, take the case where the video shooting device is a mobile phone. As shown in fig. 2, the mobile phone includes different cameras (e.g., a camera 10 and a camera 11), the camera 10 and the camera 11 are disposed on a back housing 12 of the mobile phone, and the line connecting the center of the camera 10 and the center of the camera 11 is parallel to two non-adjacent edge lines (e.g., an edge line 13 and an edge line 14) of the mobile phone.
Optionally, in the embodiment of the present application, when the video capturing device displays the interface of the "Settings" application, the video capturing device may enable the "video picture composition" function according to a click input of the user on the "video picture composition" option in that interface. The video capturing device may then display the shooting preview interface according to a click input of the user on an identifier (for example, an icon) of a target application on the desktop of the video capturing device, where the shooting preview interface displays the first preview window and the second preview window.
Further optionally, in an embodiment of the present application, the target application may specifically be any one of the following: a shooting class application, an image processing class application, a web page class application, a social class application, and the like.
In the following, three different examples will be given to illustrate how the video capturing device displays the first preview window and the second preview window.
In one example, the first video background image and the first object image are both images captured by cameras (e.g., different cameras) during video capturing. In this way, the video capturing apparatus may display the capturing preview interface according to a click input of the user on an identifier (for example, an icon) of a target application on the desktop of the video capturing apparatus, start the different cameras, and display, in the capturing preview interface, a first preview window for displaying the first video background image acquired by one of the different cameras and a second preview window for displaying the video object image, where the video object image is obtained according to the first object image acquired by another one of the different cameras.
For example, as shown in fig. 3, the mobile phone displays a shooting preview interface (for example, interface 15), where a first preview window (for example, window 16) and a second preview window (for example, window 17) are displayed in the interface 15, the window 16 is used for displaying a first video background image acquired by one camera of the mobile phone, and the window 17 is used for displaying a video object image, where the video object image is obtained according to the first object image acquired by the other camera of the mobile phone.
In another example, the first video background image and the first object image are both images captured by the same camera. In this way, the video shooting device can display a shooting preview interface according to click input of a user on an identification (such as an icon) of a target application in a desktop of the video shooting device, start a camera, sequentially collect a video object image and a first video background image, and display a first preview window and a second preview window in the shooting preview interface.
In yet another example, the first video background image is an image obtained from at least one preset background image, and the video object image is an image captured by a camera in real time during video capturing. In this way, the video capturing device may display the capturing preview interface according to a click input of the user on an identifier (for example, an icon) of a target application on the desktop of the video capturing device, start a camera, and display a first control and a second preview window in the capturing preview interface, where the second preview window is used to display the video object image, the video object image is obtained according to the first object image acquired by the camera, and the first control is used to select the first video background image. The video capturing device may then display a first interface according to an input of the user on the first control, where the first interface includes at least one image identifier and each image identifier indicates one preset background image; and, according to a click input of the user on a first image identifier among the at least one image identifier, display the capturing preview interface with a first preview window and the second preview window displayed therein, where the first preview window is used to display the first video background image indicated by the first image identifier.
For example, as shown in fig. 4 (A), the mobile phone displays a shooting preview interface (e.g., interface 18), in which a first control (e.g., control 19) and a second preview window (e.g., window 20) are displayed; the control 19 is used to select a first video background image, and the window 20 is used to display a video object image, where the video object image is obtained according to a first object image acquired by a camera of the mobile phone, so that the user can make a click input on the control 19. As shown in fig. 4 (B), after the user performs the click input, the mobile phone may display a first interface (e.g., interface 21) including at least one image identifier (e.g., image identifier 22, image identifier 23, image identifier 24, and image identifier 25), so that the user may make a click input on a first image identifier (e.g., image identifier 22). As shown in fig. 4 (C), after the user has made the click input, the mobile phone may display the interface 18, in which a first preview window (e.g., window 26) is displayed, the window 26 being used to display the first video background image indicated by the image identifier 22.
Step 102, the video capturing device receives a first input from a user.
In the embodiment of the present application, the first input is used to trigger the video capturing device to synthesize a first target video image.
Optionally, in an embodiment of the present application, the first input may specifically be: a click input of a control in the shooting preview interface by a user, a press input of a display screen of the video shooting device by a user (such as a double click input), a press input of a physical key of the video shooting device by a user (such as a click input), or the like, or a gesture input of a display screen of the video shooting device by a user (such as dragging a video object image onto a first video background image, or dragging a video object image and a first video background image to a certain area of the display screen by two fingers at the same time).
Step 103, the video shooting device, in response to the first input, performs image synthesis on the images displayed in the first preview window and the second preview window to obtain a first target video image, and displays the first target video image in a third preview window.
It can be appreciated that the video capturing device may perform image synthesis on the first video background image and the video object image to obtain a first target video image, and display the first target video image in the third preview window.
Optionally, in the embodiment of the present application, the video capturing device may obtain the video object image according to the pre-stored object feature information (for example, the target object feature information in the embodiment described below) and the first object image, and then perform image synthesis on the first video background image and the video object image to obtain the first target video image.
Further optionally, in the embodiment of the present application, the video capturing device may determine, according to the object feature information, a first area where the video object is located from the first object image, and perform transparency processing on the other area to obtain the video object image, where the other area is the area of the first object image other than the first area.
Further optionally, in an embodiment of the present application, the video capturing device may superimpose the video object image on the first video background image to obtain the first target video image.
It will be appreciated that the video capture device may "cut out" the appearance of the video object from the first object image based on the object feature information, and superimpose the appearance of the video object on the first video background image.
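As a minimal sketch of the transparency processing described above, assuming the first area is already available as a binary mask derived from the object feature information (the function name is hypothetical):

    import numpy as np

    def make_object_image(first_object_bgr, first_area_mask):
        # Keep the first area (where the video object is) opaque and make the other
        # area fully transparent, yielding a BGRA video object image.
        h, w = first_object_bgr.shape[:2]
        object_bgra = np.zeros((h, w, 4), dtype=np.uint8)
        object_bgra[:, :, :3] = first_object_bgr
        object_bgra[:, :, 3] = np.where(first_area_mask > 0, 255, 0).astype(np.uint8)
        return object_bgra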
For example, as shown in fig. 3 and fig. 5, after the user performs the first input on the mobile phone, the mobile phone may perform image synthesis on the images displayed in the windows 16 and 17 to obtain a first target video image, and display the first target video image in a third preview window (e.g., window 26).
In the following, an example will be given in which the first video background image and the video object image are both video, and how the video capturing apparatus performs image composition will be described.
Optionally, in the embodiment of the present application, the first preview window includes a first video preview interface (an interface in a video recording state) of the first camera, and the first video background image is at least one frame of video preview image displayed in the first video preview interface; the second preview window includes a second video preview interface (an interface in a video recording state) of the second camera, and the video object image is at least one frame of video preview image displayed in the second video preview interface; the first target video image includes at least one frame of target sub-video image. Specifically, as shown in fig. 1 and 6, the above step 103 may be specifically implemented by the following step 103 a.
Step 103a, the video shooting device responds to the first input, superimposes the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface, obtains the ith frame of target sub-video image, and displays the ith frame of target sub-video image in the third preview window.
In the embodiment of the application, i is a positive integer.
Further optionally, in an embodiment of the present application, the first video preview interface may specifically be: acquiring an interface of a first video background image through a first camera; the second video preview interface may specifically be: and acquiring an interface of the video object image through the second camera.
Specifically, the first video preview interface and the second video preview interface may be: the same interface or a different interface.
In the embodiment of the present application, the i-th frame video preview image displayed on the second video preview interface may specifically be obtained according to the object feature information and the i-th frame object image.
Further optionally, in the embodiment of the present application, the video capturing device may superimpose the ith frame of video preview image displayed on the second video preview interface on the target preset area on the ith frame of video preview image displayed on the first video preview interface, so as to obtain the ith frame of target sub-video image.
Specifically, the target preset area may be: the video capturing device determines an area according to the image content of the first video background image. The target preset area may specifically be: the area where the video object can move.
It should be noted that, the "movable area" can be understood as: and the video object image is a movable area in the ith frame of video preview image displayed on the first video preview interface.
For example, assuming that the i-th frame video preview image displayed on the first video preview interface shows the sea, the target preset area may be the sea surface. The target preset area does not include the mountains at the edge of the sea or other objects on the sea surface (e.g., ships); that is, the video object is not allowed to be placed on the mountains at the edge of the sea or on other objects on the sea surface.
Therefore, the video shooting device can respectively acquire the first video background image and the video object image through different cameras, and respectively superimpose each frame of video preview image displayed on the second video preview interface on each frame of video preview image displayed on the first video preview interface to obtain each frame of target sub-video image of the first target video image, so that the user operation steps can be reduced, and the video satisfactory to the user can be quickly obtained.
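A minimal sketch of the per-frame superimposition in step 103a above, assuming the object preview frame has already been given a transparent background (BGRA) and fits inside the background frame at the chosen position; the function name and rectangle convention are assumptions:

    import numpy as np

    def overlay_frame(background_bgr, object_bgra, top_left):
        # Superimpose the i-th object preview frame onto the i-th background preview
        # frame at top_left (a point inside the target preset area), producing the
        # i-th target sub-video image.
        x, y = top_left
        h, w = object_bgra.shape[:2]
        frame = background_bgr.copy()
        roi = frame[y:y + h, x:x + w].astype(np.float32)
        alpha = object_bgra[:, :, 3:4].astype(np.float32) / 255.0
        blended = alpha * object_bgra[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
        frame[y:y + h, x:x + w] = blended.astype(np.uint8)
        return frame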
Optionally, the video shooting method provided by the embodiment of the present application further includes the following step 201.
Step 201, the video shooting device obtains a first video according to the first target video image displayed in the third preview window.
It should be noted that the embodiment of the present application does not limit the execution order of step 201 and of "displaying the first target video image in the third preview window" in step 103.
In one possible implementation, the video capturing apparatus may perform step 201 first, and then perform "display the first target video image in the third preview window" in step 103.
In another possible implementation, the video capturing apparatus may perform "display the first target video image in the third preview window" in step 103 before performing step 201.
In yet another possible implementation, the video capturing apparatus may perform step 201 and "displaying the first target video image in the third preview window" in step 103 at the same time.
Further optionally, in the embodiment of the present application, the video capturing device may synthesize multiple frames of the first target video image and package them into the first video.
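A minimal sketch of packaging the target frames into a video file, here using OpenCV's VideoWriter; the codec, file name and frame rate are assumptions, not values from the disclosure:

    import cv2

    def package_video(target_frames, out_path="first_video.mp4", fps=30.0):
        # Write every frame of the first target video image into one video file.
        h, w = target_frames[0].shape[:2]
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        for frame in target_frames:
            writer.write(frame)
        writer.release()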
Therefore, the video shooting device can synthesize the images displayed by the first preview window and the second preview window to obtain the video image (namely the first target video image) satisfactory to the user, so that the video (namely the first video) satisfactory to the user is obtained, the user does not need to adjust the picture composition of the preview image for multiple times, and then the operation is carried out for multiple times, so that the operation steps of the user can be reduced, and the time consumption is reduced.
In the embodiment of the application, a user can trigger the video shooting device to display a first preview window and a second preview window in a shooting preview interface, wherein the first preview window is used for displaying a video background image (namely a first video background image) satisfactory to the user, the second preview window is used for displaying a video object image, and then the video shooting device is input once, so that the video shooting device can perform image synthesis on the video background image satisfactory to the user and the video object image to obtain a video image satisfactory to the user (namely a first target video image), and the user can view the video image in a third preview window.
In the video shooting method provided by the embodiment of the application, under the condition that the shooting preview interface of the video shooting device comprises the first preview window for displaying the first video background image and the second preview window for displaying the video object image, the video shooting device can synthesize the images displayed by the first preview window and the second preview window according to the first input of the user so as to obtain the first target video image, and the first target video image is displayed in the third preview window in the shooting preview interface. The video shooting device can display a video background image (namely a first video background image) and a video object image which are satisfied by a user in a shooting preview interface, directly synthesizes the video background image and the video object image which are satisfied by the user according to one-time input of the user, and displays the synthesized first target video image in a third preview window so as to obtain the video which is satisfied by the user, and in the shooting process, the user does not need to adjust the picture composition of the preview image for many times, so that the operation steps of the user can be reduced, and the time consumption is reduced.
Of course, after the first target video image is displayed in the third preview window of the video capturing apparatus, there may be a case where the user is not satisfied with the position of the video object image on the first video background image, and thus, the user may adjust the position of the video object image.
Optionally, in the embodiment of the present application, after the step 103, the video capturing method provided in the embodiment of the present application may further include the following steps 301 and 302.
Step 301, the video capturing apparatus receives a fifth input from a user.
In the embodiment of the present application, the fifth input is used to adjust the position of the video object image on the first video background image.
Further optionally, in the embodiment of the present application, the fifth input may specifically be a drag input on the video object image within the target preset area.
It should be noted that, the above "within the target preset area" may be understood as: the initial input position and the end input position are both within the target preset area.
In step 302, the video capturing apparatus responds to the fifth input, and adjusts the display position of the video object image on the first video background image to the target display position according to the input parameter of the fifth input.
Further optionally, in the embodiment of the present application, the input parameter may specifically be the end input position.
Specifically, after displaying the ith frame of target sub-video image in the third preview window of the video capturing device, the user may input the ith frame of target sub-video image in the third preview window, so that the video capturing device may adjust, according to the input parameter of the input, the overlapping position of the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface.
Therefore, the user can perform the fifth input on the third preview window, so that the video shooting device can directly adjust the display position of the video object image on the first video background image to the target display position, and the user does not need to trigger the video shooting device to perform shooting operation again, thereby reducing operation steps of the user and reducing time consumption.
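A minimal sketch of the position adjustment in step 302, modelling the target preset area as a rectangle and clamping the drag's end position so that the whole video object image stays inside it (the rectangle convention is an assumption):

    def adjust_display_position(end_pos, object_size, preset_area):
        # end_pos: end input position of the fifth input (drag), as (x, y).
        # object_size: (width, height) of the video object image.
        # preset_area: (x, y, width, height) of the target preset area.
        ex, ey = end_pos
        ow, oh = object_size
        ax, ay, aw, ah = preset_area
        x = min(max(ex, ax), ax + aw - ow)   # clamp horizontally
        y = min(max(ey, ay), ay + ah - oh)   # clamp vertically
        return (x, y)                        # target display position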
Optionally, in the embodiment of the present application, the video capturing device may further set different background music for the first target video image according to different regions where the target display positions are located. Specifically, the target preset area includes at least one sub-area, and each sub-area in the at least one sub-area may be respectively and correspondingly provided with an audio; the target display position is: a location in a target sub-area in the at least one sub-area. Specifically, the above step 302 may be specifically implemented by the following step 302 a.
In step 302a, the video shooting device obtains a fifth target video image according to the first target video image and the target audio corresponding to the target sub-region.
Further optionally, in an embodiment of the present application, the at least one sub-area may be: sub-areas preset by users. The at least one audio may be: audio preset by the user.
It will be appreciated that the user may have set an audio for each sub-area separately in advance.
Further optionally, in the embodiment of the present application, the video capturing device may determine the target audio from the at least one audio according to the target sub-region, and then obtain the fifth target video image according to the first target video image and the target audio.
In the embodiment of the present application, the playing time stamp of the first frame of video preview image of the fifth target video image is matched with the playing time stamp of the first frame of audio of the target audio.
It should be noted that "matching" here can be understood as meaning that the two playing time stamps are identical, or that the difference between them is less than or equal to a preset threshold.
Further optionally, in the embodiment of the present application, the video capturing device may synthesize the fifth target video image according to each frame of the video preview image of the fifth target video image and each frame of the audio of the target audio.
In the embodiment of the application, since the user may need to set background music for the video, each sub-area of the target preset area corresponds to one audio, so that the video shooting device can set background music (i.e., the target audio) for the first target video image according to the target sub-area where the target display position is located.
Therefore, as each sub-area of the target preset area corresponds to one audio, the video shooting device can obtain a fifth target video image, namely a video image provided with background music, according to the target audio corresponding to the target sub-area where the target display position is located and the first target video image, without the need of multiple operations of a user, so that the time consumption for obtaining the video satisfactory to the user can be reduced.
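A minimal sketch of selecting the target audio from the sub-area containing the target display position and of the time-stamp "matching" described above; the data layout and the threshold value are assumptions for illustration:

    def select_target_audio(target_pos, subarea_audio):
        # subarea_audio: list of ((x, y, w, h), audio_path) pairs set in advance,
        # one audio per sub-area of the target preset area.
        px, py = target_pos
        for (x, y, w, h), audio_path in subarea_audio:
            if x <= px < x + w and y <= py < y + h:
                return audio_path            # target audio of the target sub-area
        return None

    def timestamps_match(video_start_ts, audio_start_ts, threshold_s=0.04):
        # "Matched": identical, or differing by at most a preset threshold (in seconds).
        return abs(video_start_ts - audio_start_ts) <= threshold_s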
Optionally, in the embodiment of the present application, as shown in fig. 1 and fig. 7, before the step 101, the video shooting method provided in the embodiment of the present application may further include the following step 401 and step 402.
Step 401, a video shooting device acquires a target object image.
Further optionally, in the embodiment of the present application, when the video capturing device has enabled the "video picture composition" function and displays the shooting preview interface, the video capturing device may enter the "video capturing" mode according to a click input of the user on the "video" mode control in the shooting preview interface, so that the user may provide an input to the video capturing device (for example, a click input on the "shoot" control) to cause the video capturing device to acquire the target object image.
Step 402, the video shooting device obtains target object characteristic information according to the target object image.
In the embodiment of the present application, the target object feature information indicates a video object in a video object image.
Further optionally, in an embodiment of the present application, the target object feature information may include at least one of the following: object region characteristic information, object contour information, object color information, and the like.
For example, in the case where the video object is a person, the object region characteristic information may be face characteristic information.
For example, as shown in fig. 8 (a), the mobile phone turns on the "video picture composition" function, and displays a shooting preview interface (e.g., interface 27), where the interface 27 includes a "shooting" control (e.g., control 28), so that the user can make a click input to the control 28; as shown in fig. 8 (B), after the user performs the click input, the mobile phone may acquire a target object image, and determine target object feature information, that is, object feature information of the object 29, that is, the video object, according to the target object image.
Therefore, the video shooting device can acquire the target object image and determine the target object characteristic information according to the target object image, namely the video shooting device can determine the characteristic information of the video object required by the user as the target object characteristic information, so that the video shooting device can accurately determine the video object image required by the user, and the accuracy of the video shooting device for obtaining the video object image required by the user can be improved.
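As one possible illustration of deriving face-region feature information from the target object image (the disclosure does not prescribe a particular detector; OpenCV's bundled Haar cascade is used here purely as an example):

    import cv2

    def extract_face_feature_regions(target_object_bgr):
        # Return face bounding boxes as a simple form of object region feature information.
        cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(cascade_path)
        gray = cv2.cvtColor(target_object_bgr, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return faces  # array of (x, y, w, h) rectangles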
In the following, it will be exemplified how the video photographing device determines the target preset area.
Optionally, in the embodiment of the present application, before the step 101, the video capturing method provided in the embodiment of the present application may further include the following steps 501 to 503.
Step 501, a video shooting device collects a target background image.
It should be noted that, with respect to the execution sequence of step 501 and step 401, embodiments of the present application are not limited herein. In one possible implementation, the video capturing apparatus may perform step 401 first, and then perform step 501; in another possible implementation, the video capturing apparatus may perform step 501 first and then perform step 401; in yet another possible implementation, the video capturing apparatus may perform step 401 while performing step 501.
Further optionally, in the embodiment of the present application, when the video capturing device has enabled the "video picture composition" function and displays the shooting preview interface, the video capturing device may enter the "video capturing" mode according to a click input of the user on the "video" mode control in the shooting preview interface, so that the user may provide an input to the video capturing device (for example, a click input on the "shoot" control) to cause the video capturing device to collect the target background image.
Step 502, the video shooting device determines a target preset area according to the target background image.
Further optionally, in an embodiment of the present application, the video capturing device may input the target background image into the target neural network to obtain an output of the target neural network, so as to determine the target preset area.
Further alternatively, in an embodiment of the present application, the target neural network may specifically be: the video shooting device trains the obtained neural network in advance.
It should be noted that, for the description of the training method of the target neural network, reference may be made to the specific description in the related art, which is not repeated in the embodiments of the present application.
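The target neural network itself is not specified by the disclosure; the sketch below therefore only illustrates the interface, assuming some pretrained model (passed in as a callable) that maps the target background image to a binary mask of placeable regions, which is then reduced to a rectangular target preset area:

    import numpy as np

    def determine_target_preset_area(segment_placeable, target_background_bgr):
        # segment_placeable: hypothetical callable wrapping the target neural network;
        # it returns an HxW mask whose nonzero pixels mark where a video object may move.
        mask = segment_placeable(target_background_bgr)
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            return None
        x, y = int(xs.min()), int(ys.min())
        return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)  # (x, y, w, h)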
Further optionally, in the embodiment of the present application, after the video capturing device determines the target preset area, the video capturing device may display the target preset area in the capturing preview interface, so that the user may input (e.g. drag and input) the target preset area to adjust the area size of the target preset area.
For example, as shown in fig. 8 (a) and fig. 9, after the user clicks the control 28, the mobile phone may collect a target background image and display a target preset area (e.g., area 30, shown by a dashed box) in the interface 27, so that the user may input (e.g., drag input) to the area 30 to adjust the area size of the area 30.
Therefore, the video shooting device can collect the target background image and determine the target preset area according to the target background image, so that the video shooting device can superimpose the ith frame of video preview image displayed on the second video preview interface on the target preset area on the ith frame of video preview image displayed on the first video preview interface to obtain a video frame satisfactory to a user, and therefore the operation steps of the user can be reduced, and the time consumption is reduced.
Further optionally, in an embodiment of the present application, after the video capturing device determines the target preset area, the video capturing device may display at least one partition control on the target preset area, where each partition control is used to determine a sub-area, so that a user may input (e.g. drag input) the at least one partition control, so that the video capturing device may determine, according to the input of the user, an area size of at least one sub-area of the target preset area.
For example, as shown in fig. 9 and 10, after the cell phone displays the region 30, the cell phone may display at least one split control (e.g., control 31 and control 32) on the region 30, such that a user may input (e.g., drag input) the control 31 and control 32, such that the cell phone may determine a region size of at least one sub-region (e.g., sub-region 33, sub-region 34, and sub-region 35) of the region 30 according to the user's input.
Further optionally, in the embodiment of the present application, after the video capturing device determines the area size of at least one sub-area of the target preset area, the video capturing device may display a second interface according to an input (for example, a click input) for a certain sub-area of the preset area, where the second interface includes at least one audio identifier, so that the video capturing device may determine, according to an input (for example, a click input) of a certain audio identifier of the at least one audio identifier by a user, a certain audio indicated by the certain audio identifier as an audio corresponding to the certain sub-area.
For example, as shown in fig. 10 and (a) in fig. 11, after the mobile phone determines the region sizes of the sub-region 33, the sub-region 34, and the sub-region 35, the user may input (e.g., click input) to a certain sub-region (e.g., sub-region 35); as shown in fig. 11 (B), after the user makes a click input to the sub-region 35, the mobile phone may display a second interface (e.g., interface 36) including at least one audio identifier (e.g., audio identifier 37, audio identifier 38, and audio identifier 39), so that the user may make a click input on a certain audio identifier (e.g., audio identifier 39) among the audio identifier 37, the audio identifier 38, and the audio identifier 39, and the mobile phone may then determine the audio indicated by the audio identifier 39 (e.g., music 3) as the audio corresponding to the sub-region 35.
It should be noted that, in the embodiment of the present application, the identifier is a text, a symbol, a pattern, an image, etc. used for indicating information, and a control or other containers may be used as a carrier for displaying information, including but not limited to a text identifier, a symbol identifier, and an image identifier.
Of course, the video capture device may also adjust the video background of the first target video image.
In one example, a user may trigger a video capture device to adjust the video background of a first target video image:
optionally, in an embodiment of the present application, the shooting preview interface further includes a fourth preview window, where the fourth preview window is used to display a second video background image. Specifically, as shown in fig. 1 and 12, after the step 103, the video capturing method provided by the embodiment of the present application may further include the following steps 601 to 603.
Step 601, the video capturing apparatus receives a second input from the user to the fourth preview window.
Further optionally, in an embodiment of the present application, the second video background image may be: an image, or a video frame in a video.
Further optionally, in an embodiment of the present application, the second video background image is an image captured by a camera in a video capturing process, or the second video background image is an image obtained from at least one preset background image.
Further optionally, in an embodiment of the present application, the second input may specifically be: the user drags the video object to the drag input of the fourth preview window or the click input of the fourth preview window by the user.
In step 602, the video capturing device, in response to the second input, performs image synthesis on the images displayed in the fourth preview window and the second preview window to obtain a second target video image, and displays the second target video image in the third preview window.
Further alternatively, in the embodiment of the present application, the video capturing device may replace the first target video image with the second target video image according to the input type of the second input, so as to display the second target video image in the third preview window, or display the first target video image and the second target video image in different areas in the third preview window respectively.
For example, in the case where the second input is a drag input by which the user drags the video object to the fourth preview window, the video capturing apparatus may replace the first target video image with the second target video image; in the case where the second input is a click input of the user to the fourth preview window, the video capturing device may display the first target video image and the second target video image in different areas in the third preview window, respectively.
For example, when the video photographing apparatus is a mobile terminal such as a mobile phone, as shown in (A) of fig. 13, the mobile phone displays a shooting preview interface (e.g., interface 40), in which a first preview window (e.g., window 41), a second preview window (e.g., window 42), a third preview window (e.g., window 43), and a fourth preview window (e.g., window 44) are displayed, so that the user can make a second input, for example, a drag input in which the user drags the video object to the window 44; as shown in (B) of fig. 13, after the user performs the drag input, the mobile phone may replace the first target video image displayed in the window 43 with the second target video image, so as to display the second target video image in the window 43.
Step 603, the video shooting device obtains a second video according to the second target video image displayed in the third preview window.
It should be noted that, for the description of the video capturing device obtaining the second video according to the second target video image, reference may be made to the specific description of the video capturing device obtaining the first video according to the first target video image, which is not repeated herein.
Therefore, the video shooting device can obtain the second target video image with different video background images according to the second input of the user, and obtain the second video with different video background images according to the second target video image, so that the operation steps of the user can be reduced, and the time consumption is reduced.
In another example, the video capture device may automatically adjust the video background of the first target video image:
optionally, in an embodiment of the present application, the shooting preview interface further includes a seventh preview window, where the seventh preview window is used to display a fourth video background image. Specifically, as shown in fig. 1 and 14, after the step 103, the video capturing method provided by the embodiment of the present application may further include the following steps 701 and 702.
Step 701, after a preset duration, the video shooting device performs image synthesis on the images displayed in the seventh preview window and the second preview window to obtain a fourth target video image, and displays the fourth target video image in the third preview window.
In the embodiment of the application, after a preset duration has elapsed since the first target video image was obtained, the video shooting device can perform image synthesis on the images displayed in the seventh preview window and the second preview window to obtain the fourth target video image.
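A minimal sketch of switching the background source after the preset duration (step 701), assuming the frame sequences are available as lists and any compositing function of a background frame and an object frame; the frame rate and duration values are assumptions:

    def composite_with_timed_switch(first_bg_frames, fourth_bg_frames, object_frames,
                                    overlay, preset_duration_s=5.0, fps=30.0):
        # Yield target frames; after preset_duration_s, the background source switches
        # from the first video background image to the fourth video background image.
        switch_index = int(preset_duration_s * fps)
        for i, obj in enumerate(object_frames):
            bg = first_bg_frames[i] if i < switch_index else fourth_bg_frames[i]
            yield overlay(bg, obj)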
Further optionally, in an embodiment of the present application, the fourth video background image may be: an image, or a video frame in a video.
Further optionally, in an embodiment of the present application, the fourth video background image is an image captured by a camera in a video capturing process, or the fourth video background image is an image obtained from at least one preset background image.
Further alternatively, in the embodiment of the present application, the video capturing device may replace the first target video image with the fourth target video image, so as to display the fourth target video image in the third preview window, or display the first target video image and the fourth target video image in different areas in the third preview window respectively.
Step 702, the video capturing apparatus obtains a fourth video according to the fourth target video image displayed in the third preview window.
Therefore, after the preset duration, the video shooting device can obtain a fourth target video image with a different video background image, and obtain a fourth video with the different video background image according to the fourth target video image, so that the video shooting device can obtain a final video according to the first video and the fourth video, that is, a final video formed by splicing two video segments with different backgrounds, thereby reducing the operation steps of the user and the time consumed.
Of course, the video capturing apparatus may also obtain multiple target video images simultaneously according to one input (i.e., the first input) of the user.
Optionally, in an embodiment of the present application, the shooting preview interface further includes a fifth preview window, where the fifth preview window is used to display a third video background image. Specifically, after the step 102, the video capturing method provided in the embodiment of the present application may further include the following steps 801 and 802.
Step 801, in response to the first input, the video capturing device performs image synthesis on the images displayed in the fifth preview window and the second preview window to obtain a third target video image, and displays the third target video image in the sixth preview window.
Further optionally, in an embodiment of the present application, the third video background image may be: an image, or a video frame in a video.
It will be appreciated that the third video background image (or video object image) may be a static image or may be a dynamic video.
Further optionally, in an embodiment of the present application, the third video background image is an image captured by a camera in a video capturing process, or the third video background image is an image obtained from at least one preset background image.
Step 802, the video shooting device obtains a third video according to the third target video image displayed in the sixth preview window.
It should be noted that, for the description of the video capturing device obtaining the third video according to the third target video image, reference may be made to the specific description of the video capturing device obtaining the first video according to the first target video image, which is not repeated herein.
Therefore, the video shooting device can obtain, according to the first input of the user, the first target video image and the third target video image with different video backgrounds, so that the video shooting device can obtain videos with different video backgrounds, namely the first video and the third video, according to the first target video image and the third target video image, and the first video and the third video can be stored respectively; the operation steps of the user can thus be reduced and time consumption is reduced.
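As an illustration only of producing several target video images from a single input, the same object frame can be composited against each displayed background in one pass; composeFrames is the same assumed (background, object) compositor used in the sketches above.

```kotlin
import android.graphics.Bitmap

// Hypothetical sketch: one user input yields one target video image per background
// (e.g., the first and the third video background images), all sharing the same
// video object frame.
fun composeAgainstAllBackgrounds(
    objectFrame: Bitmap,
    backgrounds: List<Bitmap>,
    composeFrames: (Bitmap, Bitmap) -> Bitmap
): List<Bitmap> =
    backgrounds.map { background -> composeFrames(background, objectFrame) }
```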
Fig. 15 shows a schematic diagram of a possible configuration of a video capturing apparatus according to an embodiment of the present application. As shown in fig. 15, the video photographing device 60 may include: a display module 61, a receiving module 62 and a processing module 63.
The display module 61 is configured to display a first preview window and a second preview window in the shooting preview interface, where the first preview window is used to display a first video background image, and the second preview window is used to display a video object image. The receiving module 62 is configured to receive a first input from a user. The processing module 63 is configured to perform image synthesis on the images displayed in the first preview window and the second preview window in response to the first input received by the receiving module 62, so as to obtain a first target video image. The display module 61 is further configured to display the first target video image synthesized by the processing module 63 in a third preview window.
According to the video shooting device provided by the embodiment of the application, a video background image satisfactory to the user (namely the first video background image) and a video object image satisfactory to the user can be displayed in the shooting preview interface, image synthesis is performed on them directly according to a single input of the user, and the synthesized first target video image is displayed in the third preview window, so that a video satisfactory to the user is obtained. The user does not need to adjust the picture composition of the preview image repeatedly during shooting and then operate repeatedly, so the operation steps of the user can be reduced and time consumption is reduced.
In a possible implementation manner, the processing module 63 is further configured to obtain the first video according to the first target video image displayed in the third preview window.
Therefore, the video shooting device can synthesize the images displayed by the first preview window and the second preview window to obtain a video image satisfactory to the user (namely the first target video image), and thereby a video satisfactory to the user (namely the first video). The user does not need to adjust the picture composition of the preview image repeatedly and then operate repeatedly, so the operation steps of the user can be reduced and time consumption is reduced.
In one possible implementation manner, the first video background image is an image captured by a camera during a video capturing process, or the first video background image is an image obtained from at least one preset background image.
Therefore, the video shooting device can synthesize the image shot by the camera (or the image downloaded from the server) with the video object image, which increases the diversity of the obtained video images. The user does not need to recompose the preview image repeatedly during shooting or repeat the shooting operation, so the operation steps of the user can be reduced and time consumption is reduced.
In a possible implementation manner, the shooting preview interface further includes a fourth preview window, where the fourth preview window is used to display the second video background image. The receiving module 62 is further configured to receive a second input from the user for the fourth preview window. The processing module 63 is further configured to, in response to the second input received by the receiving module 62, perform image synthesis on the images displayed in the fourth preview window and the second preview window, so as to obtain a second target video image. The display module 61 is further configured to display the second target video image obtained by the processing module 63 in the third preview window. The processing module 63 is further configured to obtain a second video according to the second target video image displayed by the display module 61 in the third preview window.
Therefore, the video shooting device can obtain the second target video image with a different video background image according to the second input of the user, and obtain the second video with a different video background image according to the second target video image, so that the operation steps of the user can be reduced and time consumption is reduced.
In one possible implementation manner, the first preview window includes a first video preview interface of the first camera, and the first video background image is at least one frame of video preview image displayed in the first video preview interface; the second preview window comprises a second video preview interface of a second camera, and the video object image is at least one frame of video preview image displayed in the second video preview interface; the first target video image includes at least one frame of target sub-video image. The processing module 63 is specifically configured to superimpose the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface to obtain an ith frame of target sub-video image; wherein i is a positive integer.
Therefore, the video shooting device can respectively acquire the first video background image and the video object image through different cameras, and respectively superimpose each frame of video preview image displayed on the second video preview interface on each frame of video preview image displayed on the first video preview interface to obtain each frame of target sub-video image of the first target video image, so that the user operation steps can be reduced, and the video satisfactory to the user can be quickly obtained.
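A minimal sketch of this per-frame superposition, assuming Android Bitmap/Canvas drawing and a hypothetical segmentObject matting helper (the embodiment does not specify how the object pixels are isolated):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Hypothetical sketch: draw the i-th object frame from the second video preview
// interface over the i-th background frame from the first video preview interface
// to obtain the i-th target sub-video image of the first target video image.
fun composeTargetSubFrame(
    backgroundFrame: Bitmap,           // i-th frame of the first video preview interface
    objectFrame: Bitmap,               // i-th frame of the second video preview interface
    segmentObject: (Bitmap) -> Bitmap  // assumed matting step: object kept, elsewhere transparent
): Bitmap {
    // Work on a mutable copy so the original background frame is left untouched.
    val target = backgroundFrame.copy(Bitmap.Config.ARGB_8888, true)
    Canvas(target).drawBitmap(segmentObject(objectFrame), 0f, 0f, null)
    return target
}
```

Repeating this for every i yields the sequence of target sub-video images that makes up the first target video image.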
In a possible implementation manner, the shooting preview interface further includes a fifth preview window, where the fifth preview window is used to display a third video background image. The processing module 63 is further configured to perform image synthesis on the images displayed in the fifth preview window and the second preview window in response to the first input, so as to obtain a third target video image. The display module 61 is further configured to display the third target video image obtained by the processing module 63 in a sixth preview window. The processing module 63 is further configured to obtain a third video according to the third target video image displayed by the display module 61 in the sixth preview window.
Therefore, the video shooting device can obtain, according to the first input of the user, the first target video image and the third target video image with different video backgrounds, so that the video shooting device can obtain videos with different video backgrounds, namely the first video and the third video, according to the first target video image and the third target video image, and the first video and the third video can be stored respectively; the operation steps of the user can thus be reduced and time consumption is reduced.
In one possible implementation manner, the shooting preview interface further includes a seventh preview window, where the seventh preview window is used to display a fourth video background image. The processing module 63 is further configured to perform image synthesis on the images displayed in the seventh preview window and the second preview window after a preset time interval, so as to obtain a fourth target video image. The display module 61 is further configured to display the fourth target video image obtained by the processing module 63 in the third preview window. The processing module 63 is further configured to obtain a fourth video according to the fourth target video image displayed by the display module 61 in the third preview window.
Therefore, after the preset time interval, the video shooting device can obtain the fourth target video image with a different video background image, and obtain the fourth video with a different video background image according to the fourth target video image, so that the video shooting device can obtain the final video according to the first video and the fourth video, that is, the final video is formed by splicing two sections of video with different backgrounds; the operation steps of the user can thus be reduced and time consumption is reduced.
In one possible implementation manner, the video capturing apparatus 60 provided in the embodiment of the present application may further include: and an acquisition module. The acquisition module is used for acquiring the target object image. The processing module 63 is further configured to obtain target object feature information according to the target object image acquired by the acquisition module, where the target object feature information indicates a video object in the video object image.
Therefore, the video shooting device can acquire the target object image and determine the target object characteristic information according to the target object image, namely the video shooting device can determine the characteristic information of the video object required by the user as the target object characteristic information, so that the video shooting device can accurately determine the video object image required by the user, and the accuracy of the video shooting device for obtaining the video object image required by the user can be improved.
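The embodiment does not state how the target object feature information is computed, so the following is only a toy stand-in (a mean-colour feature) meant to illustrate how such information could indicate which candidate region in a preview frame is the wanted video object; ObjectFeature, extractFeature and pickVideoObject are hypothetical names, and a real system would use far richer features.

```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import kotlin.math.abs

// Toy stand-in for "target object feature information": the mean RGB of the
// collected target object image.
data class ObjectFeature(val r: Double, val g: Double, val b: Double)

fun extractFeature(targetObjectImage: Bitmap): ObjectFeature {
    var r = 0.0; var g = 0.0; var b = 0.0
    for (x in 0 until targetObjectImage.width) {
        for (y in 0 until targetObjectImage.height) {
            val p = targetObjectImage.getPixel(x, y)
            r += Color.red(p); g += Color.green(p); b += Color.blue(p)
        }
    }
    val n = (targetObjectImage.width * targetObjectImage.height).toDouble()
    return ObjectFeature(r / n, g / n, b / n)
}

// Among candidate object crops found in a preview frame, pick the one whose
// feature is closest to the target object feature information.
fun pickVideoObject(candidates: List<Bitmap>, target: ObjectFeature): Bitmap? =
    candidates.minByOrNull { crop ->
        val f = extractFeature(crop)
        abs(f.r - target.r) + abs(f.g - target.g) + abs(f.b - target.b)
    }
```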
The video shooting device in the embodiment of the application can be a device, and can also be a component, an integrated circuit or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (network attached storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and embodiments of the present application are not limited in particular.
The video shooting device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The video shooting device provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 14, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 16, the embodiment of the present application further provides an electronic device 70, which includes a processor 71, a memory 72, and a program or an instruction stored in the memory 72 and capable of running on the processor 71, where the program or the instruction implements each process of the embodiment of the video shooting method when executed by the processor 71, and can achieve the same technical effect, which is not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components, and that the power source may be logically coupled to the processor 110 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The electronic device structure shown in fig. 17 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown in the drawings, may combine some components, or may arrange the components differently, which will not be described in detail herein.
The display unit 106 is configured to display a first preview window and a second preview window in the shooting preview interface, where the first preview window is used to display a first video background image, and the second preview window is used to display a video object image.
A user input unit 107 for receiving a first input of a user.
And the processor 110 is configured to perform image synthesis on the images displayed in the first preview window and the second preview window in response to the first input, so as to obtain a first target video image.
The display unit 106 is further configured to display the first target video image in the third preview window.
According to the electronic device provided by the embodiment of the application, a video background image satisfactory to the user (namely the first video background image) and a video object image satisfactory to the user can be displayed in the shooting preview interface, image synthesis is performed on them directly according to a single input of the user, and the synthesized first target video image is displayed in the third preview window, so that a video satisfactory to the user is obtained. The user does not need to adjust the picture composition of the preview image repeatedly during shooting and then operate repeatedly, so the operation steps of the user can be reduced and time consumption is reduced.
Optionally, in an embodiment of the present application, the processor 110 is further configured to obtain the first video according to the first target video image displayed in the third preview window.
Therefore, the electronic device can synthesize the images displayed by the first preview window and the second preview window to obtain a video image satisfactory to the user (namely the first target video image), and thereby a video satisfactory to the user (namely the first video). The user does not need to adjust the picture composition of the preview image repeatedly and then operate repeatedly, so the operation steps of the user can be reduced and time consumption is reduced.
Optionally, in an embodiment of the present application, the shooting preview interface further includes a fourth preview window, where the fourth preview window is used to display a second video background image.
The user input unit 107 is further configured to receive a second input from the user to the fourth preview window.
The processor 110 is further configured to, in response to the second input, perform image synthesis on the images displayed in the fourth preview window and the second preview window, to obtain a second target video image.
The display unit 106 is further configured to display the second target video image in the third preview window.
The processor 110 is further configured to obtain a second video according to the second target video image displayed in the third preview window.
Therefore, the electronic device can obtain the second target video image with a different video background image according to the second input of the user, and obtain the second video with a different video background image according to the second target video image, so that the operation steps of the user can be reduced and time consumption is reduced.
Optionally, in an embodiment of the present application, the first preview window includes a first video preview interface of the first camera, and the first video background image is at least one frame of video preview image displayed in the first video preview interface; the second preview window comprises a second video preview interface of a second camera, and the video object image is at least one frame of video preview image displayed in the second video preview interface; the first target video image includes at least one frame of target sub-video image.
The processor 110 is specifically configured to superimpose the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface, so as to obtain an ith frame of target sub-video image.
Wherein i is a positive integer.
Therefore, the electronic device can acquire the first video background image and the video object image through different cameras respectively, and superimpose each frame of video preview image displayed on the second video preview interface on each frame of video preview image displayed on the first video preview interface respectively to obtain each frame of target sub-video image of the first target video image, so that the user operation steps can be reduced, and the video satisfactory to the user can be obtained quickly.
Optionally, in an embodiment of the present application, the shooting preview interface further includes a fifth preview window, where the fifth preview window is used to display a third video background image.
The processor 110 is further configured to, in response to the first input, perform image synthesis on the images displayed in the fifth preview window and the second preview window, to obtain a third target video image.
The display unit 106 is further configured to display a third target video image in a sixth preview window.
The processor 110 is further configured to obtain a third video according to the third target video image displayed in the sixth preview window.
Therefore, the electronic device can obtain, according to the first input of the user, the first target video image and the third target video image with different video backgrounds, so that the electronic device can obtain videos with different video backgrounds, namely the first video and the third video, according to the first target video image and the third target video image, and the first video and the third video can be stored respectively; the operation steps of the user can thus be reduced and time consumption is reduced.
Optionally, in an embodiment of the present application, the shooting preview interface further includes a seventh preview window, where the seventh preview window is used to display a fourth video background image.
The processor 110 is further configured to perform image synthesis on the images displayed in the seventh preview window and the second preview window after a preset time interval, so as to obtain a fourth target video image.
The display unit 106 is further configured to display a fourth target video image in the third preview window.
The processor 110 is further configured to obtain a fourth video according to the fourth target video image displayed in the third preview window.
Therefore, after the preset time interval, the electronic device can obtain the fourth target video image with a different video background image, and obtain the fourth video with a different video background image according to the fourth target video image, so that the electronic device can obtain the final video according to the first video and the fourth video, that is, the final video is formed by splicing two sections of video with different backgrounds; the operation steps of the user can thus be reduced and time consumption is reduced.
Optionally, in an embodiment of the present application, the input unit 104 is configured to acquire the target object image.
The processor 110 is further configured to obtain target object feature information from the target object image, where the target object feature information indicates a video object in the video object image.
Therefore, the electronic equipment can acquire the target object image and determine the target object characteristic information according to the target object image, namely, the electronic equipment can determine the characteristic information of the video object required by the user as the target object characteristic information, so that the electronic equipment can accurately determine the video object image required by the user, and the accuracy of the electronic equipment for obtaining the video object image required by the user can be improved.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (graphics processing unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the embodiment of the video shooting method, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the video shooting method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the prior art, in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (14)

1. A video capturing method, the method comprising:
displaying a first preview window and a second preview window in a shooting preview interface, wherein the first preview window is used for displaying a first video background image, and the second preview window is used for displaying a video object image;
receiving a first input of a user;
responding to the first input, performing image synthesis on the images displayed by the first preview window and the second preview window to obtain a first target video image, and displaying the first target video image in a third preview window;
the shooting preview interface further comprises a fourth preview window, wherein the fourth preview window is used for displaying a second video background image;
the responding to the first input, performing image synthesis on the images displayed by the first preview window and the second preview window to obtain a first target video image, and after displaying the first target video image in a third preview window, further comprising:
receiving a second input of a user to the fourth preview window;
responding to the second input, performing image synthesis on the images displayed by the fourth preview window and the second preview window to obtain a second target video image, and respectively displaying the first target video image and the second target video image in different areas in the third preview window;
and obtaining a second video according to the second target video image displayed in the third preview window.
2. The video photographing method of claim 1, further comprising:
and obtaining a first video according to the first target video image displayed in the third preview window.
3. The video capturing method according to claim 1, wherein the first video background image is an image captured by a camera during video capturing, or the first video background image is an image obtained from at least one preset background image.
4. The video shooting method of claim 1, wherein the first preview window comprises a first video preview interface of a first camera, and the first video background image is at least one frame of video preview image displayed in the first video preview interface; the second preview window comprises a second video preview interface of a second camera, and the video object image is at least one frame of video preview image displayed in the second video preview interface; the first target video image comprises at least one frame of target sub-video image;
the image synthesis of the images displayed by the first preview window and the second preview window to obtain a first target video image includes:
superposing the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface to obtain an ith frame of target sub-video image;
wherein i is a positive integer.
5. The video photographing method of claim 1, wherein the photographing preview interface further comprises a fifth preview window for displaying a third video background image;
after receiving the first input of the user, the method further comprises:
responding to the first input, performing image synthesis on the images displayed by the fifth preview window and the second preview window to obtain a third target video image, and displaying the third target video image in a sixth preview window;
and obtaining a third video according to the third target video image displayed in the sixth preview window.
6. The video photographing method of claim 1, wherein before displaying the first preview window and the second preview window in the photographing preview interface, the method further comprises:
collecting a target object image;
and obtaining target object characteristic information according to the target object image, wherein the target object characteristic information indicates a video object in the video object image.
7. A video capturing apparatus, the video capturing apparatus comprising: the device comprises a display module, a receiving module and a processing module;
the display module is used for displaying a first preview window and a second preview window in the shooting preview interface, wherein the first preview window is used for displaying a first video background image, and the second preview window is used for displaying a video object image;
the receiving module is used for receiving a first input of a user;
the processing module is used for responding to the first input received by the receiving module, and performing image synthesis on the images displayed by the first preview window and the second preview window to obtain a first target video image;
the display module is further configured to display the first target video image synthesized by the processing module in a third preview window;
the display module is further configured to display a fourth preview window in the shooting preview interface, where the fourth preview window is used to display a second video background image;
the receiving module is further configured to receive a second input from a user to the fourth preview window;
the processing module is further configured to perform image synthesis on the images displayed by the fourth preview window and the second preview window in response to the second input received by the receiving module, so as to obtain a second target video image;
the display module is further configured to display the first target video image and the second target video image in different areas in the third preview window respectively;
and the processing module is further used for obtaining a second video according to the second target video image displayed in the third preview window.
8. The video capture device of claim 7, wherein the processing module is further configured to obtain a first video from the first target video image displayed in the third preview window.
9. The video capturing apparatus according to claim 7, wherein the first video background image is an image captured by a camera during video capturing, or the first video background image is an image obtained from at least one preset background image.
10. The video capture device of claim 7, wherein the first preview window comprises a first video preview interface of a first camera, the first video background image being at least one frame of video preview image displayed in the first video preview interface; the second preview window comprises a second video preview interface of a second camera, and the video object image is at least one frame of video preview image displayed in the second video preview interface; the first target video image comprises at least one frame of target sub-video image;
the processing module is specifically configured to superimpose the ith frame of video preview image displayed on the second video preview interface on the ith frame of video preview image displayed on the first video preview interface, so as to obtain an ith frame of target sub-video image;
wherein i is a positive integer.
11. The video capture device of claim 7, wherein the capture preview interface further comprises a fifth preview window for displaying a third video background image;
the processing module is further configured to perform image synthesis on the images displayed by the fifth preview window and the second preview window in response to the first input, so as to obtain a third target video image;
the display module is further configured to display the third target video image obtained by the processing module in a sixth preview window;
and the processing module is further configured to obtain a third video according to the third target video image displayed by the display module in the sixth preview window.
12. The video capture device of claim 7, wherein the video capture device further comprises: an acquisition module;
the acquisition module is used for acquiring the target object image;
The processing module is further configured to obtain target object feature information according to the target object image acquired by the acquisition module, where the target object feature information indicates a video object in the video object image.
13. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the video capture method of any one of claims 1 to 6.
14. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the video capturing method according to any one of claims 1 to 6.
CN202110932865.1A 2021-08-13 2021-08-13 Video shooting method, device, electronic equipment and medium Active CN113794831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932865.1A CN113794831B (en) 2021-08-13 2021-08-13 Video shooting method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110932865.1A CN113794831B (en) 2021-08-13 2021-08-13 Video shooting method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN113794831A CN113794831A (en) 2021-12-14
CN113794831B true CN113794831B (en) 2023-08-25

Family

ID=79181844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932865.1A Active CN113794831B (en) 2021-08-13 2021-08-13 Video shooting method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN113794831B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023155143A1 (en) * 2022-02-18 2023-08-24 北京卓越乐享网络科技有限公司 Video production method and apparatus, electronic device, storage medium, and program product

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
CN107767430A (en) * 2017-09-21 2018-03-06 努比亚技术有限公司 One kind shooting processing method, terminal and computer-readable recording medium
CN109525884A (en) * 2018-11-08 2019-03-26 北京微播视界科技有限公司 Video paster adding method, device, equipment and storage medium based on split screen
CN111010506A (en) * 2019-11-15 2020-04-14 华为技术有限公司 Shooting method and electronic equipment
CN111050070A (en) * 2019-12-19 2020-04-21 维沃移动通信有限公司 Video shooting method and device, electronic equipment and medium
CN111756995A (en) * 2020-06-17 2020-10-09 维沃移动通信有限公司 Image processing method and device
CN112511741A (en) * 2020-11-25 2021-03-16 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium
CN112702517A (en) * 2020-12-24 2021-04-23 维沃移动通信(杭州)有限公司 Display control method and device and electronic equipment
CN112714255A (en) * 2020-12-30 2021-04-27 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN112752116A (en) * 2020-12-30 2021-05-04 广州繁星互娱信息科技有限公司 Display method, device, terminal and storage medium of live video picture
CN112954196A (en) * 2021-01-27 2021-06-11 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN112954210A (en) * 2021-02-08 2021-06-11 维沃移动通信(杭州)有限公司 Photographing method and device, electronic equipment and medium
CN112995500A (en) * 2020-12-30 2021-06-18 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium

Also Published As

Publication number Publication date
CN113794831A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN111654635A (en) Shooting parameter adjusting method and device and electronic equipment
CN113766129B (en) Video recording method, video recording device, electronic equipment and medium
CN112738402B (en) Shooting method, shooting device, electronic equipment and medium
CN112637500B (en) Image processing method and device
CN112911147B (en) Display control method, display control device and electronic equipment
CN111722775A (en) Image processing method, device, equipment and readable storage medium
CN112532881A (en) Image processing method and device and electronic equipment
CN112702531B (en) Shooting method and device and electronic equipment
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112181252B (en) Screen capturing method and device and electronic equipment
CN114025092A (en) Shooting control display method and device, electronic equipment and medium
CN112734661A (en) Image processing method and device
US20230274388A1 (en) Photographing Method, and Electronic Device and Non-Transitory Readable Storage Medium
CN114466140B (en) Image shooting method and device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114245017A (en) Shooting method and device and electronic equipment
CN113873168A (en) Shooting method, shooting device, electronic equipment and medium
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN112702524A (en) Image generation method and device and electronic equipment
CN112887623A (en) Image generation method and device and electronic equipment
CN112261483A (en) Video output method and device
CN113489901B (en) Shooting method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant