CN114125297B - Video shooting method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114125297B
CN114125297B (Application CN202111421290.3A)
Authority
CN
China
Prior art keywords
image
target
frame
video
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111421290.3A
Other languages
Chinese (zh)
Other versions
CN114125297A (en)
Inventor
冀晓风 (Ji Xiaofeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111421290.3A priority Critical patent/CN114125297B/en
Publication of CN114125297A publication Critical patent/CN114125297A/en
Priority to PCT/CN2022/133206 priority patent/WO2023093669A1/en
Application granted granted Critical
Publication of CN114125297B publication Critical patent/CN114125297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a video shooting method, a video shooting device, electronic equipment and a storage medium. The method comprises the following steps: under the condition that at least one frame of first image is displayed on a shooting preview interface, controlling a camera to acquire at least one frame of second image; a target video is generated based on the at least one frame of the first image and the at least one frame of the second image.

Description

Video shooting method, device, electronic equipment and storage medium
Technical Field
The application belongs to the field of imaging technology, and in particular relates to a video shooting method, a video shooting device, electronic equipment and a storage medium.
Background
Currently, with the popularity of electronic devices, more and more users use electronic devices to capture video.
After shooting a video, the user can process it with a post-processing tool to obtain a special-effect video. However, processing video with a post-processing tool in this way is very cumbersome and time-consuming.
Disclosure of Invention
The embodiments of the present application aim to provide a video shooting method, a video shooting device, electronic equipment and a storage medium, which can solve the problem of cumbersome video shooting operations in the related art.
In a first aspect, an embodiment of the present application provides a video capturing method, including:
under the condition that at least one frame of first image is displayed on a shooting preview interface, controlling a camera to acquire at least one frame of second image;
a target video is generated based on the at least one first image and the at least one second image.
In a second aspect, an embodiment of the present application provides a video capturing apparatus, including:
the control module is used for controlling the camera to acquire at least one frame of second image under the condition that the shooting preview interface displays at least one frame of first image;
and the generating module is used for generating a target video based on the at least one frame of first image and the at least one frame of second image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, in the case where at least one frame of first image is displayed on the shooting preview interface, the camera is controlled to acquire at least one frame of second image, and a target video is generated based on the at least one frame of first image and the at least one frame of second image. In this process, the user does not need to use a post-processing tool to process the images acquired by the camera, which reduces the complexity of producing a special-effect video and makes such production more convenient.
Drawings
Fig. 1 is a flowchart of a video shooting method provided in an embodiment of the present application;
fig. 2 is the first application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 3 is the second application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 4 is the third application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 5 is the fourth application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 6 is the fifth application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 7 is the sixth application scenario diagram of the video shooting method provided in an embodiment of the present application;
fig. 8 is a block diagram of a video capturing apparatus according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device provided in an embodiment of the present application;
fig. 10 is a hardware configuration diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish between similar objects, and not necessarily to describe a particular order or sequence. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first" and "second" are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video shooting method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a video shooting method according to an embodiment of the present application. The video shooting method provided by the embodiment of the application comprises the following steps:
s101, under the condition that at least one frame of first image is displayed on a shooting preview interface, controlling a camera to acquire at least one frame of second image.
In this step, the at least one frame of first image may be obtained by reading a local video or by downloading from the Internet, and is displayed on the shooting preview interface; that is, the first image is a preset or pre-recorded image. It should be appreciated that, in the case where the shooting preview interface displays at least one frame of first image, the camera is controlled to acquire at least one frame of second image; that is, the second image is an image captured by the camera in real time.
In this embodiment, when the first image has only one frame, a still image is displayed; when the first image includes at least two frames, a moving image or a video is displayed.
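The single-frame versus multi-frame behaviour described above can be sketched as follows. This is an illustrative NumPy sketch, not part of the claimed embodiments; `preview_mode` is a hypothetical helper name.

```python
import numpy as np

def preview_mode(first_images):
    """Decide how the preview interface renders the preset first images.

    `first_images` is a list of H x W x 3 uint8 frames. A single frame is
    shown as a still image; two or more frames play as a moving image.
    (Hypothetical helper illustrating the rule stated in the text.)
    """
    if len(first_images) == 1:
        return "still"
    return "motion"

frame = np.zeros((4, 4, 3), dtype=np.uint8)
assert preview_mode([frame]) == "still"
assert preview_mode([frame, frame]) == "motion"
```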
In the implementation scenario shown in fig. 2, when the user opens the camera, the user may click the "more" control 201 and then select the "person replace" mode; after this mode is entered, at least one frame of first image is displayed on the shooting preview interface.
S102, generating a target video based on the at least one frame of first image and the at least one frame of second image.
In this step, after the at least one first image and the at least one second image are obtained, corresponding processing is performed on the first image and the second image, so as to generate a target video, where the target video is a special effect video. In detail, please refer to the following embodiments for a technical solution of generating a target video.
In the embodiments of the present application, in the case where at least one frame of first image is displayed on the shooting preview interface, the camera is controlled to acquire at least one frame of second image, and a target video is generated based on the at least one frame of first image and the at least one frame of second image. In this process, the user does not need to use a post-processing tool to process the images acquired by the camera, which reduces the complexity of producing a special-effect video and makes such production more convenient.
Optionally, the shooting preview interface includes a first display area and a second display area, where the first display area is used to display the at least one frame of first image, and the second display area is used to display the at least one frame of second image acquired by the camera.
In this embodiment, the shooting preview interface includes a first display area and a second display area, where the first display area displays at least one frame of a first image and the second display area displays at least one frame of a second image.
As shown in fig. 3, in an implementation scenario, the first display area 301 is located at the lower left corner of the shooting preview interface, the second display area 302 is located at the middle position of the shooting preview interface, and the area of the first display area 301 is smaller than the area of the second display area 302.
In this embodiment, the first image is a preset image; by displaying the first image in the first display area of the shooting preview interface, the user is guided to record with the camera according to the image content of the first image, so as to obtain the second image.
Referring to fig. 4, in an implementation scenario of this embodiment, a first image is displayed on the shooting preview interface 401, together with the text prompt "Please watch the following person's actions and imitate them". The user may click the "continue" control 402 to enter the shooting page and reach the implementation scenario shown in fig. 3.
Optionally, the generating the target video based on the at least one frame of the first image and the at least one frame of the second image includes:
performing image segmentation processing on the at least one frame of first image to obtain at least one frame of target background image, and performing image segmentation processing on the at least one frame of second image to obtain at least one frame of target foreground image;
And carrying out image fusion processing on the at least one frame of target background image and the at least one frame of target foreground image to obtain the target video.
In this embodiment, image segmentation processing may be performed on the first image to obtain the target background image, and on the second image to obtain the target foreground image. The image segmentation processing may use an image edge detection algorithm, the maximum inter-class variance algorithm (Otsu's method), or another algorithm; the target foreground image contains the shooting subject, and the target background image contains the background other than the shooting subject.
And after at least one frame of target background image and at least one frame of target foreground image are obtained, carrying out image fusion processing on the target background image and the target foreground image to obtain a target video. The image corresponding to each video frame of the target video comprises a background and a shooting subject.
In an alternative implementation manner, under the condition that the number of the first images is one frame, image segmentation processing is performed on one frame of the first images to obtain one frame of target background images, and image fusion processing is performed on one frame of target background images and multiple frames of target foreground images to obtain a target video.
In another alternative implementation manner, under the condition that the number of the first images is multiple frames, image segmentation processing is performed on the multiple frames of first images to obtain multiple frames of target background images, and image fusion processing is performed on the multiple frames of target background images and the multiple frames of target foreground images to obtain a target video. In this embodiment, the first image and the second image are respectively subjected to image segmentation to obtain the target background image and the target foreground image, and further, the target background image and the target foreground image are subjected to image fusion to obtain the target video with the special effect.
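The segmentation and fusion steps described above can be sketched in NumPy; this is only an illustrative sketch of one possible implementation, not the claimed method. The maximum inter-class variance threshold is implemented directly, and `otsu_threshold` and `fuse` are hypothetical helper names.

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum inter-class variance (Otsu) threshold on a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    cum = np.cumsum(hist)                        # pixel count at or below each level
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass at or below each level
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def fuse(background, foreground, mask):
    """Composite: foreground pixels where mask is True, background elsewhere."""
    return np.where(mask[..., None], foreground, background)

# Synthetic example: a bright subject on a dark captured (second) image.
captured = np.full((8, 8, 3), 30, dtype=np.uint8)
captured[2:6, 2:6] = 220                          # the "shooting subject"
gray = captured[..., 0]
t = otsu_threshold(gray)
subject_mask = gray >= t                          # target foreground mask
preset_bg = np.full((8, 8, 3), 90, dtype=np.uint8)  # target background image
video_frame = fuse(preset_bg, captured, subject_mask)
```

Repeating this per frame, against one shared background frame or a sequence of background frames, yields the two cases described above.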
Optionally, before the image fusion processing is performed on the at least one frame of target background image and the at least one frame of target foreground image, the method further includes:
displaying a first target background image and a first target foreground image;
receiving a first input of a user to a first target image;
a first editing process is performed on the first target image in response to the first input.
In this embodiment, after the target background image and the target foreground image are obtained, a first target background image and a first target foreground image are displayed, where the first target background image is a frame of image in the target background image, and the first target foreground image is a frame of image in the target foreground image.
In a case where a first input of a user to a first target image is received, a first editing process is performed on the first target image. Wherein the first target image comprises at least one of: a first target background image and a first target foreground image. The first editing process includes, but is not limited to, adding text information to the first target image, adding a filter effect to the first target image, adding audio information to the first target image, changing the size of the first target image, and adjusting the display position of the first target image.
The first input may be a click input of the first target image by the user, or a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to the actual use requirement, and the embodiment is not specifically limited herein.
An alternative embodiment is that the first input is an input for a first target background image, in which case a first editing process is performed on the first target background image in response to the first input.
Another alternative embodiment is that the first input is an input for a first target foreground image, in which case a first editing process is performed on the first target foreground image in response to the first input.
Another alternative embodiment is that the first input is an input for a first target background image and a first target foreground image, in which case a first editing process is performed on the first target background image and the first target foreground image in response to the first input.
For ease of understanding, referring to fig. 5, in the implementation scenario shown in fig. 5, a first target background image and a first target foreground image are displayed on the shooting preview interface 501, and a user may adjust a size of a shooting subject in the images, so as to implement editing processing of the first target image.
For example, when the user considers that the position of the first target foreground image in the first target background image is inappropriate, a sliding operation may be performed on the first target foreground image, thereby adjusting the display position of the first target foreground image. For example, in fig. 5, if the first target foreground image and the first target background image have a partial region overlapping, a fused image with a reasonable composition can be obtained by adjusting the display position of the first target foreground image or the first target background image.
In this embodiment, by performing the first editing process on the first target image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
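A minimal sketch of the position and size edits named above, assuming the images are NumPy arrays; `paste_foreground` and `scale_nearest` are hypothetical helpers, not the actual device implementation.

```python
import numpy as np

def paste_foreground(background, foreground, top, left):
    """Overlay `foreground` onto a copy of `background` with its top-left
    corner at (top, left); parts falling outside the canvas are clipped.
    (Illustrative sketch of the display-position edit.)"""
    out = background.copy()
    h, w = foreground.shape[:2]
    H, W = background.shape[:2]
    t0, l0 = max(top, 0), max(left, 0)
    t1, l1 = min(top + h, H), min(left + w, W)
    if t1 > t0 and l1 > l0:
        out[t0:t1, l0:l1] = foreground[t0 - top:t1 - top, l0 - left:l1 - left]
    return out

def scale_nearest(img, factor):
    """Integer nearest-neighbour resize, standing in for the size edit."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

bg = np.zeros((6, 6, 3), dtype=np.uint8)
fg = np.full((2, 2, 3), 255, dtype=np.uint8)
edited = paste_foreground(bg, fg, 1, 1)  # user drags the subject to (1, 1)
```

A sliding input would simply call `paste_foreground` again with updated coordinates before fusion.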
Optionally, before the image fusion processing is performed on the at least one frame of target background image and the at least one frame of target foreground image, the method further includes:
displaying at least one target background thumbnail and at least one target foreground thumbnail;
receiving a second input of a target thumbnail by a user;
and in response to the second input, performing a second editing process on an image corresponding to the target thumbnail.
In this embodiment, before the image fusion processing is performed on the target background image and the target foreground image, a target background thumbnail and a target foreground thumbnail are displayed, where the target background thumbnail is a thumbnail of the target background image and the target foreground thumbnail is a thumbnail of the target foreground image.
In a case where a second input of the user to the target thumbnail is received, a second editing process is performed on the image corresponding to the target thumbnail. Wherein the target thumbnail includes at least one of: at least one of the target background thumbnails, at least one of the target foreground thumbnails. The second editing process includes, but is not limited to, adding text information to an image corresponding to the target thumbnail, adding a filter effect to an image corresponding to the target thumbnail, adding audio information to an image corresponding to the target thumbnail, changing a size of a subject in an image corresponding to the target thumbnail, and adjusting a display position of the subject in an image corresponding to the target thumbnail.
The second input may be a click input of the target thumbnail by the user, or a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to the actual use requirement, and the embodiment is not specifically limited herein.
An alternative embodiment is that the second input is an input for a target background thumbnail, in which case a second editing process is performed on the image corresponding to the target background thumbnail in response to the second input.
In another alternative embodiment, the second input is an input for a target foreground thumbnail, in which case a second editing process is performed on an image corresponding to the target foreground thumbnail in response to the second input.
In another alternative embodiment, the second input is an input for a target background thumbnail and a target foreground thumbnail, in which case a second editing process is performed on an image corresponding to the target background thumbnail and an image corresponding to the target foreground thumbnail in response to the second input.
For ease of understanding, referring to fig. 6, in the implementation scenario illustrated in fig. 6, in the photographing preview interface 601, an image corresponding to a target foreground thumbnail and an image corresponding to a target background thumbnail are displayed, and a user may perform input on the target foreground thumbnail and/or the target background thumbnail to edit the image corresponding to the target foreground thumbnail and/or the image corresponding to the target background thumbnail.
In this embodiment, by performing the second editing process on the image corresponding to the target thumbnail, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
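Thumbnail-driven editing as described above can be sketched as follows; the strided-slicing thumbnail and the helper names are illustrative assumptions rather than the embodiment's implementation.

```python
import numpy as np

def make_thumbnail(img, step=4):
    """Nearest-neighbour downsample by strided slicing: a cheap stand-in
    for generating the target background/foreground thumbnails."""
    return img[::step, ::step]

def apply_edit_via_thumbnail(full_images, index, edit):
    """Map an edit chosen on the `index`-th thumbnail back onto the
    corresponding full-resolution frame (hypothetical helper)."""
    full_images[index] = edit(full_images[index])
    return full_images

frames = [np.full((32, 32, 3), v, dtype=np.uint8) for v in (10, 20)]
thumbs = [make_thumbnail(f) for f in frames]
# The user taps the second thumbnail and picks an invert "filter":
frames = apply_edit_via_thumbnail(frames, 1, lambda im: 255 - im)
```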
Optionally, the shooting preview interface includes a third display area and a fourth display area, where the third display area is used to display the at least one frame of first image, the first image being a background image, and the fourth display area is used to display at least one frame of foreground preview image acquired by the camera.
In this embodiment, the shooting preview interface further includes a third display area and a fourth display area, where the third display area displays at least one frame of first image and the fourth display area displays at least one frame of foreground preview image; the first image is a background image, and the foreground preview image is an image acquired by the camera.
For example, the first image displayed in the third display area may include only the shooting background, that is, the first image does not contain the shooting subject but contains a preset area reserved for the shooting subject; the foreground preview image displayed in the fourth display area is an image of the shooting subject captured by the user with the camera. In this case, when using the camera, the user needs to bring the shooting subject into the preset area of the first image: for example, when the outline of the preset area is a full-body outline, the user can strike a pose matching that outline; when the preset area is a head outline, the user adjusts the head into the preset area of the third display area. The user can then operate the related controls in the shooting preview interface to complete shooting.
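The check that the shooting subject lies within the preset area can be sketched as a mask-overlap test; the 90% threshold and the helper names are illustrative assumptions, not taken from the embodiment.

```python
import numpy as np

def fraction_inside(subject_mask, preset_mask):
    """Fraction of detected subject pixels that fall inside the preset area."""
    subject = int(subject_mask.sum())
    if subject == 0:
        return 0.0
    return int(np.logical_and(subject_mask, preset_mask).sum()) / subject

def subject_in_position(subject_mask, preset_mask, threshold=0.9):
    """True when enough of the subject overlaps the preset outline region."""
    return fraction_inside(subject_mask, preset_mask) >= threshold

preset = np.zeros((10, 10), dtype=bool); preset[2:8, 2:8] = True    # outline region
subject = np.zeros((10, 10), dtype=bool); subject[3:7, 3:7] = True  # posed subject
shifted = np.zeros((10, 10), dtype=bool); shifted[0:4, 0:4] = True  # off-position subject
```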
Optionally, the method further comprises:
receiving a third input of a user under the condition that the third display area displays a second target image in the at least one frame of first images and the fourth display area displays a target foreground preview image;
and responding to the third input, and obtaining a first video image.
In this embodiment, the second target image may be understood as the image corresponding to a pause frame in the video frame sequence corresponding to the first image. It should be appreciated that the video frame sequence stops playing when its current frame is a pause frame, and updating of the first image is paused while the third display area displays the second target image. That is, when the image displayed in the third display area corresponds to a pause frame in the video frame sequence, updating of the background image is paused in the third display area. When the pause frame includes only a background image, the first image is the pause frame itself; when the pause frame includes both a background image and a foreground image, the first image may be the background image segmented out of the pause frame.
And receiving a third input of the user when the third display area displays the second target image and the fourth display area displays the foreground preview image.
The third input may be a click input of the user on the target control in the shooting preview interface, or a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to the actual use requirement, and the embodiment is not specifically limited herein.
And after receiving a third input, responding to the third input, and performing image fusion processing on the second target image and the target foreground preview image to obtain a first video image. The first video image is a part of the target video, that is, the target video includes the first video image.
Illustratively, a user wants to shoot a dance video but, lacking solid dance fundamentals, cannot complete the dance moves in one continuous take. In this case, a plurality of pause frames may be determined from the dance video; when the dance video plays to a target pause frame, image segmentation processing is performed on the target pause frame, the segmented background image is used as the second target image, and the second target image is displayed on the shooting preview interface. The user strikes the dance pose indicated by the displayed second target image to obtain a target foreground preview image, and then performs the third input to generate the first video image. After this shot is completed, the next pause frame is played and the user continues shooting to obtain a second video image; based on the multiple frames of video images obtained in this way, a dance video with the user as the shooting subject is obtained.
In this embodiment, under the condition that a third input of the user is received, image fusion processing is performed on the second target image and the shot target foreground preview image, so as to obtain a first video image, and further generate a target video with special effect.
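The pause-frame shooting flow above can be sketched as a loop over the background video; all names are hypothetical, and a simple mask composite stands in for the embodiment's image fusion processing.

```python
import numpy as np

def shoot_with_pause_frames(background_frames, pause_indices, capture_subject):
    """Walk the background video; at each pause frame, 'capture' the user's
    pose via capture_subject() and fuse it in, otherwise keep the frame.
    (Illustrative sketch; capture_subject models the third input + camera.)"""
    out = []
    for i, frame in enumerate(background_frames):
        if i in pause_indices:
            fg, mask = capture_subject()
            frame = np.where(mask[..., None], fg, frame)  # fuse subject into background
        out.append(frame)
    return out

frames = [np.full((4, 4, 3), i * 10, dtype=np.uint8) for i in range(5)]

def fake_capture():
    fg = np.full((4, 4, 3), 200, dtype=np.uint8)
    mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
    return fg, mask

video = shoot_with_pause_frames(frames, {1, 3}, fake_capture)
```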
Optionally, after receiving the third input from the user, the method further includes:
and displaying a third target image in the at least one frame of first image in the third display area in response to the third input.
In this embodiment, the third target image may be understood as a pause frame in the video frame sequence corresponding to the first image. And after receiving the third input, continuing to display the first images according to the sequence of the first images in the video frame sequence until the first images are displayed to a pause frame, namely a third target image, wherein the third display area pauses the display of the first images, and after shooting to obtain the target foreground preview image and receiving the input of a user, continuing to display the first images.
In this embodiment, a third target image is displayed in a third display area, and further, an image fusion process is performed on the third target image and a captured target foreground preview image, so as to generate a target video with a special effect.
Optionally, after the obtaining the first video image, the method further includes:
displaying the first video image;
receiving a fourth input from the user;
and responding to the fourth input, and performing third editing processing on the first video image.
In this embodiment, after the first video image is obtained, the first video image is displayed, and a fourth input of the first video image by the user is received.
The fourth input may be a click input of the first video image by the user, or a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to the actual use requirement, and the embodiment is not specifically limited herein.
And in response to the fourth input, performing a third editing process on the first video image. The third editing process includes, but is not limited to, adding text information to the first video image, adding a filter effect to the first video image, adding audio information to the first video image, changing the size of a subject in the first video image, and adjusting the display position of the first video image.
Referring to fig. 7, in the implementation scenario illustrated in fig. 7, a control display field 701 in fig. 7 displays a plurality of controls, such as "composition", "repair", "adjustment", "graffiti", "filter" and "text". The user may perform a touch input on the controls to edit the first video image.
In this embodiment, by performing the third editing process on the first video image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
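A minimal sketch of one "filter effect" from the third editing process, implemented as a linear contrast/brightness adjustment with clipping; the parameter values are illustrative assumptions.

```python
import numpy as np

def apply_filter(img, contrast=1.2, brightness=10):
    """A minimal 'filter effect': out = clip(img * contrast + brightness).
    Stands in for the filter option in the third editing process."""
    out = img.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((2, 2, 3), 100, dtype=np.uint8)
filtered = apply_filter(frame)  # 100 * 1.2 + 10 = 130 per channel
```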
In the video shooting method provided by the embodiments of the present application, the execution subject may be a video shooting device, or a control module in the video shooting device for executing the video shooting method. In the embodiments of the present application, the video shooting device provided herein is described by taking a video shooting device executing the video shooting method as an example.
Referring to fig. 8, fig. 8 is a block diagram of a video capturing apparatus according to an embodiment of the present application. As shown in fig. 8, the video photographing apparatus 800 includes:
the control module 801 is configured to control the camera to acquire at least one frame of second image when the at least one frame of first image is displayed on the shooting preview interface;
a generating module 802, configured to generate a target video based on the at least one frame of first image and the at least one frame of second image.
In this embodiment, the user does not need to use a post-processing tool to process the image acquired by the camera, which reduces the complexity of producing a special-effect video and makes such production more convenient.
Optionally, the generating module 802 is specifically configured to:
performing image segmentation processing on the at least one frame of first image to obtain at least one frame of target background image, and performing image segmentation processing on the at least one frame of second image to obtain at least one frame of target foreground image;
and carrying out image fusion processing on the at least one frame of target background image and the at least one frame of target foreground image to obtain the target video.
In this embodiment, the first image and the second image are respectively subjected to image segmentation to obtain the target background image and the target foreground image, and further, the target background image and the target foreground image are subjected to image fusion to obtain the target video with the special effect.
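The segmentation-then-fusion pipeline just described can be sketched as follows. A real device would use a trained portrait-segmentation model to separate foreground from background; the brightness threshold below is only a placeholder for illustration, and all function names are assumptions, not part of the disclosure.

```python
import numpy as np

def segment_foreground(frame: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Return a boolean foreground mask for an RGB frame.

    Placeholder segmentation: pixels brighter than `threshold` count as
    foreground. A production pipeline would use a segmentation network.
    """
    luma = frame.mean(axis=-1)
    return luma > threshold

def fuse(background: np.ndarray, foreground: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Composite foreground pixels over the target background image."""
    out = background.copy()
    out[mask] = foreground[mask]
    return out

# One 'first image' frame (background source) and one 'second image' frame.
bg = np.zeros((4, 4, 3), dtype=np.uint8)          # target background: black
fg = np.full((4, 4, 3), 200, dtype=np.uint8)      # second image: bright subject
fg[0, 0] = 0                                      # this pixel is background in the second image
mask = segment_foreground(fg)
fused = fuse(bg, fg, mask)
```

Running this per frame pair and concatenating the fused frames would yield the target video; frame alignment and re-encoding are omitted here.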
Optionally, the video capturing apparatus 800 further includes:
the first display module is used for displaying a first target background image and a first target foreground image, wherein the first target background image is one frame of image in the at least one frame of target background image, and the first target foreground image is one frame of image in the at least one frame of target foreground image;
a first receiving module for receiving a first input by a user of a first target image, the first target image comprising at least one of: a first target background image, a first target foreground image;
and a first processing module for executing a first editing process on the first target image in response to the first input.
In this embodiment, by performing the first editing process on the first target image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
Optionally, the video capturing apparatus 800 further includes:
the second display module is used for displaying at least one target background thumbnail and at least one target foreground thumbnail, wherein the at least one target background thumbnail is a thumbnail of the at least one frame of target background image, and the at least one target foreground thumbnail is a thumbnail of the at least one frame of target foreground image;
a second receiving module for receiving a second input of a target thumbnail by a user, the target thumbnail including at least one of: at least one of the target background thumbnails, at least one of the target foreground thumbnails;
and a second processing module, configured to execute a second editing process on an image corresponding to the target thumbnail in response to the second input.
In this embodiment, by performing the second editing process on the target thumbnail, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
Optionally, the video capturing apparatus 800 further includes:
a third receiving module, configured to receive a third input from a user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image;
and the response module is used for responding to the third input to obtain a first video image, and the target video comprises the first video image.
In this embodiment, under the condition that a third input of the user is received, image fusion processing is performed on the second target image and the shot target foreground preview image, so as to obtain a first video image, and further generate a target video with special effect.
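The fusion of the paused background frame (the second target image) with the captured target foreground preview can be sketched as an alpha composite. The per-pixel alpha-matte formulation and all names here are illustrative assumptions; the disclosure only specifies "image fusion processing" without fixing a blending method.

```python
import numpy as np

def composite_on_pause_frame(pause_frame: np.ndarray,
                             fg_preview: np.ndarray,
                             alpha: np.ndarray) -> np.ndarray:
    """Blend the captured foreground preview over the paused background
    frame using a per-pixel alpha matte with values in [0, 1]."""
    a = alpha[..., None]                      # broadcast matte over RGB channels
    blended = a * fg_preview + (1.0 - a) * pause_frame
    return blended.astype(np.uint8)

pause_frame = np.full((2, 2, 3), 50, dtype=np.float32)    # second target image
fg_preview = np.full((2, 2, 3), 250, dtype=np.float32)    # target foreground preview
alpha = np.array([[1.0, 0.0],
                  [0.5, 1.0]], dtype=np.float32)          # per-pixel matte
first_video_image = composite_on_pause_frame(pause_frame, fg_preview, alpha)
```

Each third input would produce one such first video image; the sequence of these images, in capture order, forms the target video.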
Optionally, the video capturing apparatus 800 further includes:
and a third display module, configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
In this embodiment, a third target image is displayed in a third display area, and further, an image fusion process is performed on the third target image and a captured target foreground preview image, so as to generate a target video with a special effect.
Optionally, the video capturing apparatus 800 further includes:
a fourth display module for displaying the first video image;
a fourth receiving module for receiving a fourth input of the user;
and a third processing module, configured to perform a third editing process on the first video image in response to the fourth input.
In this embodiment, by performing the third editing process on the first video image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
The video shooting device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. Illustratively, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), or the like, and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (TV), a teller machine, a self-service machine, or the like, which is not particularly limited in the embodiments of the present application.
The video capturing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video shooting device provided in the embodiment of the present application can implement each process implemented by the embodiment of the method of fig. 1, and achieve the same technical effects, so as to avoid repetition, and will not be described herein again.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 900 including a processor 901 and a memory 902, where the memory 902 stores a program or instructions executable on the processor 901. When executed by the processor 901, the program or instructions implement each step of the video shooting method embodiment and can achieve the same technical effects; to avoid repetition, details are not described herein again.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 through a power management system to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which are not described in detail herein.
The processor 1010 is further configured to control the camera to acquire at least one frame of second image when the at least one frame of first image is displayed on the shooting preview interface;
a target video is generated based on the at least one first image and the at least one second image.
In this embodiment, the user does not need to use a post-processing tool to process the image acquired by the camera, which reduces the complexity of producing a special-effect video and makes such production more convenient.
The processor 1010 is further configured to perform image segmentation processing on the at least one first image to obtain at least one target background image, and perform image segmentation processing on the at least one second image to obtain at least one target foreground image;
and carrying out image fusion processing on the at least one frame of target background image and the at least one frame of target foreground image to obtain the target video.
In this embodiment, the first image and the second image are respectively subjected to image segmentation to obtain the target background image and the target foreground image, and further, the target background image and the target foreground image are subjected to image fusion to obtain the target video with the special effect.
The display unit 1006 is further configured to display a first target background image and a first target foreground image;
a user input unit 1007 also for receiving a first input of a first target image by a user;
the processor 1010 is further configured to perform a first editing process on the first target image in response to the first input.
In this embodiment, by performing the first editing process on the first target image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
Wherein, the display unit 1006 is further configured to display at least one target background thumbnail and at least one target foreground thumbnail;
a user input unit 1007 also for receiving a second input of a target thumbnail by a user;
the processor 1010 is further configured to perform a second editing process on an image corresponding to the target thumbnail in response to the second input.
In this embodiment, by performing the second editing process on the target thumbnail, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
Wherein, the user input unit 1007 is further configured to receive a third input of the user when the third display region displays a second target image in the at least one frame of the first image and the fourth display region displays a target foreground preview image;
the processor 1010 is further configured to obtain a first video image in response to the third input.
In this embodiment, under the condition that a third input of the user is received, image fusion processing is performed on the second target image and the shot target foreground preview image, so as to obtain a first video image, and further generate a target video with special effect.
Wherein the display unit 1006 is further configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
In this embodiment, a third target image is displayed in a third display area, and further, an image fusion process is performed on the third target image and a captured target foreground preview image, so as to generate a target video with a special effect.
Wherein, the display unit 1006 is further configured to display the first video image;
a user input unit 1007 also for receiving a fourth input of a user;
the processor 1010 is further configured to perform a third editing process on the first video image in response to the fourth input.
In this embodiment, by performing the third editing process on the first video image, the display effect of the target video can be adjusted, and a video satisfying the user can be obtained.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch-Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the processes of the embodiment of the video shooting method are implemented, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video shooting method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-level chips, system chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video capturing method and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the reverse order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, or by means of hardware, though in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application may be embodied essentially, or in the part contributing to the prior art, in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many variations may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, and such variations also fall within the protection of the present application.

Claims (15)

1. A video photographing method, comprising:
under the condition that at least one frame of first image is displayed on a shooting preview interface, controlling a camera to acquire at least one frame of second image;
generating a target video based on the at least one first image and the at least one second image;
the shooting preview interface comprises a third display area and a fourth display area, the third display area is used for displaying the at least one frame of first image, the first image is a background image, the fourth display area is used for displaying at least one frame of second image acquired by the camera, and the second image comprises a foreground preview image; the generating the target video based on the at least one first image and the at least one second image includes:
receiving a third input of a user under the condition that a second target image in the at least one frame of first image is displayed in the third display area and a target foreground preview image is displayed in the fourth display area, wherein the second target image is an image corresponding to a pause frame in a video frame sequence corresponding to the first image;
and responding to the third input, performing image fusion processing on the second target image and the target foreground preview image to obtain a first video image, and generating a target video based on the first video image.
2. The method of claim 1, wherein the capture preview interface includes a first display area for displaying the at least one frame of first image and a second display area for displaying the at least one frame of second image captured by the camera.
3. The method of claim 1, wherein generating the target video based on the at least one frame of the first image and the at least one frame of the second image comprises:
performing image segmentation processing on the at least one frame of first image to obtain at least one frame of target background image, and performing image segmentation processing on the at least one frame of second image to obtain at least one frame of target foreground image;
and carrying out image fusion processing on the at least one frame of target background image and the at least one frame of target foreground image to obtain the target video.
4. A method according to claim 3, wherein said image fusion processing of said at least one frame of target background image and said at least one frame of target foreground image is preceded by:
displaying a first target background image and a first target foreground image, wherein the first target background image is one frame of image in the at least one frame of target background image, and the first target foreground image is one frame of image in the at least one frame of target foreground image;
receiving a first input from a user to a first target image, the first target image comprising at least one of: a first target background image, a first target foreground image;
a first editing process is performed on the first target image in response to the first input.
5. A method according to claim 3, wherein said image fusion processing of said at least one frame of target background image and said at least one frame of target foreground image is preceded by:
displaying at least one target background thumbnail and at least one target foreground thumbnail, wherein the at least one target background thumbnail is a thumbnail of the at least one frame of target background image, and the at least one target foreground thumbnail is a thumbnail of the at least one frame of target foreground image;
receiving a second input from a user to a target thumbnail, the target thumbnail comprising at least one of: at least one of the target background thumbnails, at least one of the target foreground thumbnails;
and in response to the second input, performing a second editing process on an image corresponding to the target thumbnail.
6. The method of claim 1, further comprising, after receiving the third input from the user:
and displaying a third target image in the at least one frame of first image in the third display area in response to the third input.
7. The method of claim 1, wherein after the obtaining the first video image, further comprising:
displaying the first video image;
receiving a fourth input from the user;
and responding to the fourth input, and performing third editing processing on the first video image.
8. A video photographing apparatus, comprising:
the control module is used for controlling the camera to acquire at least one frame of second image under the condition that the shooting preview interface displays at least one frame of first image;
the generation module is used for generating a target video based on the at least one frame of first image and the at least one frame of second image;
the shooting preview interface comprises a third display area and a fourth display area, the third display area is used for displaying the at least one frame of first image, the first image is a background image, the fourth display area is used for displaying at least one frame of second image acquired by the camera, and the second image comprises a foreground preview image; the generation module comprises:
a third receiving module, configured to receive a third input from a user when the second target image in the at least one frame of the first image is displayed in the third display area and the target foreground preview image is displayed in the fourth display area, where the second target image is an image corresponding to a pause frame in a video frame sequence corresponding to the first image;
and the response module is used for responding to the third input, carrying out image fusion processing on the second target image and the target foreground preview image to obtain a first video image, and generating a target video based on the first video image.
9. The apparatus of claim 8, wherein the generating module is specifically configured to:
performing image segmentation processing on the at least one frame of first image to obtain at least one frame of target background image, and performing image segmentation processing on the at least one frame of second image to obtain at least one frame of target foreground image;
and carrying out image fusion processing on the at least one frame of target background image and the at least one frame of target foreground image to obtain the target video.
10. The apparatus of claim 9, wherein the video capture apparatus further comprises:
the first display module is used for displaying a first target background image and a first target foreground image, wherein the first target background image is one frame of image in the at least one frame of target background image, and the first target foreground image is one frame of image in the at least one frame of target foreground image;
a first receiving module for receiving a first input by a user of a first target image, the first target image comprising at least one of: a first target background image, a first target foreground image;
and a first processing module for executing a first editing process on the first target image in response to the first input.
11. The apparatus of claim 9, wherein the video capture apparatus further comprises:
the second display module is used for displaying at least one target background thumbnail and at least one target foreground thumbnail, wherein the at least one target background thumbnail is a thumbnail of the at least one frame of target background image, and the at least one target foreground thumbnail is a thumbnail of the at least one frame of target foreground image;
a second receiving module for receiving a second input of a target thumbnail by a user, the target thumbnail including at least one of: at least one of the target background thumbnails, at least one of the target foreground thumbnails;
and a second processing module, configured to execute a second editing process on an image corresponding to the target thumbnail in response to the second input.
12. The apparatus of claim 8, wherein the video capture apparatus further comprises:
and a third display module, configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
13. The apparatus of claim 12, wherein the video capture apparatus further comprises:
a fourth display module for displaying the first video image;
a fourth receiving module for receiving a fourth input of the user;
and a third processing module, configured to perform a third editing process on the first video image in response to the fourth input.
14. An electronic device comprising a processor and a memory storing a computer executable program executable on the processor, which when executed by the processor performs the steps of the video capture method of any of claims 1-7.
15. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer executable program, which when executed by a processor, implements the steps of the video capturing method according to any of claims 1 to 7.
Similar Documents

Publication Publication Date Title
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN113596294A (en) Shooting method and device and electronic equipment
WO2023174223A1 (en) Video recording method and apparatus, and electronic device
CN112333382A (en) Shooting method and device and electronic equipment
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN114143455B (en) Shooting method and device and electronic equipment
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114025237B (en) Video generation method and device and electronic equipment
CN114390205B (en) Shooting method and device and electronic equipment
CN115631109A (en) Image processing method, image processing device and electronic equipment
CN115499589A (en) Shooting method, shooting device, electronic equipment and medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN115037874A (en) Photographing method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN114285922A (en) Screenshot method, screenshot device, electronic equipment and media
CN114157810B (en) Shooting method, shooting device, electronic equipment and medium
CN115103112B (en) Lens control method and electronic equipment
CN114173178B (en) Video playing method, video playing device, electronic equipment and readable storage medium
CN116527829A (en) Video generation method and device
CN116847187A (en) Shooting method, shooting device, electronic equipment and storage medium
CN116156305A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN116389665A (en) Video recording method and device, electronic equipment and readable storage medium
CN115174812A (en) Video generation method, video generation device and electronic equipment
CN115589459A (en) Video recording method and device
CN114745506A (en) Video processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant