WO2023093669A1 - Video shooting method and apparatus, electronic device, and storage medium - Google Patents

Video shooting method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023093669A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
frame
video
input
Prior art date
Application number
PCT/CN2022/133206
Other languages
English (en)
French (fr)
Inventor
Ji Xiaofeng (冀晓风)
Original Assignee
Vivo Mobile Communication Co., Ltd. (维沃移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co., Ltd.
Publication of WO2023093669A1 publication Critical patent/WO2023093669A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters

Definitions

  • The present application belongs to the technical field of photography, and specifically relates to a video shooting method and apparatus, an electronic device, and a storage medium.
  • The purpose of the embodiments of the present application is to provide a video shooting method and apparatus, an electronic device, and a storage medium that can solve the problem of cumbersome video shooting operations in the related art.
  • In a first aspect, an embodiment of the present application provides a video shooting method, the method including:
  • when the shooting preview interface displays at least one frame of a first image, controlling the camera to capture at least one frame of a second image; and generating a target video based on the at least one frame of the first image and the at least one frame of the second image.
  • In a second aspect, an embodiment of the present application provides a video shooting apparatus, the apparatus including:
  • a control module configured to control the camera to capture at least one frame of the second image when the shooting preview interface displays at least one frame of the first image; and
  • a generating module configured to generate a target video based on the at least one frame of the first image and the at least one frame of the second image.
  • In a third aspect, an embodiment of the present application provides an electronic device; the electronic device includes a processor and a memory, the memory stores programs or instructions that can run on the processor, and the programs or instructions, when executed by the processor, implement the steps of the method described in the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a readable storage medium on which a program or an instruction is stored; when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • In a fifth aspect, an embodiment of the present application provides a chip; the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run programs or instructions so as to implement the method described in the first aspect.
  • In a sixth aspect, an embodiment of the present application provides a computer program product, where the computer program product is stored in a storage medium and is executed by at least one processor to implement the method described in the first aspect.
  • In the embodiments of the present application, when at least one frame of the first image is displayed on the shooting preview interface, the camera is controlled to capture at least one frame of the second image; based on the at least one frame of the first image and the at least one frame of the second image, a target video is generated.
  • That is to say, when the first image is displayed on the shooting preview interface, the camera is controlled to capture the second image, and the target video is then generated based on the first image and the second image.
  • In this way, the user does not need to use post-processing tools to process the images collected by the camera, which reduces the complexity of making special-effect videos and makes their production more convenient.
  • FIG. 1 is a flowchart of a video shooting method provided in an embodiment of the present application
  • FIG. 2 is the first application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 3 is the second application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 4 is the third application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 5 is the fourth application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 6 is the fifth application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 7 is the sixth application scene diagram of the video shooting method provided by an embodiment of the present application.
  • FIG. 8 is a structural diagram of a video capture device provided in an embodiment of the present application.
  • FIG. 9 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 10 is a hardware structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a flowchart of a video shooting method provided by an embodiment of the present application.
  • the video shooting method provided in the embodiment of the present application includes the following steps:
  • At least one frame of the first image may be obtained by reading a local video or downloading from the Internet, and the at least one frame of the first image is displayed on the shooting preview interface; that is to say, the first image is a preset or pre-recorded image. It should be understood that when at least one frame of the first image is displayed on the shooting preview interface, the camera is controlled to capture at least one frame of the second image; that is, the second image is an image captured by the camera in real time.
  • When one frame of the first image is included, a static image is displayed; when at least two frames of the first image are included, a dynamic image or video is displayed.
  • In this step, after at least one frame of the first image and at least one frame of the second image are obtained, corresponding processing is performed on the first image and the second image to generate a target video, where the target video is a special-effect video.
  • In the embodiments of the present application, when at least one frame of the first image is displayed on the shooting preview interface, the camera is controlled to capture at least one frame of the second image; based on the at least one frame of the first image and the at least one frame of the second image, a target video is generated.
  • In other words, when the first image is displayed on the shooting preview interface, the camera is controlled to capture the second image, and the target video is then generated based on the first image and the second image.
  • Therefore, the user does not need to use post-processing tools to process the images collected by the camera, which reduces the complexity of making special-effect videos and makes their production more convenient.
  • Optionally, the shooting preview interface includes a first display area and a second display area; the first display area is used to display the at least one frame of the first image, and the second display area is used to display the at least one frame of the second image captured by the camera.
  • the shooting preview interface includes a first display area and a second display area, wherein at least one frame of the first image is displayed in the first display area, and at least one frame of the second image is displayed in the second display area.
  • the first display area 301 is located in the lower left corner of the shooting preview interface
  • the second display area 302 is located in the middle of the shooting preview interface
  • The area of the first display area 301 is smaller than that of the second display area 302.
  • the first image is a preset image.
  • the user is instructed to use the camera to record the second image according to the image content in the first image.
  • An implementation scenario of this embodiment is that the first image is displayed on the shooting preview interface 401, and the text information "Please watch and learn from the actions of the following characters" is displayed.
  • the user can click the "continue" control 402 to enter the shooting page and enter the implementation scene shown in FIG. 3 .
  • Optionally, generating the target video based on the at least one frame of the first image and the at least one frame of the second image includes:
  • image segmentation processing may be performed on the first image to obtain a target background image; image segmentation processing may be performed on the second image to obtain a target foreground image.
  • The above-mentioned image segmentation processing may use an image edge detection algorithm, a maximum inter-class variance (Otsu) algorithm, or other algorithms; the above-mentioned target foreground image includes the subject, and the above-mentioned target background image includes the background portion other than the subject.
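The maximum inter-class variance algorithm mentioned above is commonly known as Otsu's method. As an illustrative sketch only (the patent does not specify an implementation, and the function name here is hypothetical), the threshold search can be written in a few lines of NumPy:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the intensity threshold that maximizes inter-class variance.

    `gray` is a uint8 grayscale image; pixels above the threshold can be
    treated as one class (e.g. the subject) and the rest as the other.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()                      # intensity probabilities
    global_mean = float(np.dot(np.arange(256), probs))
    best_t, best_var = 0, 0.0
    w0 = 0.0    # cumulative weight of the low-intensity class
    mu0 = 0.0   # cumulative weighted mean of that class
    for t in range(256):
        w0 += probs[t]
        mu0 += t * probs[t]
        if w0 <= 0.0 or w0 >= 1.0:
            continue
        # inter-class variance for a split at threshold t
        var_between = (global_mean * w0 - mu0) ** 2 / (w0 * (1.0 - w0))
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

A binary subject mask would then be `gray > otsu_threshold(gray)`; practical segmentation pipelines typically refine such a mask with edge information or learned models, as the passage above suggests.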
  • image fusion processing is performed on the target background image and the target foreground image to obtain the target video.
  • the image corresponding to each video frame of the target video includes a background and a subject.
  • An optional implementation manner is: when the quantity of the first image is one frame, image segmentation processing is performed on that frame to obtain one frame of target background image, and the one frame of target background image is fused with the multiple frames of target foreground image to obtain the target video.
  • Another optional implementation manner is: when the number of first images is multiple frames, image segmentation processing is performed on the multiple frames of the first image to obtain multiple frames of target background image, and the multiple frames of target background image are fused with the multiple frames of target foreground image to obtain the target video.
  • In the embodiments of the present application, by performing image segmentation on the first image and the second image respectively, the target background image and the target foreground image are obtained; further, image fusion is performed on the target background image and the target foreground image to obtain the target video with special effects.
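As a sketch of the fusion step (hypothetical helper names; the patent does not prescribe a particular algorithm), both implementation manners above can be handled by reusing the background frame whenever only one is available:

```python
import numpy as np

def fuse_frames(background_frames, foreground_frames, masks):
    """Composite each foreground subject over a background frame.

    `masks[i]` is a boolean (H, W) array marking subject pixels of
    `foreground_frames[i]`. If a single background frame is supplied,
    it is reused for every foreground frame, matching the one-frame
    first-image case described above.
    """
    fused = []
    for i, (fg, mask) in enumerate(zip(foreground_frames, masks)):
        bg = background_frames[i] if len(background_frames) > 1 else background_frames[0]
        # per-pixel select: subject pixels from the foreground, rest from the background
        frame = np.where(mask[..., None], fg, bg)
        fused.append(frame)
    return fused
```

Each fused frame then becomes one video frame of the target video; a production pipeline would additionally feather or alpha-blend the mask boundary so the subject does not show a hard edge.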
  • Optionally, the first target background image and the first target foreground image are displayed, where the first target background image is a frame of the target background image, and the first target foreground image is a frame of the target foreground image.
  • In response to the first input, a first editing process is performed on the first target image.
  • the first target image includes at least one of the following: a first target background image and a first target foreground image.
  • The first editing process includes, but is not limited to, adding text information to the first target image, adding filter effects to the first target image, adding audio information to the first target image, changing the size of the first target image, and adjusting the display position of the first target image.
  • The above-mentioned first input may be the user's click input on the first target image, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not specifically limited in this embodiment.
  • When the first input is an input for the first target background image, in response to the first input, the first editing process is performed on the first target background image.
  • When the first input is an input for the first target foreground image, in response to the first input, the first editing process is performed on the first target foreground image.
  • When the first input is an input for both the first target background image and the first target foreground image, in response to the first input, the first editing process is performed on both the first target background image and the first target foreground image.
  • The first target background image and the first target foreground image are displayed on the shooting preview interface 501, and the user can adjust the size of the subject in the above images to implement editing processing on the first target image.
  • the user may perform a sliding operation on the first target foreground image, thereby adjusting the display position of the first target foreground image.
  • the first target foreground image and the first target background image partially overlap, and a fused image with a reasonable composition can be obtained by adjusting the display position of the first target foreground image or the first target background image.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • Optionally, a target background thumbnail and a target foreground thumbnail are displayed, where the target background thumbnail is a thumbnail of the target background image, and the target foreground thumbnail is a thumbnail of the target foreground image.
  • In response to the second input, a second editing process is performed on the image corresponding to the target thumbnail.
  • the target thumbnail includes at least one of the following: at least one of the target background thumbnails, and at least one of the target foreground thumbnails.
  • The second editing process includes, but is not limited to, adding text information to the image corresponding to the target thumbnail, adding a filter effect to the image corresponding to the target thumbnail, adding audio information to the image corresponding to the target thumbnail, changing the size of the subject in the image corresponding to the target thumbnail, and adjusting the display position of the subject in the image corresponding to the target thumbnail.
  • The above-mentioned second input may be the user's click input on the target thumbnail, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not specifically limited in this embodiment.
  • When the second input is an input for the target background thumbnail, in response to the second input, the second editing process is performed on the image corresponding to the target background thumbnail.
  • When the second input is an input for the target foreground thumbnail, in response to the second input, the second editing process is performed on the image corresponding to the target foreground thumbnail.
  • When the second input is an input for both the target background thumbnail and the target foreground thumbnail, in response to the second input, the second editing process is performed on the images corresponding to the target background thumbnail and the target foreground thumbnail.
  • The image corresponding to the target foreground thumbnail and the image corresponding to the target background thumbnail are displayed on the shooting preview interface 601; the user may operate on the target foreground thumbnail and/or the target background thumbnail to edit the image corresponding to the target foreground thumbnail and/or the image corresponding to the target background thumbnail.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • Optionally, the shooting preview interface includes a third display area and a fourth display area, where the third display area is used to display the at least one frame of the first image, the first image is a background image, and the fourth display area is used to display at least one frame of the foreground preview image collected by the camera.
  • That is, the shooting preview interface further includes a third display area and a fourth display area; the third display area displays at least one frame of the first image, and the fourth display area displays at least one frame of the foreground preview image, where the above-mentioned first image is a background image, and the above-mentioned foreground preview image is an image collected by the camera.
  • The first image displayed in the third display area may include only the shooting background, that is, the first image does not include the subject and includes a preset area set for the subject; the foreground preview image displayed in the fourth display area is the subject image captured by the user using the camera.
  • When using the camera, the user needs to adjust the display position of the subject to the preset area in the first image.
  • For example, when the outline of the preset area is the outline of a full-body photo, the user can pose a corresponding posture; when the preset area is the outline of an avatar, the user can adjust his or her own avatar to the preset area in the third display area.
  • the user can operate related controls in the shooting preview interface to complete the shooting.
  • the method also includes:
  • When the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image, a third input from the user is received.
  • the above-mentioned second target image may be understood as an image corresponding to a pause frame in the sequence of video frames corresponding to the first image. It should be understood that when the current frame of the video frame sequence is a pause frame, the video frame sequence stops playing.
  • When the second target image is displayed in the third display area, updating of the first image is suspended. That is to say, when the image displayed in the third display area is the image corresponding to the pause frame in the video frame sequence, updating of the background image in the third display area is paused.
  • When the pause frame includes only a background image, the first image is the pause frame; when the pause frame includes both the background image and the subject, the first image may be a background image segmented out of the pause frame.
  • The above-mentioned third input may be the user's click input on a target control in the shooting preview interface, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not specifically limited in this embodiment.
  • image fusion processing is performed on the second target image and the target foreground preview image to obtain the first video image.
  • the above-mentioned first video image is a part of the target video, that is, the target video includes the first video image.
  • For example, the user wants to shoot a dance video but cannot complete the dance moves coherently due to a lack of basic skills.
  • Multiple pause frames can be determined from the dance video; when a target pause frame is played, image segmentation processing is performed on the target pause frame, the segmented background image is used as the second target image, and the second target image is displayed on the shooting preview interface.
  • the user performs a dance action according to the displayed second target image to obtain a preview image of the target foreground, and the user performs a third input to generate the first video image.
  • After the shooting is completed, playback continues to the next pause frame, and the user can continue to shoot to obtain a second video image; based on the multiple frames of video images obtained by shooting, a dance video with the user as the subject can be obtained.
  • In the embodiments of the present application, image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and the target video with special effects is then generated.
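The pause-frame workflow above can be modeled as a small playback state machine. The following sketch uses hypothetical names (the patent defines no API): playback stops at each pause frame, a user-supplied capture callback records and fuses one segment, and playback then resumes:

```python
def run_pause_frame_session(frames, pause_frames, capture):
    """Play `frames` in order; at each frame listed in `pause_frames`,
    stop and call `capture(frame)` to record the user's segment fused
    with that frame's background, then resume playback.

    Returns the captured video segments in playback order.
    """
    segments = []
    for frame in frames:
        if frame in pause_frames:
            segment = capture(frame)  # user shoots; fusion happens here
            segments.append(segment)
        # otherwise the frame simply plays as the background preview
    return segments
```

For the dance example above, a template with pause frames at two points yields two recorded segments, which together with the template frames form the target video.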
  • the method further includes:
  • The third target image may be understood as a pause frame in the video frame sequence corresponding to the first image. After the third input is received, the first images continue to be displayed in their order in the video frame sequence; when the pause frame, that is, the third target image, is displayed, the third display area pauses updating the first image so that the next video image can be shot.
  • the third target image is displayed in the third display area, and further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
  • Optionally, after the first video image is obtained, the method further includes:
  • the first video image is displayed, and a fourth input from the user on the first video image is received.
  • The above-mentioned fourth input may be the user's click input on the first video image, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual usage requirements and is not specifically limited in this embodiment.
  • In response to the fourth input, a third editing process is performed on the first video image.
  • The third editing process includes, but is not limited to, adding text information to the first video image, adding filter effects to the first video image, adding audio information to the first video image, changing the size of the subject in the first video image, and adjusting the display position of the first video image.
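Of these operations, adding a filter effect is the most algorithmic: it amounts to a per-pixel color transform applied to the video frame. A minimal sketch follows, using a sepia filter purely as an illustrative example; the patent does not name any specific filter:

```python
import numpy as np

# Classic sepia transform: each output channel is a fixed linear
# combination of the input RGB channels.
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def apply_sepia(frame: np.ndarray) -> np.ndarray:
    """Apply a sepia filter to an (H, W, 3) uint8 RGB frame."""
    out = frame.astype(float) @ SEPIA.T   # per-pixel 3x3 color transform
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying such a transform to every frame of the first video image yields the filtered result; text or audio additions are container-level edits handled by the video pipeline rather than per-pixel math.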
  • The control display bar 701 in FIG. 7 displays multiple controls, such as "composition", "repair", "adjustment", "graffiti", "filter", and "text".
  • the user can perform touch input on these controls to edit the first video image.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the video shooting method provided in the embodiment of the present application may be executed by a video shooting device, or a control module in the video shooting device for executing the video shooting method.
  • the video capturing device provided in the embodiment of the present application is described by taking the video capturing method performed by the video capturing device as an example.
  • FIG. 8 is a structural diagram of a video capture device provided by an embodiment of the present application. As shown in FIG. 8, the video capture device 800 includes:
  • the control module 801 is configured to control the camera to capture at least one frame of the second image when the shooting preview interface displays at least one frame of the first image;
  • a generating module 802 configured to generate a target video based on the at least one frame of the first image and the at least one frame of the second image.
  • the user does not need to use a post-processing tool to process the images collected by the camera, which reduces the complexity of making special effect videos and makes the production of special effect videos more convenient.
  • the generating module 802 is specifically configured to:
  • In this way, the target background image and the target foreground image are obtained, and image fusion is then performed on the target background image and the target foreground image to obtain the target video with special effects.
  • the video capture device 800 also includes:
  • a first display module configured to display a first target background image and a first target foreground image, where the first target background image is one frame of the at least one frame of target background image, and the first target foreground image is one frame of the at least one frame of target foreground image;
  • the first receiving module is configured to receive a first input from a user on a first target image, where the first target image includes at least one of the following: a first target background image, a first target foreground image;
  • a first processing module configured to perform a first editing process on the first target image in response to the first input.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the video capture device 800 also includes:
  • a second display module configured to display at least one target background thumbnail and at least one target foreground thumbnail, where the at least one target background thumbnail is a thumbnail of the at least one frame of target background image, and the at least one target foreground thumbnail is a thumbnail of the at least one frame of target foreground image;
  • a second receiving module configured to receive a user's second input on a target thumbnail, where the target thumbnail includes at least one of the following: at least one of the target background thumbnails, and at least one of the target foreground thumbnails;
  • the second processing module is configured to execute a second editing process on the image corresponding to the target thumbnail in response to the second input.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the video capture device 800 also includes:
  • a third receiving module configured to receive the user's third input when the third display area displays the second target image in the at least one frame of the first image and the fourth display area displays the target foreground preview image;
  • a response module configured to obtain a first video image in response to the third input, and the target video includes the first video image.
  • image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and then a target video with special effects is generated.
  • the video capture device 800 also includes:
  • a third display module configured to display a third target image in the at least one frame of first images in the third display area in response to the third input.
  • the third target image is displayed in the third display area, and further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
  • the video capture device 800 also includes:
  • a fourth display module configured to display the first video image
  • a fourth receiving module configured to receive a fourth input from the user
  • a third processing module configured to perform a third editing process on the first video image in response to the fourth input.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the video capture device in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit, or a chip.
  • the electronic device may be a terminal, or other devices other than the terminal.
  • the electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), etc.
  • the video capture device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the video capture device provided in the embodiment of the present application can implement the various processes implemented in the method embodiment in FIG. 1 and achieve the same technical effect. To avoid repetition, details are not repeated here.
  • The embodiment of the present application also provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores programs or instructions that can run on the processor 901. When the programs or instructions are executed by the processor 901, the various steps of the above-mentioned video shooting method embodiments can be realized, with the same technical effect. To avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.
  • The electronic device 1000 can also include a power supply (such as a battery) for supplying power to various components; the power supply can be logically connected to the processor 1010 through a power management system, so that functions such as charging management, discharging management, and power consumption management can be realized through the power management system.
  • The structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently, and details are not repeated here.
  • the processor 1010 is further configured to control the camera to capture at least one frame of the second image when the shooting preview interface displays at least one frame of the first image, and to generate a target video based on the at least one frame of the first image and the at least one frame of the second image;
  • the user does not need to use a post-processing tool to process the images collected by the camera, which reduces the complexity of making special effect videos and makes the production of special effect videos more convenient.
  • the processor 1010 is further configured to perform image segmentation processing on the at least one frame of the first image to obtain at least one frame of the target background image, and perform image segmentation processing on the at least one frame of the second image to obtain at least one frame of the target foreground image;
  • the target background image and the target foreground image are obtained, and further, image fusion is performed on the target background image and the target foreground image to obtain the target video with special effects .
  • the display unit 1006 is further configured to display the first target background image and the first target foreground image;
  • the user input unit 1007 is further configured to receive a user's first input on the first target image;
  • the processor 1010 is further configured to, in response to the first input, execute a first editing process on the first target image.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the display unit 1006 is further configured to display at least one target background thumbnail and at least one target foreground thumbnail;
  • the user input unit 1007 is further configured to receive a second input from the user on the target thumbnail;
  • the processor 1010 is further configured to, in response to the second input, execute a second editing process on the image corresponding to the target thumbnail.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the user input unit 1007 is further configured to receive a third input from the user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image;
  • the processor 1010 is further configured to obtain a first video image in response to the third input.
  • image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and a target video with special effects is then generated.
  • the display unit 1006 is further configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
  • the third target image is displayed in the third display area, and further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
  • the display unit 1006 is also configured to display the first video image
  • the user input unit 1007 is also configured to receive a fourth input from the user;
  • the processor 1010 is further configured to, in response to the fourth input, perform a third editing process on the first video image.
  • the display effect of the target video can be adjusted to obtain a video that satisfies the user.
  • the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072 .
  • the touch panel 10071 is also called a touch screen.
  • the touch panel 10071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 10072 may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the memory 1009 can be used to store software programs as well as various data.
  • the memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instructions required by at least one function (such as a sound playing function, image playback function, etc.), etc.
  • memory 1009 may include volatile memory or nonvolatile memory, or, memory 1009 may include both volatile and nonvolatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • volatile memory can be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), or direct Rambus RAM (DRRAM).
  • the processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, and application programs, while the modem processor mainly processes wireless communication signals, such as a baseband processor. It can be understood that the foregoing modem processor may also not be integrated into the processor 1010.
  • the embodiment of the present application also provides a readable storage medium.
  • the readable storage medium stores programs or instructions.
  • when the program or instructions are executed by the processor, the various processes of the above video shooting method embodiments can be realized, and the same technical effects can be achieved; to avoid repetition, they will not be repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • the readable storage medium includes computer readable storage medium, such as computer read only memory (ROM), random access memory (RAM), magnetic disk or optical disk, and the like.
  • the embodiment of the present application further provides a chip; the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes of the above video shooting method embodiment and achieve the same technical effects; to avoid repetition, they will not be repeated here.
  • the chips mentioned in the embodiments of the present application may also be called system-level chips, chip systems, or system-on-a-chip devices.
  • the embodiment of the present application provides a computer program product; the program product is stored in a storage medium and is executed by at least one processor to implement the various processes of the above video shooting method embodiment and achieve the same technical effects; to avoid repetition, they will not be repeated here.
  • the terms "comprising", "including", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • the scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Abstract

The present application provides a video shooting method and apparatus, an electronic device, and a storage medium. The method includes: controlling a camera to capture at least one frame of a second image while a shooting preview interface displays at least one frame of a first image; and generating a target video based on the at least one frame of the first image and the at least one frame of the second image.

Description

Video shooting method and apparatus, electronic device, and storage medium
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202111421290.3, filed in China on November 26, 2021, the entire contents of which are incorporated herein by reference.
Technical field
The present application belongs to the technical field of photography, and specifically relates to a video shooting method and apparatus, an electronic device, and a storage medium.
Background
At present, with the popularization of electronic devices, more and more users use electronic devices to shoot videos.
After shooting a video, a user can process it with a post-processing tool to obtain a special-effect video. However, operating a post-processing tool in this way is cumbersome and time-consuming.
Summary
The purpose of the embodiments of the present application is to provide a video shooting method and apparatus, an electronic device, and a storage medium, which can solve the problem in the related art that video shooting operations are cumbersome.
In a first aspect, an embodiment of the present application provides a video shooting method, including:
controlling a camera to capture at least one frame of a second image while a shooting preview interface displays at least one frame of a first image; and
generating a target video based on the at least one frame of the first image and the at least one frame of the second image.
In a second aspect, an embodiment of the present application provides a video shooting apparatus, including:
a control module, configured to control a camera to capture at least one frame of a second image while a shooting preview interface displays at least one frame of a first image; and
a generation module, configured to generate a target video based on the at least one frame of the first image and the at least one frame of the second image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instructions that, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instructions to implement the method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, while the shooting preview interface displays at least one frame of the first image, the camera is controlled to capture at least one frame of the second image, and the target video is generated based on the at least one frame of the first image and the at least one frame of the second image. In this process, the user does not need to use a post-processing tool to process the images captured by the camera, which reduces the complexity of producing special-effect videos and makes their production more convenient.
Brief description of the drawings
FIG. 1 is a flowchart of the video shooting method provided by an embodiment of the present application;
FIG. 2 is the first application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 3 is the second application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 4 is the third application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 5 is the fourth application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 6 is the fifth application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 7 is the sixth application scene diagram of the video shooting method provided by an embodiment of the present application;
FIG. 8 is a structural diagram of the video shooting apparatus provided by an embodiment of the present application;
FIG. 9 is a structural diagram of an electronic device provided by an embodiment of the present application;
FIG. 10 is a hardware structure diagram of an electronic device provided by an embodiment of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, rather than to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here. In addition, objects distinguished by "first", "second", and the like are usually of one type, and the number of such objects is not limited; for example, there may be one or more first objects. Moreover, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video shooting method provided by the embodiments of the present application is described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of the video shooting method provided by an embodiment of the present application. The method includes the following steps:
S101: while a shooting preview interface displays at least one frame of a first image, control a camera to capture at least one frame of a second image.
In this step, the at least one frame of the first image may be obtained by reading a local video or by downloading from the Internet, and is displayed on the shooting preview interface; that is, the first image is a preset or pre-recorded image. It should be understood that while the shooting preview interface displays the at least one frame of the first image, the camera is controlled to capture the at least one frame of the second image; that is, the second image is an image captured by the camera in real time.
In this embodiment, when there is only one frame of the first image, a static image is displayed; when there are at least two frames of the first image, a dynamic image or a video is displayed.
In the implementation scene shown in FIG. 2, after the user turns on the camera, the user can tap the "More" control 201 to enter the "person replacement" mode; after entering this mode, at least one frame of the first image is displayed on the shooting preview interface.
S102: generate a target video based on the at least one frame of the first image and the at least one frame of the second image.
In this step, after the at least one frame of the first image and the at least one frame of the second image are obtained, they are processed accordingly to generate the target video, where the target video is a special-effect video. For the specific technical solution of generating the target video, refer to the subsequent embodiments.
In the embodiments of the present application, while the shooting preview interface displays at least one frame of the first image, the camera is controlled to capture at least one frame of the second image, and the target video is generated based on the at least one frame of the first image and the at least one frame of the second image. In this process, the user does not need to use a post-processing tool to process the images captured by the camera, which reduces the complexity of producing special-effect videos and makes their production more convenient.
Optionally, the shooting preview interface includes a first display area and a second display area, where the first display area is used to display the at least one frame of the first image, and the second display area is used to display the at least one frame of the second image captured by the camera.
In this embodiment, the shooting preview interface includes a first display area, which displays at least one frame of the first image, and a second display area, which displays at least one frame of the second image.
As shown in FIG. 3, in one implementation scene, the first display area 301 is located in the lower-left corner of the shooting preview interface, the second display area 302 is located in the middle of the shooting preview interface, and the area of the first display area 301 is smaller than the area of the second display area 302.
In this embodiment, the first image is a preset image; by displaying the first image in the first display area of the shooting preview interface, the user is guided to record the second image with the camera according to the image content of the first image.
Referring to FIG. 4, one implementation scene of this embodiment is that the first image is displayed on the shooting preview interface 401 together with the text "Please watch and learn the following character movements". The user can tap the "Continue" control 402 to enter the shooting page and the scene shown in FIG. 3.
Optionally, generating the target video based on the at least one frame of the first image and the at least one frame of the second image includes:
performing image segmentation processing on the at least one frame of the first image to obtain at least one frame of a target background image, and performing image segmentation processing on the at least one frame of the second image to obtain at least one frame of a target foreground image; and
performing image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image to obtain the target video.
In this embodiment, image segmentation processing may be performed on the first image to obtain the target background image, and on the second image to obtain the target foreground image. The segmentation may use an image edge-detection algorithm, a maximum between-class variance algorithm, or another algorithm. The target foreground image includes the shooting subject, and the target background image includes the background portion other than the shooting subject.
After at least one frame of the target background image and at least one frame of the target foreground image are obtained, image fusion processing is performed on them to obtain the target video, where the image corresponding to each video frame of the target video includes the background and the shooting subject.
In an optional implementation, when there is one frame of the first image, image segmentation processing is performed on that frame to obtain one frame of the target background image, and that background frame is fused with multiple frames of the target foreground image to obtain the target video.
In another optional implementation, when there are multiple frames of the first image, image segmentation processing is performed on them to obtain multiple frames of the target background image, which are fused with multiple frames of the target foreground image to obtain the target video. In this embodiment, the first image and the second image are segmented to obtain the target background image and the target foreground image, respectively; the two are then fused to obtain a target video with special effects.
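The segmentation-and-fusion pipeline described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: images are modeled as 2-D lists of 0-255 grayscale values, a simple intensity threshold stands in for the edge-detection or maximum between-class variance segmentation mentioned in this embodiment, and the function names are hypothetical.

```python
def segment_foreground(frame, threshold=128):
    # Binary mask: 1 where a pixel is treated as the shooting subject.
    # A fixed threshold stands in for a real segmentation algorithm here.
    return [[1 if px >= threshold else 0 for px in row] for row in frame]


def fuse(background, foreground, mask):
    # Keep subject pixels from the foreground frame, background elsewhere.
    return [
        [fg if m else bg for bg, fg, m in zip(bg_row, fg_row, m_row)]
        for bg_row, fg_row, m_row in zip(background, foreground, mask)
    ]


first_image = [[10, 20], [30, 40]]    # pre-recorded background frame
second_image = [[0, 200], [220, 0]]   # camera frame containing the subject
mask = segment_foreground(second_image)
video_frame = fuse(first_image, second_image, mask)
print(video_frame)  # -> [[10, 200], [220, 40]]
```

Repeating the fusion for every captured frame yields the sequence of fused frames that, encoded together, would make up the target video.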
Optionally, before performing the image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image, the method further includes:
displaying a first target background image and a first target foreground image;
receiving a first input of a user on a first target image; and
in response to the first input, performing first editing processing on the first target image.
In this embodiment, after the target background image and the target foreground image are obtained, the first target background image and the first target foreground image are displayed, where the first target background image is one frame of the target background images and the first target foreground image is one frame of the target foreground images.
When the first input of the user on the first target image is received, first editing processing is performed on the first target image. The first target image includes at least one of the following: the first target background image, the first target foreground image. The first editing processing includes, but is not limited to, adding text information to the first target image, adding a filter effect to the first target image, adding audio information to the first target image, changing the size of the first target image, and adjusting the display position of the first target image.
The first input may be a tap input of the user on the first target image, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual use requirements, which is not specifically limited in this embodiment.
In an optional implementation, the first input is an input on the first target background image; in this case, in response to the first input, first editing processing is performed on the first target background image.
In another optional implementation, the first input is an input on the first target foreground image; in this case, in response to the first input, first editing processing is performed on the first target foreground image.
In another optional implementation, the first input is an input on both the first target background image and the first target foreground image; in this case, in response to the first input, first editing processing is performed on both the first target background image and the first target foreground image.
For ease of understanding, refer to FIG. 5. In the implementation scene shown in FIG. 5, the first target background image and the first target foreground image are displayed on the shooting preview interface 501, and the user can adjust the size of the shooting subject in these images, thereby editing the first target image.
For example, when the user considers that the position of the first target foreground image within the first target background image is inappropriate, the user can perform a sliding operation on the first target foreground image to adjust its display position. For example, in FIG. 5, the first target foreground image and the first target background image partially overlap; a well-composed fused image can be obtained by adjusting the display position of the first target foreground image or the first target background image.
In this embodiment, by performing the first editing processing on the first target image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
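One concrete form of the first editing processing above, adjusting where the foreground is composited over the background, can be sketched as follows. The frame representation and the `place_foreground` name are illustrative assumptions: frames are 2-D lists of grayscale values, and `None` marks transparent pixels around the subject.

```python
def place_foreground(background, foreground, top, left):
    # Paste the foreground patch onto a copy of the background at
    # (top, left); None pixels in the patch are treated as transparent.
    out = [row[:] for row in background]
    for r, f_row in enumerate(foreground):
        for c, px in enumerate(f_row):
            if px is not None:
                out[top + r][left + c] = px
    return out


background = [[0] * 4 for _ in range(4)]
subject = [[255, None],
           [255, 255]]
# A sliding input would simply change (top, left) before re-compositing.
composited = place_foreground(background, subject, 1, 2)
print(composited)
# -> [[0, 0, 0, 0], [0, 0, 255, 0], [0, 0, 255, 255], [0, 0, 0, 0]]
```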
Optionally, before performing the image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image, the method further includes:
displaying at least one target background thumbnail and at least one target foreground thumbnail;
receiving a second input of the user on a target thumbnail; and
in response to the second input, performing second editing processing on the image corresponding to the target thumbnail.
In this embodiment, before the image fusion processing is performed on the target background image and the target foreground image, a target background thumbnail and a target foreground thumbnail are displayed, where the target background thumbnail is a thumbnail of the target background image and the target foreground thumbnail is a thumbnail of the target foreground image.
When the second input of the user on the target thumbnail is received, second editing processing is performed on the image corresponding to the target thumbnail. The target thumbnail includes at least one of the following: at least one of the target background thumbnails, at least one of the target foreground thumbnails. The second editing processing includes, but is not limited to, adding text information to the image corresponding to the target thumbnail, adding a filter effect to that image, adding audio information to that image, changing the size of the shooting object in that image, and adjusting the display position of the shooting object in that image.
The second input may be a tap input of the user on the target thumbnail, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual use requirements, which is not specifically limited in this embodiment.
In an optional implementation, the second input is an input on the target background thumbnail; in this case, in response to the second input, second editing processing is performed on the image corresponding to the target background thumbnail.
In another optional implementation, the second input is an input on the target foreground thumbnail; in this case, in response to the second input, second editing processing is performed on the image corresponding to the target foreground thumbnail.
In another optional implementation, the second input is an input on both the target background thumbnail and the target foreground thumbnail; in this case, in response to the second input, second editing processing is performed on the images corresponding to both thumbnails.
For ease of understanding, refer to FIG. 6. In the implementation scene shown in FIG. 6, the image corresponding to the target foreground thumbnail and the image corresponding to the target background thumbnail are displayed on the shooting preview interface 601; the user can perform an input on the target foreground thumbnail and/or the target background thumbnail to edit the corresponding image(s).
In this embodiment, by performing the second editing processing on the image corresponding to the target thumbnail, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
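A filter effect of the kind the second editing processing may apply can be illustrated by a simple brightness adjustment on the full-size image behind a thumbnail. The representation and the `adjust_brightness` name are assumptions for illustration, not part of the application.

```python
def adjust_brightness(frame, delta):
    # Shift every grayscale pixel by delta, clamped to the 0-255 range.
    return [[max(0, min(255, px + delta)) for px in row] for row in frame]


# The image that the tapped thumbnail stands for:
thumbnail_source = [[250, 10], [128, 128]]
print(adjust_brightness(thumbnail_source, 20))  # -> [[255, 30], [148, 148]]
```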
Optionally, the shooting preview interface includes a third display area and a fourth display area, where the third display area is used to display the at least one frame of the first image, the first image is a background image, and the fourth display area is used to display at least one frame of a foreground preview image captured by the camera.
In this embodiment, the shooting preview interface further includes a third display area, which displays at least one frame of the first image, and a fourth display area, which displays at least one frame of the foreground preview image, where the first image is a background image and the foreground preview image is an image captured by the camera.
Exemplarily, the first image displayed in the third display area may include only the shooting background, i.e., no shooting subject, together with a preset region reserved for the subject; the foreground preview image displayed in the fourth display area is the subject image captured by the user with the camera. In this case, when using the camera, the user needs to move the shooting subject into the preset region of the first image. For example, when the outline of the preset region is that of a full-body shot, the user can strike a pose matching the outline; when the preset region is the outline of a head portrait, the user adjusts his or her head into the preset region of the third display area. Further, the user can operate the relevant controls on the shooting preview interface to complete the shot.
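Guiding the subject into the preset region can be reduced to a simple check: what fraction of the segmented subject pixels currently fall inside the preset outline. The sketch below is a hypothetical readiness check, not taken from the application; both masks are 2-D lists of 0/1 values of the same size.

```python
def inside_ratio(subject_mask, preset_mask):
    # Fraction of subject pixels that lie inside the preset region.
    inside = total = 0
    for s_row, p_row in zip(subject_mask, preset_mask):
        for s, p in zip(s_row, p_row):
            if s:
                total += 1
                inside += p
    return inside / total if total else 0.0


subject = [[1, 1],
           [0, 1]]
preset = [[1, 0],
          [1, 1]]
# Two of the three subject pixels fit the outline, so the ratio is 2/3;
# the interface could wait until this value is close to 1.0 before shooting.
print(inside_ratio(subject, preset))
```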
Optionally, the method further includes:
receiving a third input of the user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image; and
in response to the third input, obtaining a first video image.
In this embodiment, the second target image can be understood as the image corresponding to a pause frame in the video frame sequence corresponding to the first image. It should be understood that when the current frame of the video frame sequence is a pause frame, playback of the sequence stops; when the third display area displays the second target image, updating of the first image is paused. In other words, when the image displayed in the third display area corresponds to a pause frame in the video frame sequence, updating of the background image in the third display area is paused. When the pause frame includes only a background image, the first image is the pause frame itself; when the pause frame includes both a background image and a panoramic image, the first image may be the background image cut out from the pause frame.
When the third display area displays the second target image and the fourth display area displays the foreground preview image, the third input of the user is received.
The third input may be a tap input of the user on a target control on the shooting preview interface, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual use requirements, which is not specifically limited in this embodiment.
After the third input is received, in response to the third input, image fusion processing is performed on the second target image and the target foreground preview image to obtain the first video image, where the first video image is part of the target video, i.e., the target video includes the first video image.
Exemplarily, suppose a user wants to shoot a dance video but cannot complete the dance moves continuously for lack of basic skills. In this case, multiple pause frames can be determined from the dance video. When playback reaches a target pause frame, image segmentation processing is performed on it, the segmented background image is taken as the second target image, and this second target image is displayed on the shooting preview interface. The user strikes the dance pose according to the displayed second target image to obtain the target foreground preview image, and then performs the third input to generate the first video image. Further, after the shot is completed, playback continues to the next pause frame, and the user can continue shooting to obtain a second video image; based on the multiple video images obtained in this way, a dance video with the user as the shooting subject can be obtained.
In this embodiment, when the third input of the user is received, image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and a target video with special effects is then generated.
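The pause-frame workflow in the dance-video example can be sketched as a loop over the background frame sequence: playback passes through ordinary frames and, at each pause frame, waits for the user's captured pose before fusing and moving on. Everything here, including the `capture_pose` callback, is a hypothetical simplification in which "fusion" is just pairing the two frames.

```python
def record_with_pause_frames(background_frames, pause_indices, capture_pose):
    # Play the background sequence; at pause frames, obtain the user's
    # foreground capture (the third input) and pair it with the background.
    video = []
    for i, bg in enumerate(background_frames):
        if i in pause_indices:
            pose = capture_pose(i)      # camera frame taken at this pause
            video.append((bg, pose))    # fused video image
        else:
            video.append((bg, None))    # background-only frame
    return video


frames = ["bg0", "bg1", "bg2", "bg3"]
clip = record_with_pause_frames(frames, {1, 3}, lambda i: f"pose{i}")
print(clip)
# -> [('bg0', None), ('bg1', 'pose1'), ('bg2', None), ('bg3', 'pose3')]
```

In a real implementation, `capture_pose` would block until the user triggers the shot, and each pair would be merged by the fusion step sketched earlier.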
Optionally, after receiving the third input of the user, the method further includes:
in response to the third input, displaying a third target image in the at least one frame of the first image in the third display area.
In this embodiment, the third target image can be understood as a pause frame in the video frame sequence corresponding to the first image. After the third input is received, the first images continue to be displayed in their order in the video frame sequence until the next pause frame, i.e., the third target image, is displayed; the third display area then pauses the display of the first image, and resumes displaying the first image only after the target foreground preview image is captured and the user's input is received.
In this embodiment, the third target image is displayed in the third display area; further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
Optionally, after obtaining the first video image, the method further includes:
displaying the first video image;
receiving a fourth input of the user; and
in response to the fourth input, performing third editing processing on the first video image.
In this embodiment, after the first video image is obtained, it is displayed, and the fourth input of the user on the first video image is received.
The fourth input may be a tap input of the user on the first video image, a voice instruction input by the user, or a specific gesture input by the user, and may be determined according to actual use requirements, which is not specifically limited in this embodiment.
In response to the fourth input, third editing processing is performed on the first video image. The third editing processing includes, but is not limited to, adding text information to the first video image, adding a filter effect to the first video image, adding audio information to the first video image, changing the size of the shooting object in the first video image, and adjusting the display position of the first video image.
Referring to FIG. 7, in the implementation scene shown in FIG. 7, the control display bar 701 shows multiple controls, such as "Composition", "Repair", "Adjust", "Doodle", "Filter", and "Text". The user can perform touch inputs on these controls to edit the first video image.
In this embodiment, by performing the third editing processing on the first video image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
The video shooting method provided by the embodiments of the present application may be executed by a video shooting apparatus, or by a control module in the video shooting apparatus for executing the video shooting method. In the embodiments of the present application, the video shooting apparatus executing the video shooting method is taken as an example to describe the video shooting apparatus provided by the embodiments of the present application.
Referring to FIG. 8, FIG. 8 is a structural diagram of the video shooting apparatus provided by an embodiment of the present application. As shown in FIG. 8, the video shooting apparatus 800 includes:
a control module 801, configured to control the camera to capture at least one frame of the second image while the shooting preview interface displays at least one frame of the first image; and
a generation module 802, configured to generate the target video based on the at least one frame of the first image and the at least one frame of the second image.
In this embodiment, the user does not need to use a post-processing tool to process the images captured by the camera, which reduces the complexity of producing special-effect videos and makes their production more convenient.
Optionally, the generation module 802 is specifically configured to:
perform image segmentation processing on the at least one frame of the first image to obtain at least one frame of the target background image, and perform image segmentation processing on the at least one frame of the second image to obtain at least one frame of the target foreground image; and
perform image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image to obtain the target video.
In this embodiment, the first image and the second image are segmented to obtain the target background image and the target foreground image, respectively; the two are then fused to obtain a target video with special effects.
Optionally, the video shooting apparatus 800 further includes:
a first display module, configured to display a first target background image and a first target foreground image, the first target background image being one frame of the at least one frame of the target background image, and the first target foreground image being one frame of the at least one frame of the target foreground image;
a first receiving module, configured to receive a first input of a user on a first target image, the first target image including at least one of the following: the first target background image, the first target foreground image; and
a first processing module, configured to perform first editing processing on the first target image in response to the first input.
In this embodiment, by performing the first editing processing on the first target image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
Optionally, the video shooting apparatus 800 further includes:
a second display module, configured to display at least one target background thumbnail and at least one target foreground thumbnail, the at least one target background thumbnail being a thumbnail of the at least one frame of the target background image, and the at least one target foreground thumbnail being a thumbnail of the at least one frame of the target foreground image;
a second receiving module, configured to receive a second input of the user on a target thumbnail, the target thumbnail including at least one of the following: at least one of the target background thumbnails, at least one of the target foreground thumbnails; and
a second processing module, configured to perform second editing processing on the image corresponding to the target thumbnail in response to the second input.
In this embodiment, by performing the second editing processing on the target thumbnail, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
Optionally, the video shooting apparatus 800 further includes:
a third receiving module, configured to receive a third input of the user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image; and
a response module, configured to obtain a first video image in response to the third input, the target video including the first video image.
In this embodiment, when the third input of the user is received, image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and a target video with special effects is then generated.
Optionally, the video shooting apparatus 800 further includes:
a third display module, configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
In this embodiment, the third target image is displayed in the third display area; further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
Optionally, the video shooting apparatus 800 further includes:
a fourth display module, configured to display the first video image;
a fourth receiving module, configured to receive a fourth input of the user; and
a third processing module, configured to perform third editing processing on the first video image in response to the fourth input.
In this embodiment, by performing the third editing processing on the first video image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
The video shooting apparatus in the embodiments of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. Exemplarily, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
The video shooting apparatus in the embodiments of the present application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video shooting apparatus provided by the embodiments of the present application can implement each process implemented by the method embodiment of FIG. 1 and achieve the same technical effects; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 9, an embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902 storing a program or instructions executable on the processor 901; when executed by the processor 901, the program or instructions implement each step of the above video shooting method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile and non-mobile electronic devices described above.
FIG. 10 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010, among other components.
Those skilled in the art will understand that the electronic device 1000 may further include a power supply (such as a battery) for supplying power to the components; the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The structure of the electronic device shown in FIG. 10 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which will not be repeated here.
The processor 1010 is further configured to control the camera to capture at least one frame of the second image while the shooting preview interface displays at least one frame of the first image; and
to generate the target video based on the at least one frame of the first image and the at least one frame of the second image.
In this embodiment, the user does not need to use a post-processing tool to process the images captured by the camera, which reduces the complexity of producing special-effect videos and makes their production more convenient.
The processor 1010 is further configured to perform image segmentation processing on the at least one frame of the first image to obtain at least one frame of the target background image, and to perform image segmentation processing on the at least one frame of the second image to obtain at least one frame of the target foreground image; and
to perform image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image to obtain the target video.
In this embodiment, the first image and the second image are segmented to obtain the target background image and the target foreground image, respectively; the two are then fused to obtain a target video with special effects.
The display unit 1006 is further configured to display the first target background image and the first target foreground image;
the user input unit 1007 is further configured to receive the first input of the user on the first target image; and
the processor 1010 is further configured to perform the first editing processing on the first target image in response to the first input.
In this embodiment, by performing the first editing processing on the first target image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
The display unit 1006 is further configured to display at least one target background thumbnail and at least one target foreground thumbnail;
the user input unit 1007 is further configured to receive the second input of the user on the target thumbnail; and
the processor 1010 is further configured to perform the second editing processing on the image corresponding to the target thumbnail in response to the second input.
In this embodiment, by performing the second editing processing on the target thumbnail, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
The user input unit 1007 is further configured to receive the third input of the user when the third display area displays the second target image in the at least one frame of the first image and the fourth display area displays the target foreground preview image; and
the processor 1010 is further configured to obtain the first video image in response to the third input.
In this embodiment, when the third input of the user is received, image fusion processing is performed on the second target image and the captured target foreground preview image to obtain the first video image, and a target video with special effects is then generated.
The display unit 1006 is further configured to display the third target image in the at least one frame of the first image in the third display area in response to the third input.
In this embodiment, the third target image is displayed in the third display area; further, image fusion processing is performed on the third target image and the captured target foreground preview image to generate a target video with special effects.
The display unit 1006 is further configured to display the first video image;
the user input unit 1007 is further configured to receive the fourth input of the user; and
the processor 1010 is further configured to perform the third editing processing on the first video image in response to the fourth input.
In this embodiment, by performing the third editing processing on the first video image, the display effect of the target video can be adjusted to obtain a video that satisfies the user.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also called a touch screen and may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here.
The memory 1009 can be used to store software programs as well as various data. It may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store the operating system and the application programs or instructions required by at least one function (such as a sound playing function or an image playing function). In addition, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, and application programs, while the modem processor mainly processes wireless communication signals, such as a baseband processor. It can be understood that the modem processor may also not be integrated into the processor 1010.
An embodiment of the present application further provides a readable storage medium storing a program or instructions; when executed by a processor, the program or instructions implement each process of the above video shooting method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor; the processor is configured to run a program or instructions to implement each process of the above video shooting method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a chip system, or a system-on-a-chip.
An embodiment of the present application provides a computer program product stored in a storage medium; the program product is executed by at least one processor to implement each process of the above video shooting method embodiment and achieve the same technical effects, which are not repeated here to avoid repetition.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent in the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element. In addition, it should be pointed out that the scope of the methods and apparatuses in the implementations of the present application is not limited to performing functions in the order shown or discussed; it may also include performing functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Inspired by the present application, a person of ordinary skill in the art can make many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (20)

  1. A video shooting method, comprising:
    controlling a camera to capture at least one frame of a second image while a shooting preview interface displays at least one frame of a first image; and
    generating a target video based on the at least one frame of the first image and the at least one frame of the second image.
  2. The method according to claim 1, wherein the shooting preview interface comprises a first display area and a second display area, the first display area is used to display the at least one frame of the first image, and the second display area is used to display the at least one frame of the second image captured by the camera.
  3. The method according to claim 1, wherein generating the target video based on the at least one frame of the first image and the at least one frame of the second image comprises:
    performing image segmentation processing on the at least one frame of the first image to obtain at least one frame of a target background image, and performing image segmentation processing on the at least one frame of the second image to obtain at least one frame of a target foreground image; and
    performing image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image to obtain the target video.
  4. The method according to claim 3, wherein before performing the image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image, the method further comprises:
    displaying a first target background image and a first target foreground image, the first target background image being one frame of the at least one frame of the target background image, and the first target foreground image being one frame of the at least one frame of the target foreground image;
    receiving a first input of a user on a first target image, the first target image comprising at least one of the following: the first target background image, the first target foreground image; and
    in response to the first input, performing first editing processing on the first target image.
  5. The method according to claim 3, wherein before performing the image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image, the method further comprises:
    displaying at least one target background thumbnail and at least one target foreground thumbnail, the at least one target background thumbnail being a thumbnail of the at least one frame of the target background image, and the at least one target foreground thumbnail being a thumbnail of the at least one frame of the target foreground image;
    receiving a second input of the user on a target thumbnail, the target thumbnail comprising at least one of the following: at least one of the target background thumbnails, at least one of the target foreground thumbnails; and
    in response to the second input, performing second editing processing on the image corresponding to the target thumbnail.
  6. The method according to claim 1, wherein the shooting preview interface comprises a third display area and a fourth display area, the third display area is used to display the at least one frame of the first image, the first image is a background image, and the fourth display area is used to display at least one frame of a foreground preview image captured by the camera.
  7. The method according to claim 6, wherein the method further comprises:
    receiving a third input of the user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image; and
    in response to the third input, obtaining a first video image, the target video comprising the first video image.
  8. The method according to claim 7, wherein after receiving the third input of the user, the method further comprises:
    in response to the third input, displaying a third target image in the at least one frame of the first image in the third display area.
  9. The method according to claim 7, wherein after obtaining the first video image, the method further comprises:
    displaying the first video image;
    receiving a fourth input of the user; and
    in response to the fourth input, performing third editing processing on the first video image.
  10. A video shooting apparatus, comprising:
    a control module, configured to control a camera to capture at least one frame of a second image while a shooting preview interface displays at least one frame of a first image; and
    a generation module, configured to generate a target video based on the at least one frame of the first image and the at least one frame of the second image.
  11. The video shooting apparatus according to claim 10, wherein the generation module is specifically configured to:
    perform image segmentation processing on the at least one frame of the first image to obtain at least one frame of a target background image, and perform image segmentation processing on the at least one frame of the second image to obtain at least one frame of a target foreground image; and
    perform image fusion processing on the at least one frame of the target background image and the at least one frame of the target foreground image to obtain the target video.
  12. The video shooting apparatus according to claim 11, wherein the video shooting apparatus further comprises:
    a first display module, configured to display a first target background image and a first target foreground image, the first target background image being one frame of the at least one frame of the target background image, and the first target foreground image being one frame of the at least one frame of the target foreground image;
    a first receiving module, configured to receive a first input of a user on a first target image, the first target image comprising at least one of the following: the first target background image, the first target foreground image; and
    a first processing module, configured to perform first editing processing on the first target image in response to the first input.
  13. The video shooting apparatus according to claim 11, wherein the video shooting apparatus further comprises:
    a second display module, configured to display at least one target background thumbnail and at least one target foreground thumbnail, the at least one target background thumbnail being a thumbnail of the at least one frame of the target background image, and the at least one target foreground thumbnail being a thumbnail of the at least one frame of the target foreground image;
    a second receiving module, configured to receive a second input of the user on a target thumbnail, the target thumbnail comprising at least one of the following: at least one of the target background thumbnails, at least one of the target foreground thumbnails; and
    a second processing module, configured to perform second editing processing on the image corresponding to the target thumbnail in response to the second input.
  14. The video shooting apparatus according to claim 10, wherein the video shooting apparatus further comprises:
    a third receiving module, configured to receive a third input of the user when the third display area displays a second target image in the at least one frame of the first image and the fourth display area displays a target foreground preview image; and
    a response module, configured to obtain a first video image in response to the third input, the target video comprising the first video image.
  15. The video shooting apparatus according to claim 14, wherein the video shooting apparatus further comprises:
    a third display module, configured to display a third target image in the at least one frame of the first image in the third display area in response to the third input.
  16. The video shooting apparatus according to claim 15, wherein the video shooting apparatus further comprises:
    a fourth display module, configured to display the first video image;
    a fourth receiving module, configured to receive a fourth input of the user; and
    a third processing module, configured to perform third editing processing on the first video image in response to the fourth input.
  17. An electronic device, comprising a processor and a memory, the memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video shooting method according to any one of claims 1-9.
  18. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the video shooting method according to any one of claims 1-9.
  19. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the steps of the video shooting method according to any one of claims 1-9.
  20. A computer program product, wherein the program product is stored in a non-volatile storage medium and is executed by at least one processor to implement the steps of the video shooting method according to any one of claims 1-9.
PCT/CN2022/133206 2021-11-26 2022-11-21 Video shooting method and apparatus, electronic device, and storage medium WO2023093669A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111421290.3A CN114125297B (zh) 2021-11-26 2021-11-26 Video shooting method and apparatus, electronic device, and storage medium
CN202111421290.3 2021-11-26

Publications (1)

Publication Number Publication Date
WO2023093669A1 true WO2023093669A1 (zh) 2023-06-01

Family

ID=80369974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133206 WO2023093669A1 (zh) 2021-11-26 2022-11-21 Video shooting method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114125297B (zh)
WO (1) WO2023093669A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125297B (zh) * 2021-11-26 2024-04-09 Vivo Mobile Communication Co., Ltd. Video shooting method and apparatus, electronic device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055834A (zh) * 2009-10-30 2011-05-11 TCL Corporation Dual-camera photographing method for a mobile terminal
CN103856617A (zh) * 2012-12-03 2014-06-11 Lenovo (Beijing) Co., Ltd. Photographing method and user terminal
CN104349065A (zh) * 2014-10-29 2015-02-11 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Picture shooting method and apparatus, and smart terminal
CN107767430A (zh) * 2017-09-21 2018-03-06 Nubia Technology Co., Ltd. Shooting processing method, terminal, and computer-readable storage medium
CN109413330A (zh) * 2018-11-07 2019-03-01 深圳市博纳思信息技术有限公司 Intelligent background replacement method for ID photos
CN111405199A (zh) * 2020-03-27 2020-07-10 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image shooting method and electronic device
CN112511741A (zh) * 2020-11-25 2021-03-16 Nubia Technology Co., Ltd. Image processing method, mobile terminal, and computer storage medium
CN114125297A (zh) * 2021-11-26 2022-03-01 Vivo Mobile Communication Co., Ltd. Video shooting method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN114125297B (zh) 2024-04-09
CN114125297A (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
US20120249575A1 (en) Display device for displaying related digital images
US10048858B2 (en) Method and apparatus for swipe shift photo browsing
WO2016106997A1 (zh) Screenshot method and apparatus, and mobile terminal
CN112714253B (zh) Video recording method and apparatus, electronic device, and readable storage medium
WO2023174223A1 (zh) Video recording method and apparatus, and electronic device
WO2023151611A1 (zh) Video recording method and apparatus, and electronic device
WO2023143531A1 (zh) Shooting method and apparatus, and electronic device
WO2023134583A1 (zh) Video recording method and apparatus, and electronic device
WO2023151609A1 (zh) Time-lapse video recording method and apparatus, and electronic device
CN112672061B (zh) Video shooting method and apparatus, electronic device, and medium
WO2023093669A1 (zh) Video shooting method and apparatus, electronic device, and storage medium
CN113259743A (zh) Video playback method and apparatus, and electronic device
WO2024061134A1 (zh) Shooting method and apparatus, electronic device, and medium
WO2024022349A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2023143529A1 (zh) Shooting method and apparatus, and electronic device
WO2023103949A1 (zh) Video processing method and apparatus, electronic device, and medium
WO2022247766A1 (zh) Image processing method and apparatus, and electronic device
CN113852757B (zh) Video processing method, apparatus, device, and storage medium
CN115037874A (zh) Photographing method and apparatus, and electronic device
CN114025237A (zh) Video generation method and apparatus, and electronic device
CN111757177A (zh) Video cropping method and apparatus
CN114157810B (zh) Shooting method and apparatus, electronic device, and medium
CN114866694A (zh) Shooting method and shooting apparatus
CN116847187A (zh) Shooting method and apparatus, electronic device, and storage medium
CN117395462A (zh) Media content generation method and apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897752

Country of ref document: EP

Kind code of ref document: A1